Dynamic subgrid scale model of large eddy simulation of cross bundle flows
International Nuclear Information System (INIS)
Hassan, Y.A.; Barsamian, H.R.
1996-01-01
The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization.
A Lagrangian dynamic subgrid-scale model of turbulence
Meneveau, C.; Lund, T. S.; Cabot, W.
1994-01-01
A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.
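The Lagrangian averaging described above, advanced with first-order Euler integration and linear interpolation, can be sketched in one dimension. The fixed relaxation time `T` below is an assumption for simplicity; in the paper the timescale is itself computed from the Lagrangian-averaged quantities.

```python
import numpy as np

def interp_periodic(f, xq, h):
    # linear interpolation of periodic samples f (grid spacing h) at points xq
    n = f.size
    s = (xq / h) % n
    i0 = np.floor(s).astype(int)
    w = s - i0
    i1 = (i0 + 1) % n
    return (1.0 - w) * f[i0] + w * f[i1]

def lagrangian_update(I_prev, source, u, x, h, dt, T):
    """One first-order Euler step of a Lagrangian-averaged quantity I:
    relax toward `source` while following pathlines (sketch; T fixed here)."""
    eps = (dt / T) / (1.0 + dt / T)          # relaxation weight
    upstream = interp_periodic(I_prev, x - u * dt, h)  # value at pathline origin
    return eps * source + (1.0 - eps) * upstream
```

In the full model this update is applied to both Germano-identity contractions, I_LM and I_MM, and the coefficient is their ratio, which keeps the model purely dissipative.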
Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD
Agostinelli, Giulia; Baglietto, Emilio
2017-11-01
The development of first-principle-based constitutive equations for the Eulerian-Eulerian CFD modeling of annular flow is a major priority to extend the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film, and their deposition. Here we focus first on the aspect of deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. Although many integral correlations for deposition have been proposed for lumped-parameter methods, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, therefore negating the scope of CFD application. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.
Toy, M. D.; Olson, J.; Kenyon, J.; Smirnova, T. G.; Brown, J. M.
2017-12-01
The accuracy of wind forecasts in numerical weather prediction (NWP) models is improved when the drag forces imparted on atmospheric flow by subgrid-scale orography are included. Without such parameterizations, only the terrain resolved by the model grid, along with the small-scale obstacles parameterized by the roughness lengths, can have an effect on the flow. This neglects the impacts of subgrid-scale terrain variations, which typically leads to wind speeds that are too strong. Using statistical information about the subgrid-scale orography, such as the mean and variance of the topographic height within a grid cell, the drag forces due to flow blocking, gravity wave drag, and turbulent form drag are estimated and distributed vertically throughout the grid cell column. We recently implemented the small-scale gravity wave drag parameterization of Steeneveld et al. (2008) and Tsiringakis et al. (2017) for stable planetary boundary layers, and the turbulent form drag parameterization of Beljaars et al. (2004) in the High-Resolution Rapid Refresh (HRRR) NWP model developed at the National Oceanic and Atmospheric Administration (NOAA). As a result, a high surface wind speed bias in the model has been reduced, and a small improvement to the maintenance of stable layers has also been found. We present the results of experiments with the subgrid-scale orographic drag parameterization for the regional HRRR model, as well as for a global model in development at NOAA, showing the direct and indirect impacts.
Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows
Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel
2017-11-01
We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
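An elliptic differential filter of the kind mentioned above can be sketched in one dimension: solve (1 - alpha^2 d^2/dx^2) ubar = u, here spectrally on a periodic domain. The parameter `alpha` plays the role of the nominal filter width, which the paper determines dynamically; fixing it here is an assumption for illustration.

```python
import numpy as np

def differential_filter(u, h, alpha):
    """Apply the elliptic differential filter (1 - alpha^2 d2/dx2) ubar = u
    to a 1-D periodic signal via FFT; u - ubar then estimates the
    subgrid-scale velocity (sketch, not the paper's full model)."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)   # angular wavenumbers
    u_hat = np.fft.fft(u)
    ubar = np.real(np.fft.ifft(u_hat / (1.0 + (alpha * k) ** 2)))
    return ubar
```

Each Fourier mode of wavenumber k is attenuated by 1/(1 + alpha^2 k^2), so the difference u - ubar is dominated by the smallest resolved scales, which is what makes it a usable surrogate for the subgrid velocity seen by inertial particles.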
International Nuclear Information System (INIS)
Barsamian, H.R.; Hassan, Y.A.
1996-01-01
Turbulence is one of the most commonly occurring phenomena of engineering interest in the field of fluid mechanics. Since most flows are turbulent, there is a significant payoff for improved predictive models of turbulence. One area of concern is the turbulent buffeting forces experienced by the tubes in steam generators of nuclear power plants. Although the Navier-Stokes equations are able to describe turbulent flow fields, the large number of scales of turbulence limit practical flow field calculations with current computing power. The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (Smagorinsky, 1963) (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization.
Simulations of mixing in Inertial Confinement Fusion with front tracking and sub-grid scale models
Rana, Verinder; Lim, Hyunkyung; Melvin, Jeremy; Cheng, Baolian; Glimm, James; Sharp, David
2015-11-01
We present two related results. The first discusses the Richtmyer-Meshkov instability (RMI) and Rayleigh-Taylor instability (RTI) and their evolution in Inertial Confinement Fusion simulations. We show the evolution of the RMI to the late-time RTI under transport effects and tracking. The sub-grid scale models help capture the interaction of turbulence with diffusive processes. The second assesses the effects of concentration on the physics model and examines the mixing properties in the low Reynolds number hot spot. We discuss the effect of concentration on the Schmidt number. The simulation results are produced using the University of Chicago code FLASH and Stony Brook University's front tracking algorithm.
Large eddy simulation of new subgrid scale model for three-dimensional bundle flows
International Nuclear Information System (INIS)
Barsamian, H.R.; Hassan, Y.A.
2004-01-01
Fluid-flow-induced vibrations within heat exchangers, which have led to increased inefficiencies and power plant shutdowns, are of great concern due to tube fretting-wear or fatigue failures. Historically, experimental analysis encountered scaling-law and measurement-accuracy problems, at considerable effort and expense. However, supercomputers and accurate numerical methods have provided reliable results and a substantial decrease in cost. In this investigation Large Eddy Simulation has been successfully used to simulate turbulent flow by the numerical solution of the incompressible, isothermal, single-phase Navier-Stokes equations. The eddy viscosity model and a new subgrid scale model have been utilized to model the smaller eddies in the flow domain. A triangular-array flow field was considered; numerical simulations were performed in two- and three-dimensional fields and compared to experimental findings. Results show good agreement of the numerical findings with the experimental data, and solutions obtained with the new subgrid scale model represent better energy dissipation for the smaller eddies. (author)
A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows
International Nuclear Information System (INIS)
Singh, Satbir; You, Donghyun
2013-01-01
Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► Overcomes some of the difficulties associated with eddy-viscosity closures. ► Does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming a scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations.
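One consistent way to write the decomposition described in the abstract (the notation is assumed here, not copied from the paper) is, with an overbar denoting the grid filter:

```latex
\tau_{ij} \;=\;
\underbrace{\overline{\bar{u}_i \bar{u}_j} - \bar{\bar{u}}_i \bar{\bar{u}}_j}_{L^{m}_{ij}\ \text{(modified Leonard, computed explicitly)}}
\;+\;
\underbrace{C_{ij} + R_{ij}}_{\text{modeled}},
\qquad
C_{ij} + R_{ij} \;\approx\; -2\, C_g\, \bar{\Delta}^2 \lvert \bar{S} \rvert\, \bar{S}_{ij},
```

where the single global coefficient \(C_g\) is obtained from the stated balance between the volume-integrated subgrid-scale dissipation and the viscous dissipation, so no local averaging or clipping is needed.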
Energy Technology Data Exchange (ETDEWEB)
Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)
2016-12-09
In order to shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if there are multiple stationary restrictions in a modelling procedure, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion.
Rotating Turbulent Flow Simulation with LES and Vreman Subgrid-Scale Models in Complex Geometries
Directory of Open Access Journals (Sweden)
Tao Guo
2014-07-01
The large eddy simulation (LES) method based on the Vreman subgrid-scale model and the SIMPLEC algorithm was applied to accurately capture the flow characteristics in the Francis turbine passage under the small-opening condition. The proposed methodology is effective for understanding the flow structure, and it overcomes the excessive dissipation of the eddy-viscosity model. Distributions of pressure, velocity, and vorticity, as well as some special flow structures in the guide-vane near-wall zones and the blade passage, were obtained. The results show that the tangential velocity component of the fluid is dominant under the small-opening condition. This situation aggravates the impact between the wake vortices shed from the guide vanes. The critical influence on the balance of the unit of the spiral vortex in the blade passage and of the nonuniform flow around the guide vanes, combined with the transmission of stress waves, has been confirmed.
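For reference, the Vreman (2004) eddy viscosity used by the LES method above takes the form

```latex
\nu_e = c\,\sqrt{\frac{B_\beta}{\alpha_{ij}\alpha_{ij}}},
\qquad
\alpha_{ij} = \frac{\partial \bar{u}_j}{\partial x_i},
\qquad
\beta_{ij} = \Delta_m^2\,\alpha_{mi}\alpha_{mj},
\qquad
B_\beta = \beta_{11}\beta_{22} - \beta_{12}^2
        + \beta_{11}\beta_{33} - \beta_{13}^2
        + \beta_{22}\beta_{33} - \beta_{23}^2,
```

with \(\nu_e\) set to zero when \(\alpha_{ij}\alpha_{ij} = 0\). Because \(B_\beta\) vanishes identically for laminar shear and near-wall flows, the model needs no damping functions or clipping, which is why it is attractive for the transitional, strongly sheared passages studied here.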
International Nuclear Information System (INIS)
Vold, Erik L.; Scannapieco, Tony J.
2007-01-01
A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time-dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved-scale simulations with the volume fractions (cf, af mix). In unresolved-scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i (u_i - u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved-scale simulations and for unresolved-scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
Study of subgrid-scale velocity models for reacting and nonreacting flows
Langella, I.; Doan, N. A. K.; Swaminathan, N.; Pope, S. B.
2018-05-01
A study is conducted to identify advantages and limitations of existing large-eddy simulation (LES) closures for the subgrid-scale (SGS) kinetic energy using a database of direct numerical simulations (DNS). The analysis is conducted for both reacting and nonreacting flows, different turbulence conditions, and various filter sizes. A model, based on dissipation and diffusion of momentum (LD-D model), is proposed in this paper based on the observed behavior of four existing models. Our model shows the best overall agreement with DNS statistics. Two main investigations are conducted for both reacting and nonreacting flows: (i) an investigation of the robustness of the model constants, showing that commonly used constants lead to a severe underestimation of the SGS kinetic energy and highlighting their dependence on Reynolds number and filter size; and (ii) an investigation of the statistical behavior of the SGS closures, which suggests that the dissipation of momentum is the key parameter to be considered in such closures and that the dilatation effect is important and must be captured correctly in reacting flows. Additional properties of SGS kinetic energy modeling are identified and discussed.
Analysis and modeling of subgrid scalar mixing using numerical data
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulation (DNS) of passive scalar mixing in isotropic turbulence is used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to the interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.
Recursive renormalization group theory based subgrid modeling
Zhou, YE
1991-01-01
This work addresses advancing the knowledge and understanding of turbulence theory. Specific problems include studies of subgrid models to understand the effects of unresolved small-scale dynamics on the large-scale motion which, if successful, might substantially reduce the number of degrees of freedom that must be computed in turbulence simulations.
Canuto, V. M.
1994-01-01
The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ~ 10^8 for the planetary boundary layer and Re ~ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the number of spatial grid points N ~ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
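The grid-point estimate quoted above is easy to make concrete; the numbers follow directly from N ~ Re^(9/4) as stated in the abstract:

```python
# DNS grid-point estimate N ~ Re^(9/4) for the two regimes quoted above
for name, re in [("planetary boundary layer", 1e8), ("solar interior", 1e14)]:
    n_points = re ** (9.0 / 4.0)
    print(f"{name}: Re ~ {re:.0e} -> N ~ {n_points:.1e} grid points")
# -> ~1e18 points for the boundary layer, ~3.2e31 for the solar interior
```

Even the smaller estimate exceeds the memory of any current machine by many orders of magnitude, which is the quantitative basis for the ensemble-average and LES+SGS alternatives the abstract discusses.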
Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H
2014-07-01
In the present study, we performed large eddy simulation (LES) of axisymmetric, and 75% stenosed, eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost-point immersed boundary method (IBM) (Mark and van Wachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries, is used for simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the open-source field operation and manipulation (OpenFOAM) solver ("OpenFOAM," http://www.openfoam.org/) and the results are in line with those obtained with WenoHemo.
International Nuclear Information System (INIS)
Inagaki, Masahide; Abe, Ken-ichi
2017-01-01
Highlights: • An anisotropy-resolving subgrid-scale model, covering a wide range of grid resolutions, is improved. • The new model enhances its applicability to flows in the laminar-turbulent transition region. • A mixed-timescale subgrid-scale model is used as the eddy viscosity model. • The proposed model successfully predicts the channel flows at transitional Reynolds numbers. • The influence of the definition of the grid-filter width is also investigated. - Abstract: Some types of mixed subgrid-scale (SGS) models combining an isotropic eddy-viscosity model and a scale-similarity model can be used to effectively improve the accuracy of large eddy simulation (LES) in predicting wall turbulence. Abe (2013) has recently proposed a stabilized mixed model that maintains its computational stability through a unique procedure that prevents the energy transfer between the grid-scale (GS) and SGS components induced by the scale-similarity term. At the same time, since this model can successfully predict the anisotropy of the SGS stress, the predictive performance, particularly at coarse grid resolutions, is remarkably improved in comparison with other mixed models. However, since the stabilized anisotropy-resolving SGS model includes a transport equation of the SGS turbulence energy, k_SGS, containing a production term proportional to the square root of k_SGS, its applicability to flows with both laminar and turbulent regions is not so high. This is because such a production term causes k_SGS to self-reproduce. Consequently, the laminar-turbulent transition region predicted by this model depends on the inflow or initial condition of k_SGS. To resolve these issues, in the present study, the mixed-timescale (MTS) SGS model proposed by Inagaki et al. (2005) is introduced into the stabilized mixed model as the isotropic eddy-viscosity part and the production term in the k_SGS transport equation. In the MTS model, the SGS turbulence energy, k_es, estimated by
Statistical dynamical subgrid-scale parameterizations for geophysical flows
International Nuclear Information System (INIS)
O'Kane, T J; Frederiksen, J S
2008-01-01
Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.
Ida, Masato; Taniguchi, Nobuyuki
2003-09-01
This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian-filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed, but the applied filtering process alone, can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.
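A plausible reading of the mechanism, via the standard second-moment expansion for a Gaussian filter of width \(\Delta\) (the coefficient and the channel-flow specialization are assumptions here, not taken from the paper), is

```latex
\overline{u_i u_j} \;\approx\; \bar{u}_i\,\bar{u}_j
+ \frac{\Delta^2}{12}\,
\frac{\partial \bar{u}_i}{\partial x_k}\,
\frac{\partial \bar{u}_j}{\partial x_k},
```

so that substituting into the convective term produces tensor-diffusivity contributions with cross derivatives of the form \((\Delta^2/12)\,\partial_y \bar{u}\;\partial^2_{xy}\bar{u}\). Such terms are not sign-definite and can act locally as negative diffusion, which is the kind of numerically unstable term the abstract attributes to the filtering alone.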
Di Sarli, Valeria; Di Benedetto, Almerinda; Russo, Gennaro
2010-08-15
In this work, an assessment of different sub-grid scale (sgs) combustion models proposed for large eddy simulation (LES) of steady turbulent premixed combustion (Colin et al., Phys. Fluids 12 (2000) 1843-1863; Flohr and Pitsch, Proc. CTR Summer Program, 2000, pp. 61-82; Kim and Menon, Combust. Sci. Technol. 160 (2000) 119-150; Charlette et al., Combust. Flame 131 (2002) 159-180; Pitsch and Duchamp de Lageneste, Proc. Combust. Inst. 29 (2002) 2001-2008) was performed to identify the model that best predicts unsteady flame propagation in gas explosions. Numerical results were compared to the experimental data by Patel et al. (Proc. Combust. Inst. 29 (2002) 1849-1854) for a premixed deflagrating flame in a vented chamber in the presence of three sequential obstacles. It is found that all sgs combustion models are able to reproduce qualitatively the experiment in terms of the steps of flame acceleration and deceleration around each obstacle, and the shape of the propagating flame. Without adjusting any constants or parameters, the sgs model by Charlette et al. also provides satisfactory quantitative predictions for flame speed and pressure peak. Conversely, the sgs combustion models other than Charlette et al. give correct predictions only after an ad hoc tuning of constants and parameters.
2018-02-15
conservation equations. The closure problem hinges on the evaluation of the filtered chemical production rates. In MRA/MSR, simultaneous, constrained large-eddy simulations at three different mesh levels serve as a means of connecting reactive scalar information at different... functions of a locally normalized subgrid Damköhler number (a measure of the distribution of inverse chemical time scales in the neighborhood of a
Energy Technology Data Exchange (ETDEWEB)
Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)
2015-12-14
The goals of this project were to (1) assess and quantify the sensitivity and scale-dependency of unresolved subgrid-scale mixing processes in NCAR’s Community Earth System Model (CESM), and (2) to improve the accuracy and skill of forthcoming CESM configurations on modern cubed-sphere and variable-resolution computational grids. The research thereby contributed to the description and quantification of uncertainties in CESM’s dynamical cores and their physics-dynamics interactions.
Thiry, Olivier; Winckelmans, Grégoire
2016-02-01
In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term, which accounts for interactions between resolved and unresolved scales, and the Reynolds term, which accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity-type SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models, which better conform to the physics of the true SGS stress while remaining stable, are thus desirable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term, in terms of dissipation spectra and in terms of the probability density function (pdf) of dissipation in physical space: positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross-term part built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also provides backscatter. It also has an eddy-viscosity multiscale model part that
Large-eddy simulation with accurate implicit subgrid-scale diffusion
B. Koren (Barry); C. Beets
1996-01-01
A method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is
International Nuclear Information System (INIS)
Premnath, Kannan N; Pattison, Martin J; Banerjee, Sanjoy
2013-01-01
The lattice Boltzmann method (LBM) is a kinetics-based numerical scheme for the simulation of fluid flow. While the approach has attracted considerable attention during the last two decades, there is a need for systematic investigation of its applicability to complex canonical turbulent flow problems of engineering interest, where the numerical properties of the underlying scheme play an important role in obtaining accurate solutions. In this paper, we discuss and evaluate an LBM based on a multiblock approach for efficient large eddy simulation of three-dimensional external flow past a circular cylinder in the transitional regime, which is characterized by the presence of multiple scales. For enhanced numerical stability at higher Reynolds numbers, a multiple-relaxation-time formulation is considered. The effect of subgrid scales is represented by means of a Smagorinsky eddy-viscosity model, in which the model coefficient is computed locally by means of a dynamic procedure, providing a better representation of the flow physics with reduced empiricism. Simulations are performed for a Reynolds number of 3900, based on the free-stream velocity and cylinder diameter, for which prior data are available for comparison. The presence of a laminar boundary layer that separates into a pair of shear layers evolving into turbulent wakes poses a particular challenge for numerical methods under this condition. The relatively low numerical dissipation introduced by the inherently parallel and second-order accurate LBM is an important computational asset in this regard. Computations using five different grid levels, with the various blocks suitably aligned to resolve multiscale flow features, show that the structure of the recirculation region is well reproduced and that the statistics of the mean flow and turbulent fluctuations are in satisfactory agreement with prior data. (paper)
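The dynamic procedure used here to compute the Smagorinsky coefficient locally can be sketched in generic form. This is a minimal illustration of the Germano-identity/least-squares (Lilly) approach on a synthetic 2D periodic field, not the authors' LBM implementation; the 3x3 box test filter and the test field are assumptions for demonstration only:

```python
import numpy as np

def box_test_filter(f):
    """3x3 box average on a periodic grid (a simple stand-in for a test filter)."""
    out = np.zeros_like(f)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += np.roll(np.roll(f, di, axis=0), dj, axis=1)
    return out / 9.0

def strain(u, v, dx):
    """Strain-rate tensor components on a 2D periodic grid (central differences)."""
    dudx = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)
    dudy = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)
    dvdx = (np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
    dvdy = (np.roll(v, -1, 1) - np.roll(v, 1, 1)) / (2 * dx)
    return dudx, 0.5 * (dudy + dvdx), dvdy

def dynamic_smagorinsky_coeff(u, v, dx, alpha=2.0):
    """Least-squares (Lilly) estimate of Cs^2 from the Germano identity."""
    S11, S12, S22 = strain(u, v, dx)
    Smag = np.sqrt(2 * (S11**2 + 2 * S12**2 + S22**2))
    uf, vf = box_test_filter(u), box_test_filter(v)
    # Resolved (Leonard-type) stress between the grid and test filter levels
    L11 = box_test_filter(u * u) - uf * uf
    L12 = box_test_filter(u * v) - uf * vf
    L22 = box_test_filter(v * v) - vf * vf
    T11, T12, T22 = strain(uf, vf, dx)
    Tmag = np.sqrt(2 * (T11**2 + 2 * T12**2 + T22**2))
    M11 = 2 * dx**2 * (box_test_filter(Smag * S11) - alpha**2 * Tmag * T11)
    M12 = 2 * dx**2 * (box_test_filter(Smag * S12) - alpha**2 * Tmag * T12)
    M22 = 2 * dx**2 * (box_test_filter(Smag * S22) - alpha**2 * Tmag * T22)
    num = np.mean(L11 * M11 + 2 * L12 * M12 + L22 * M22)
    den = np.mean(M11**2 + 2 * M12**2 + M22**2)
    return max(num / den, 0.0)   # clip negative values for stability

# Synthetic multi-mode velocity field on a periodic grid
n = 64
dx = 2 * np.pi / n
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
rng = np.random.default_rng(0)
u = np.sin(X) * np.cos(Y) + 0.3 * np.sin(3 * Y) + 0.05 * rng.standard_normal((n, n))
v = -np.cos(X) * np.sin(Y) + 0.3 * np.cos(3 * X)
cs2 = dynamic_smagorinsky_coeff(u, v, dx)
print(f"dynamic Cs^2 = {cs2:.4f}")
```

In practice the averaging in `num`/`den` is taken locally or along homogeneous directions rather than over the whole domain, and the clipping mirrors the stability measure discussed in the abstracts above.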
Energy Technology Data Exchange (ETDEWEB)
Premnath, Kannan N [Department of Mechanical Engineering, University of Colorado Denver, 1200 Larimer Street, Denver, CO 80217 (United States); Pattison, Martin J [HyPerComp Inc., 2629 Townsgate Road, Suite 105, Westlake Village, CA 91361 (United States); Banerjee, Sanjoy, E-mail: kannan.premnath@ucdenver.edu, E-mail: kannan.np@gmail.com [Department of Chemical Engineering, City College of New York, City University of New York, New York, NY 10031 (United States)
2013-10-15
High-resolution subgrid models: background, grid generation, and implementation
Sehili, Aissa; Lang, Günther; Lippert, Christoph
2014-04-01
The basic idea of subgrid models is the use of available high-resolution bathymetric data at the subgrid level in computations that are performed on relatively coarse grids, allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented, including an operational forecast model for the water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows detailed boundary fitting at the subgrid level. The computational grid is made of flow-aligned quadrilaterals, including a few triangles where necessary. User-defined grid subdivision at the subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires particular treatment; based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with a comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC-like hardware. The subgrid technique is therefore a promising framework for performing accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
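The core of the subgrid mass balance, integrating high-resolution bathymetry to obtain the water volume of a coarse cell that may be wet, partially wet, or dry, can be sketched as follows. The bathymetry values and subcell size are hypothetical; in the actual scheme of Casulli, this monotone volume-level relation is also inverted inside the implicit solver:

```python
import numpy as np

def cell_wet_volume(eta, zb, sub_area):
    """Water volume (m^3) in one coarse cell from subgrid bathymetry samples zb (m),
    for free-surface elevation eta (m). No drying threshold is needed: the cell is
    dry, partially wet, or fully wet depending on how many subcells eta covers."""
    depth = np.maximum(eta - zb, 0.0)   # per-subcell water depth
    return float(depth.sum() * sub_area)

# Hypothetical coarse cell resolved by a 10x10 subgrid of 5 m x 5 m subcells
rng = np.random.default_rng(1)
zb = rng.uniform(-2.0, 1.0, size=(10, 10))   # bed elevations (m)
V_dry  = cell_wet_volume(-2.5, zb, 25.0)     # surface below the lowest bed point
V_part = cell_wet_volume(0.0, zb, 25.0)      # partially wet cell
V_full = cell_wet_volume(2.0, zb, 25.0)      # fully wet cell
```

Because the volume is a monotone, piecewise-linear function of `eta`, the free surface can be recovered from a known volume by a simple Newton-type iteration, which is what makes the mass balance exact on the coarse grid.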
Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW
DEFF Research Database (Denmark)
Milzow, Christian; Kinzelbach, W.
2010-01-01
To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...
Filtered Mass Density Function for Subgrid Scale Modeling of Turbulent Diffusion Flames
National Research Council Canada - National Science Library
Givi, Peyman
2002-01-01
.... These equations were solved with a new Lagrangian Monte Carlo scheme. The model predictions were compared with results obtained via conventional LES closures and with direct numerical simulation (DNS...
A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence
International Nuclear Information System (INIS)
Chumakov, Sergei
2008-01-01
We perform a direct numerical simulation (DNS) of forced homogeneous isotropic turbulence with a passive scalar that is forced by a mean gradient. The DNS data are used to study the properties of the subgrid-scale flux of a passive scalar in the framework of large eddy simulation (LES), such as alignment trends between the flux and the resolved and subgrid-scale flow structures. It is shown that the direction of the flux is strongly coupled with the subgrid-scale stress axes rather than with resolved flow quantities such as strain, vorticity, or the scalar gradient. We derive an approximate transport equation for the subgrid-scale flux of a scalar and examine the relative importance of the terms in the transport equation. A particular form of LES tensor-viscosity model for the scalar flux, which includes the subgrid-scale stress, is investigated. The effect of different subgrid-scale stress models on the model for the subgrid-scale flux is also studied.
A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence.
Chumakov, Sergei G
2008-09-01
Avissar, Roni; Chen, Fei
1993-01-01
generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
Matsui, H.; Buffett, B. A.
2017-12-01
The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to capture the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced, for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used for convection-driven dynamos in a rotating plane layer and in a spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model based on a spherical harmonic expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is performed by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results obtained by comparison among these simulations, and the role of the small-scale fields in the large-scale dynamics through the SGS terms in the LES.
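The scale-similarity idea, estimating an SGS term from the resolved field by applying the Gaussian filter once more, can be illustrated with a 1D a priori test. This is a generic Cartesian/FFT sketch, not the spherical-harmonic implementation in Calypso; the synthetic fields and filter width are assumptions:

```python
import numpy as np

def gaussian_filter_1d(f, delta, dx):
    """Gaussian filter of width delta applied spectrally on a periodic grid
    (transfer function exp(-k^2 delta^2 / 24), the usual LES convention)."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(np.fft.fft(f) * np.exp(-(k * delta) ** 2 / 24.0)))

n, dx = 512, 1.0 / 512
delta = 8 * dx
rng = np.random.default_rng(3)
# Synthetic "DNS" fields: mildly smoothed white noise
u = gaussian_filter_1d(rng.standard_normal(n), 2 * dx, dx)
v = gaussian_filter_1d(rng.standard_normal(n), 2 * dx, dx)

ub = gaussian_filter_1d(u, delta, dx)   # resolved fields
vb = gaussian_filter_1d(v, delta, dx)
tau_true = gaussian_filter_1d(u * v, delta, dx) - ub * vb   # exact SGS stress

# Scale-similarity model: repeat the same operation on the resolved fields only
ubb = gaussian_filter_1d(ub, delta, dx)
vbb = gaussian_filter_1d(vb, delta, dx)
tau_model = gaussian_filter_1d(ub * vb, delta, dx) - ubb * vbb

corr = np.corrcoef(tau_true, tau_model)[0, 1]
print(f"a priori correlation: {corr:.2f}")
```

The dynamic variant then scales `tau_model` by a coefficient determined from the resolved fields, in the same spirit as the dynamic procedure for eddy-viscosity models.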
International Nuclear Information System (INIS)
Sillman, S.; Logan, J.A.; Wofsy, S.C.
1990-01-01
A new approach to modeling regional air chemistry is presented for application to industrialized regions such as the continental US. Rural chemistry and transport are simulated using a coarse grid, while chemistry and transport in urban and power plant plumes are represented by detailed subgrid models. Emissions from urban and power plant sources are processed in generalized plumes where chemistry and dilution proceed for 8-12 hours before mixing with air in a large resolution element. A realistic fraction of pollutants reacts under high-NOx conditions, and NOx is removed significantly before dispersal. Results from this model are compared with results from grid models that do not distinguish plumes and with observational data defining regional ozone distributions. Grid models with coarse resolution are found to artificially disperse NOx over rural areas, thereby overestimating rural levels of both NOx and O3. Regional net ozone production is too high in coarse grid models, because production of O3 is more efficient per molecule of NOx in the low-concentration regime of rural areas than in heavily polluted plumes from major emission sources. Ozone levels simulated by this model are shown to agree with observations in urban plumes and in rural regions. The model accurately reproduces the average regional and peak ozone concentrations observed during a 4-day ozone episode. Computational costs for the model are reduced 25- to 100-fold compared to fine-mesh models
Subgrid-scale turbulence in shock-boundary layer flows
Jammalamadaka, Avinash; Jaberi, Farhad
2015-04-01
Data generated by direct numerical simulation (DNS) of a Mach 2.75 zero-pressure-gradient turbulent boundary layer interacting with shocks of different intensities are used for a priori analysis of subgrid-scale (SGS) turbulence and of the various terms in the compressible filtered Navier-Stokes equations. The numerical method used for the DNS is based on a hybrid scheme that uses a non-dissipative central scheme in the shock-free turbulent regions and a robust monotonicity-preserving scheme in the shock regions. The behavior of the SGS stresses and their components, namely the Leonard, cross, and Reynolds components, is examined in various regions of the flow for different shock intensities and filter widths. The backscatter in various regions of the flow is found to be significant only instantaneously, while the ensemble-averaged statistics indicate no significant backscatter. The budgets of the SGS kinetic energy equation are examined for a better understanding of shock-turbulence interactions at the subgrid level, and also with the aim of providing useful information for one-equation LES models. A term-by-term analysis of the SGS terms in the filtered total energy equation indicates that while each term in this equation is significant by itself, the net contribution of all of them is relatively small. This observation is consistent with our a posteriori analysis.
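The Leonard-cross-Reynolds decomposition examined in such a priori analyses can be reproduced in miniature. This is a hedged 1D sketch with a top-hat filter and synthetic fields (not the DNS data of the paper); because the filter is linear, the decomposition closes exactly:

```python
import numpy as np

def tophat(f, w):
    """Top-hat (box) filter of width w points on a periodic 1D grid."""
    return sum(np.roll(f, s) for s in range(-(w // 2), w // 2 + 1)) / (2 * (w // 2) + 1)

rng = np.random.default_rng(2)
n, w = 512, 9
# Smooth synthetic "DNS" fields: lightly filtered white noise
u = tophat(rng.standard_normal(n), 5)
v = tophat(rng.standard_normal(n), 5)

ub, vb = tophat(u, w), tophat(v, w)       # resolved (filtered) fields
up, vp = u - ub, v - vb                   # subfilter fluctuations

tau = tophat(u * v, w) - ub * vb          # exact SGS stress tau_uv
L = tophat(ub * vb, w) - ub * vb          # Leonard term (resolved-resolved)
C = tophat(ub * vp + up * vb, w)          # cross term (resolved-unresolved)
R = tophat(up * vp, w)                    # Reynolds term (unresolved-unresolved)
```

Substituting u = ub + up into tau shows tau = L + C + R identically, which is the identity the a priori budgets above are built on.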
Modeling lightning-NOx chemistry on a sub-grid scale in a global chemical transport model
Directory of Open Access Journals (Sweden)
A. Gressent
2016-05-01
For the first time, a plume-in-grid approach is implemented in a chemical transport model (CTM) to parameterize the effects of the nonlinear reactions occurring within highly concentrated NOx plumes from lightning NOx emissions (LNOx) in the upper troposphere. It is characterized by a set of parameters including the plume lifetime, the effective reaction rate constant related to NOx–O3 chemical interactions, and the fractions of NOx conversion into HNO3 within the plume. Parameter estimates were made using the Dynamical Simple Model of Atmospheric Chemical Complexity (DSMACC) box model, simple plume dispersion simulations, and the 3-D Meso-NH non-hydrostatic mesoscale atmospheric model. In order to assess the impact of the LNOx plume approach on the NOx and O3 distributions on a large scale, simulations for the year 2006 were performed using the GEOS-Chem global model with a horizontal resolution of 2° × 2.5°. The implementation of the LNOx parameterization implies a large-scale NOx and O3 decrease over regions characterized by strong lightning activity (up to 25 % and 8 %, respectively, over central Africa in July) and a relative increase downwind of LNOx emissions (up to 18 % and 2 % for NOx and O3, respectively, in July). The calculated variability in NOx and O3 mixing ratios around the mean value, according to the known uncertainties in the parameter estimates, is at a maximum over continental tropical regions, with ΔNOx [−33.1, +29.7] ppt and ΔO3 [−1.56, +2.16] ppb in January, and ΔNOx [−14.3, +21] ppt and ΔO3 [−1.18, +1.93] ppb in July, depending mainly on the determination of the diffusion properties of the atmosphere and the initial NO mixing ratio injected by lightning. This approach allows us (i) to reproduce a more realistic lightning NOx chemistry leading to better NOx and O3 distributions on the large scale and (ii) to focus on other improvements to reduce remaining uncertainties from processes
Energy Technology Data Exchange (ETDEWEB)
Bogenschutz, Peter [National Center for Atmospheric Research, Boulder, CO (United States); Moeng, Chin-Hoh [National Center for Atmospheric Research, Boulder, CO (United States)
2015-10-13
The PI’s at the National Center for Atmospheric Research (NCAR), Chin-Hoh Moeng and Peter Bogenschutz, have primarily focused their time on the implementation of the Simplified-Higher Order Turbulence Closure (SHOC; Bogenschutz and Krueger 2013) to the Multi-scale Modeling Framework (MMF) global model and testing of SHOC on deep convective cloud regimes.
Hernandez Perez, Francisco E.; Lee, Bok Jik; Im, Hong G.; Fancello, Alessio; Donini, Andrea; van Oijen, Jeroen A.; de Goey, Philip H.
2017-01-01
Large eddy simulations of a turbulent premixed jet flame in a confined chamber were conducted using the flamelet-generated manifold technique for chemistry tabulation. The configuration is characterized by an off-center nozzle having an inner diameter of 10 mm, supplying a lean methane-air mixture with an equivalence ratio of 0.71 and a mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the manifold via burner-stabilized flamelets, and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed probability density functions. Comparisons between numerical results and measured data show that a considerable improvement in the prediction of temperature is achieved when heat losses are included in the manifold, as compared to the adiabatic one. Additional improvement in the temperature predictions is obtained by incorporating radiative heat losses. Moreover, further enhancements in the LES predictions are achieved by employing SGS models based on transport equations, such as the SGS turbulence kinetic energy equation with dynamic coefficients. While the numerical results display good agreement up to a distance of 4 nozzle diameters downstream of the nozzle exit, the results become less satisfactory farther downstream, suggesting that further improvements in the modeling are required, among which a more accurate model for the SGS variance of the progress variable can be relevant.
Hernandez Perez, Francisco E.
2017-01-05
Hernandez Perez, Francisco E.; Im, Hong G.; Lee, Bok Jik; Fancello, Alessio; Donini, Andrea; van Oijen, Jeroen A.; de Goey, L. Philip H.
2017-11-01
Large eddy simulations (LES) of a turbulent premixed jet flame in a confined chamber are performed employing the flamelet-generated manifold (FGM) method for tabulation of chemical kinetics and thermochemical properties, as well as the OpenFOAM framework for computational fluid dynamics. The burner has been experimentally studied by Lammel et al. (2011) and features an off-center nozzle, feeding a preheated lean methane-air mixture with an equivalence ratio of 0.71 and mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the FGM tabulation via burner-stabilized flamelets and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed filtered density functions. The impact of heat loss inclusion as well as SGS modeling for both the SGS stresses and SGS variance of progress variable on the numerical results is investigated. Comparisons of the LES results against measurements show a significant improvement in the prediction of temperature when heat losses are incorporated into FGM. While further enhancements in the LES results are accomplished by using SGS models based on transported quantities and/or dynamically computed coefficients as compared to the Smagorinsky model, heat loss inclusion is more relevant. This research was sponsored by King Abdullah University of Science and Technology (KAUST) and made use of computational resources at KAUST Supercomputing Laboratory.
On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models
Jan, A.; Painter, S. L.; Coon, E. T.
2017-12-01
Increased availability of high-resolution digital elevation data is enabling process-based hydrologic modeling at finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and their implications for flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model, parameterized by small-scale spatial heterogeneities, for use in hyper-resolution models with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter scale simulations (which act as a benchmark) and against hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method to move efficiently to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the
Directory of Open Access Journals (Sweden)
J.-I. Yano
2012-11-01
A generalized mass-flux formulation is presented, which no longer takes the limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which scale separation is still satisfied, but the fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is obtained by generalizing the mass-flux formulation under the segmentally constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from the increasing resolutions of operational forecast models without invoking a more extensive overhaul of the parameterizations.
The present formulation leads to an analogy between the large-scale atmospheric flow and multi-component flows. This analogy permits, in general, the inclusion of any subgrid-scale variability in the mass-flux parameterization under SCA, including stratiform clouds as well as cold pools in the boundary layer.
An important finding under the present formulation is that the subgrid-scale quantities are advected by the large-scale velocities characteristic of given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flow defined simply as the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. This formulation, as a result, ensures the lateral interaction of subgrid-scale variability across grid boxes, which is missing in current parameterizations based on vertical one-dimensional models, leading to a reduction of the grid-size dependence of its performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation furthermore includes a self-contained description of subgrid-scale momentum transport.
The main purpose of the present paper
Energy Technology Data Exchange (ETDEWEB)
Fang, Le [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Zhu, Ying [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Liu, Yangwei, E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Lu, Lipeng [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China)
2015-10-09
The non-equilibrium property of turbulence is a non-negligible problem in large-eddy simulation but has not yet been systematically considered. The generalization from equilibrium turbulence to non-equilibrium turbulence requires a clear recognition of the non-equilibrium property. As a preliminary step toward this recognition, the present letter defines a typical non-equilibrium process, namely the spectral non-equilibrium process, in homogeneous isotropic turbulence. It is then theoretically investigated by employing the skewness of the grid-scale velocity gradient, which permits the decomposition of the resolved velocity field into an equilibrium part and a time-reversed part. Based on this decomposition, an improved Smagorinsky model is proposed to correct the non-equilibrium behavior of the traditional Smagorinsky model. The present study is expected to shed light on future studies of more general non-equilibrium turbulent flows. - Highlights: • A spectral non-equilibrium process in isotropic turbulence is defined theoretically. • A decomposition method is proposed to divide a non-equilibrium turbulence field. • An improved Smagorinsky model is proposed to correct the non-equilibrium behavior.
Directory of Open Access Journals (Sweden)
Weijian Guo
2015-05-01
Spatial variability plays an important role in nonlinear hydrologic processes. Due to limitations of computational efficiency and data resolution, subgrid variability is usually assumed to be uniform in most grid-based rainfall-runoff models, which leads to scale dependence of model performance. In this paper, the scale effect on the Grid-Xinanjiang model was examined. The bias in the estimates of precipitation, runoff, evapotranspiration, and soil moisture at different grid scales, along with the scale dependence of the effective parameters, highlights the importance of representing subgrid variability well. This paper presents a subgrid parameterization method to incorporate the subgrid variability of the soil storage capacity, which is a key variable that controls runoff generation and partitioning in the Grid-Xinanjiang model. In light of their similar spatial pattern and physical basis, the soil storage capacity is correlated with the topographic index, whose spatial distribution can more readily be measured. A beta distribution is introduced to represent the spatial distribution of the soil storage capacity within the grid. The results derived from the Yanduhe Basin show that the proposed subgrid parameterization method can effectively correct the watershed soil storage capacity curve. Compared to the original Grid-Xinanjiang model, the model performances are quite consistent across the different grid scales when the subgrid variability is incorporated. This subgrid parameterization method reduces the need for recalibration when the Digital Elevation Model (DEM) resolution is changed. Moreover, it improves the potential for application of the distributed model in ungauged basins.
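The beta-distributed storage-capacity idea can be sketched numerically: the subgrid distribution of capacities yields a storage-capacity curve, from which saturation-excess runoff follows. The parameters (Wmax, a, b) and the pure-NumPy beta CDF below are illustrative assumptions, not the calibrated Yanduhe values:

```python
import numpy as np

def beta_cdf(x, a, b, n=20001):
    """Numerical Beta(a, b) CDF on [0, 1] (trapezoidal rule; avoids SciPy)."""
    t = np.linspace(0.0, 1.0, n)
    pdf = t ** (a - 1) * (1.0 - t) ** (b - 1)
    cdf = np.concatenate(([0.0], np.cumsum((pdf[1:] + pdf[:-1]) / 2.0))) * (t[1] - t[0])
    return np.interp(x, t, cdf / cdf[-1])

# Hypothetical grid cell: point storage capacities ~ Wmax * Beta(a, b), in mm
Wmax, a, b = 120.0, 2.0, 3.0
w = np.linspace(0.0, Wmax, 481)
F = beta_cdf(w / Wmax, a, b)   # storage-capacity curve: area fraction with capacity <= w
# Mean storage held when every point is wetted to level w: S(w) = integral of (1 - F)
S = np.concatenate(([0.0], np.cumsum((2.0 - F[1:] - F[:-1]) / 2.0))) * (w[1] - w[0])

def runoff(P, w0):
    """Saturation-excess runoff (mm) for rainfall P on a cell wetted to level w0."""
    w1 = min(w0 + P, Wmax)
    dS = np.interp(w1, w, S) - np.interp(w0, w, S)   # water that can still be stored
    return P - dS

print(runoff(40.0, 60.0))
```

Rainfall landing on the already saturated fraction F(w) runs off immediately, so a wetter cell produces more runoff from the same storm; this is the mechanism the subgrid parameterization preserves across grid scales.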
Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling
DEFF Research Database (Denmark)
Sarlak Chivaee, Hamid
2017-01-01
This paper presents results of a series of numerical simulations studying the aerodynamic characteristics of the low-Reynolds-number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000... the Reynolds number, and the effect is visible even at a relatively low chord Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit...
International Nuclear Information System (INIS)
Wang, B.; Bergstrom, D.J.
2002-01-01
The dynamic two-parameter mixed model (DTPMM) has recently been introduced into large eddy simulation (LES). However, current approaches in the literature are mathematically inconsistent. In this paper, the DTPMM is optimized using a functional variational method. The mathematical inconsistency is removed, and a governing system of two integral equations for the model coefficients of the DTPMM, along with some significant features, is obtained. Coherent structures related to the motion of large vortices are investigated using the λ₂ vortex criterion of Jeong and Hussain (1995). The numerical results agree with the classical wall law of von Karman (1939) and the experimental correlation of Aydin and Leutheusser (1991). (author)
Subgrid models for mass and thermal diffusion in turbulent mixing
Energy Technology Data Exchange (ETDEWEB)
Sharp, David H [Los Alamos National Laboratory; Lim, Hyunkyung [STONY BROOK UNIV; Li, Xiao-Lin [STONY BROOK UNIV; Glimm, James G [STONY BROOK UNIV
2008-01-01
We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion to define large eddy simulations (LES) that replicate the micro features observed in direct numerical simulation (DNS). The Schmidt and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and of the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re = 300, 6000, and 600K. Methodologically, the results are also new. In common with the shock capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through the use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of Front Tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without
Subgrid models for mass and thermal diffusion in turbulent mixing
International Nuclear Information System (INIS)
Lim, H; Yu, Y; Glimm, J; Li, X-L; Sharp, D H
2010-01-01
We propose a new method for the large eddy simulation (LES) of turbulent mixing flows. The method yields convergent probability distribution functions (PDFs) for temperature and concentration and a chemical reaction rate when applied to reshocked Richtmyer-Meshkov (RM) unstable flows. Because such a mesh convergence is an unusual and perhaps original capability for LES of RM flows, we review previous validation studies of the principal components of the algorithm. The components are (i) a front tracking code, FronTier, to control numerical mass diffusion and (ii) dynamic subgrid scale (SGS) models to compensate for unresolved scales in the LES. We also review the relevant code comparison studies. We compare our results to a simple model based on 1D diffusion, taking place in the geometry defined statistically by the interface (the 50% isoconcentration surface between the two fluids). Several conclusions important to physics could be drawn from our study. We model chemical reactions with no closure approximations beyond those in the LES of the fluid variables itself, and as with dynamic SGS models, these closures contain no adjustable parameters. The chemical reaction rate is specified by the joint PDF for temperature and concentration. We observe a bimodal distribution for the PDF and we observe significant dependence on fluid transport parameters.
Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map
AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong
2017-04-01
The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originated from the imposed mean gradient is captured. The sensitivity of the synthetic fields on the input spectra is assessed by
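The filtering approach used in such studies computes SGS quantities directly from a fully resolved field. A minimal 1-D sketch with a periodic top-hat filter (a generic a priori calculation, not the MTLM procedure itself):

```python
import numpy as np

def box_filter(u, half_width):
    """Top-hat filter: moving average over 2*half_width + 1 points,
    with periodic wrap-around handled by np.roll."""
    acc = np.zeros(len(u))
    for k in range(-half_width, half_width + 1):
        acc += np.roll(u, k)
    return acc / (2 * half_width + 1)

def sgs_stress(u, half_width):
    """A priori SGS stress tau = bar(u u) - bar(u) bar(u) for a 1-D field:
    the difference between the filtered square and the square of the filter."""
    return box_filter(u * u, half_width) - box_filter(u, half_width) ** 2
```

For a positive (top-hat) kernel, Jensen's inequality guarantees tau >= 0 pointwise, and tau vanishes for a constant field.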
Sensitivity test of parameterizations of subgrid-scale orographic form drag in the NCAR CESM1
Liang, Yishuang; Wang, Lanning; Zhang, Guang Jun; Wu, Qizhong
2017-05-01
Turbulent drag caused by subgrid orographic form drag has significant effects on the atmosphere. It is represented through parameterization in large-scale numerical prediction models. An indirect parameterization scheme, the Turbulent Mountain Stress scheme (TMS), is currently used in the National Center for Atmospheric Research Community Earth System Model v1.0.4. In this study we test a direct scheme referred to as BBW04 (Beljaars et al. in Q J R Meteorol Soc 130:1327-1347, 10.1256/qj.03.73), which has been used in several short-term weather forecast models and earth system models. Results indicate that both the indirect and direct schemes increase surface wind stress and improve the model's performance in simulating low-level wind speed over complex orography compared to the simulation without subgrid orographic effect. It is shown that the TMS scheme produces a more intense wind speed adjustment, leading to lower wind speed near the surface. The low-level wind speed by the BBW04 scheme agrees better with the ERA-Interim reanalysis and is more sensitive to complex orography as a direct method. Further, the TMS scheme increases the 2-m temperature and planetary boundary layer height over large areas of tropical and subtropical Northern Hemisphere land.
Unsteady Flame Embedding (UFE) Subgrid Model for Turbulent Premixed Combustion Simulations
El-Asrag, Hossam
2010-01-04
We present a formulation of an unsteady subgrid model for premixed combustion in the flamelet regime. Since chemistry occurs at unresolvable scales, it is necessary to introduce a subgrid model that accounts for the multi-scale nature of the problem using the information available on the resolved scales. Most current models are based on the laminar flamelet concept and often neglect unsteady effects. The proposed model's primary objective is to encompass many of the unsteady features and history effects of flame/turbulence interactions. In addition, it provides a dynamic and accurate approach for computing the subgrid flame propagation velocity. The unsteady flame embedding (UFE) approach treats the flame as an ensemble of locally one-dimensional flames. A set of elemental one-dimensional flames is used to describe the turbulent flame structure at the subgrid level. The stretched flame calculations are performed on the stagnation line of a strained flame using the unsteady filtered strain rate computed from the resolved grid. The flame iso-surface is tracked using an accurate high-order level set formulation to propagate the flame interface at the coarse resolution with minimum numerical diffusion. In this paper the solver and the model components are introduced and used to investigate two unsteady flames with different Lewis numbers in the thin reaction zone regime. The results show that the UFE model captures the unsteady flame-turbulence interactions and the flame propagation speed reasonably well. A higher propagation speed is observed for the lower-than-unity Lewis number flame because of the impact of differential diffusion.
Quadratic inner element subgrid scale discretisation of the Boltzmann transport equation
International Nuclear Information System (INIS)
Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.; Eaton, M.D.; Warner, P.
2012-01-01
This paper explores the application of the inner element subgrid scale method to the Boltzmann transport equation using quadratic basis functions. Previously, only linear basis functions for both the coarse scale and the fine scale were considered. This paper, therefore, analyses the advantages of using different coarse and subgrid basis functions for increasing the accuracy of the subgrid scale method. The transport of neutral particle radiation may be described by the Boltzmann transport equation (BTE) which, due to its 7 dimensional phase space, is computationally expensive to resolve. Multi-scale methods offer an approach to efficiently resolve the spatial dimensions of the BTE by separating the solution into its coarse and fine scales and formulating a solution whereby only the computationally efficient coarse scales need to be solved. In previous work an inner element subgrid scale method was developed that applied a linear continuous and discontinuous finite element method to represent the solution’s coarse and fine scale components. This approach was shown to generate efficient and stable solutions, and so this article continues its development by formulating higher order quadratic finite element expansions over the continuous and discontinuous scales. Here it is shown that a solution’s convergence can be improved significantly using higher order basis functions. Furthermore, by using linear finite elements to represent coarse scales in combination with quadratic fine scales, convergence can also be improved with only a modest increase in computational expense.
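The benefit of higher-order basis functions can be illustrated generically: under mesh refinement, quadratic nodal interpolation converges much faster than linear interpolation. This sketch uses plain Lagrange interpolation of a smooth function, not the inner element subgrid scale discretisation itself:

```python
import numpy as np

def interp_error(n, order):
    """Max interpolation error for f(x) = sin(pi x) on [0, 1] using n
    elements with equispaced nodal Lagrange interpolation of given order.
    Linear elements (order=1) converge as O(h^2), quadratic as O(h^3)."""
    f = lambda x: np.sin(np.pi * x)
    xs = np.linspace(0.0, 1.0, 2001)  # fine sampling for the max norm
    err = 0.0
    edges = np.linspace(0.0, 1.0, n + 1)
    for a, b in zip(edges[:-1], edges[1:]):
        nodes = np.linspace(a, b, order + 1)        # element nodal points
        coeffs = np.polyfit(nodes, f(nodes), order)  # local interpolant
        mask = (xs >= a) & (xs <= b)
        err = max(err, np.max(np.abs(np.polyval(coeffs, xs[mask]) - f(xs[mask]))))
    return err
```

Halving the mesh size cuts the linear-element error by roughly a factor of 4, while the quadratic error starts far smaller and shrinks faster still.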
Directory of Open Access Journals (Sweden)
Liping Chen
2018-05-01
Full Text Available A sub-grid multiple relaxation time (MRT) lattice Boltzmann model in curvilinear coordinates is applied to simulate an artificial meandering river. The method is based on the D2Q9 model, and the standard Smagorinsky sub-grid scale (SGS) model is introduced to simulate meandering flows. The interpolation-supplemented lattice Boltzmann method (ISLBM) and the non-equilibrium extrapolation method are used for second-order accuracy and boundary conditions. The proposed model was validated on a meandering channel with a 180° bend and applied to a steady curved river with piers. Excellent agreement between the simulated results and previous computational and experimental data was found, showing that the MRT lattice Boltzmann method coupled with a Smagorinsky sub-grid scale (SGS) model on a curvilinear grid is capable of simulating practical meandering flows.
Subgrid Modeling of AGN-driven Turbulence in Galaxy Clusters
Scannapieco, Evan; Brüggen, Marcus
2008-10-01
Hot, underdense bubbles powered by active galactic nuclei (AGNs) are likely to play a key role in halting catastrophic cooling in the centers of cool-core galaxy clusters. We present three-dimensional simulations that capture the evolution of such bubbles, using an adaptive mesh hydrodynamic code, FLASH3, to which we have added a subgrid model of turbulence and mixing. While pure hydro simulations indicate that AGN bubbles are disrupted into resolution-dependent pockets of underdense gas, proper modeling of subgrid turbulence indicates that this is a poor approximation to a turbulent cascade that continues far beyond the resolution limit. Instead, Rayleigh-Taylor instabilities act to effectively mix the heated region with its surroundings, while at the same time preserving it as a coherent structure, consistent with observations. Thus, bubbles are transformed into hot clouds of mixed material as they move outward in the hydrostatic intracluster medium (ICM), much as large airbursts lead to a distinctive "mushroom cloud" structure as they rise in the hydrostatic atmosphere of Earth. Properly capturing the evolution of such clouds has important implications for many ICM properties. In particular, it significantly changes the impact of AGNs on the distribution of entropy and metals in cool-core clusters such as Perseus.
Huang, X.; Allen, D. J.; Herwehe, J. A.; Alapaty, K. V.; Loughner, C.; Pickering, K. E.
2014-12-01
Subgrid-scale cloudiness directly influences global and regional atmospheric radiation budgets by attenuating shortwave radiation, leading to suppressed convection, decreased surface precipitation, and changes in other meteorological parameters. We use the latest version of WRF (v3.6, Apr 2014), which incorporates the Kain-Fritsch (KF) convective parameterization to provide subgrid-scale cloud fraction and condensate feedback to the rapid radiative transfer model-global (RRTMG) shortwave and longwave radiation schemes. We apply the KF scheme to simulate the DISCOVER-AQ Maryland field campaign (July 2011), and compare the sensitivity of meteorological parameters to a control run that does not include subgrid cloudiness. Furthermore, we will examine the chemical impact of subgrid cloudiness using a regional chemical transport model (CMAQ). Several meteorological parameters influenced by subgrid cumulus clouds are very important to air quality modeling, including changes in surface temperature that affect biogenic emission rates, changes in PBL depth that affect pollutant concentrations, and changes in surface humidity levels that affect peroxide-related reactions. Additionally, subgrid cumulus clouds directly affect air pollutant concentrations by modulating photochemistry and vertical mixing. Finally, we will compare with DISCOVER-AQ flight observations and evaluate how well this off-line CMAQ simulation, driven by WRF with the KF scheme, simulates the effects of regional convection on atmospheric composition.
Pau, G. S. H.; Bisht, G.; Riley, W. J.
2014-09-01
Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction errors. In this paper we applied a reduced-order modeling (ROM) technique known as the "proper orthogonal decomposition mapping method," which reconstructs temporally resolved fine-resolution solutions from coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface-subsurface isothermal simulations were performed for the summer months (June-September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998-2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (>10^3) with very small relative approximation error when training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (<1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with
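The proper orthogonal decomposition at the heart of such ROMs can be sketched with a thin SVD of a snapshot matrix. This is a generic illustration of POD projection, not the paper's coarse-to-fine mapping method:

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Leading POD modes of an (n_space x n_time) snapshot matrix,
    computed via a thin SVD of the snapshot data."""
    modes, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return modes[:, :n_modes]

def pod_project(field, basis):
    """Reconstruct a field from its projection onto the POD basis:
    fields lying in the span of the modes are recovered exactly."""
    return basis @ (basis.T @ field)
```

If the snapshots have rank r, any field in their span is reconstructed exactly from the first r modes; fields outside the span are approximated in the least-squares sense.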
International Nuclear Information System (INIS)
Chock, D.P.; Winkler, S.L.; Pu Sun
2002-01-01
We have introduced a new and elaborate approach to understanding the impact of grid resolution and subgrid chemistry assumptions on grid-model predictions of species concentrations for a system with highly non-homogeneous chemistry - a reactive buoyant plume immediately downwind of the stack in a convective boundary layer. The Parcel-Grid plume approach was used to describe both the air-parcel turbulent transport and the chemistry. This approach allows an identical transport process for all simulations. It also allows a description of subgrid chemistry. The ambient and plume parcel transport follows the description of Luhar and Britter (Atmos. Environ. 23 (1989) 1911; 26A (1992) 1283). The chemistry follows the Carbon-Bond mechanism. Three different grid sizes were considered: fine, medium and coarse, together with three different subgrid chemistry assumptions: micro-scale or individual parcel, tagged-parcel (plume and ambient parcels treated separately), and untagged-parcel (plume and ambient parcels treated indiscriminately). Reducing the subgrid information is not necessarily similar to increasing the model grid size. In our example, increasing the grid size leads to a reduction in the suppression of ozone in the presence of a high-NOx stack plume, and a reduction in the effectiveness of the NOx-inhibition effect. On the other hand, reducing the subgrid information (by using the untagged-parcel assumption) leads to an increase in ozone reduction and an enhancement of the NOx-inhibition effect insofar as the ozone extremum is concerned. (author)
Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João
2018-03-01
Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterization schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary-layer dynamics to deep convection, in a unified manner. Coherent updrafts and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of the updrafts and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
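The EDMF decomposition underlying such schemes splits a subgrid vertical flux into an eddy-diffusivity part for the environment plus a mass-flux sum over plumes. A minimal sketch of that bookkeeping, with all numerical inputs hypothetical (this omits the prognostic plume dynamics, entrainment, and TKE budget of the extended scheme):

```python
def edmf_flux(K, dphi_dz, plumes):
    """EDMF decomposition of a subgrid vertical flux of a scalar phi:
        <w'phi'> = -K * dphi/dz                       (environment, diffusive)
                   + sum_i a_i * w_i * (phi_i - mean)  (plume mass-flux part)
    `plumes` is a list of (area_fraction, vertical_velocity, phi_excess)
    tuples, one per updraft or downdraft; all values here are illustrative."""
    diffusive = -K * dphi_dz
    mass_flux = sum(a * w * dphi for a, w, dphi in plumes)
    return diffusive + mass_flux
```

With K = 10 m²/s, a mean gradient of -0.01 (units of phi per meter), and one updraft covering 10% of the grid box with w = 1 m/s and a phi excess of 0.5, the total flux is 0.1 + 0.05 = 0.15.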
A moving subgrid model for simulation of reflood heat transfer
International Nuclear Information System (INIS)
Frepoli, Cesare; Mahaffy, John H.; Hochreiter, Lawrence E.
2003-01-01
In the quench front and froth region the thermal-hydraulic parameters experience a sharp axial variation. The heat transfer regime changes from single-phase liquid, to nucleate boiling, to transition boiling and finally to film boiling over a small axial distance. One of the major limitations of all current best-estimate codes is that a relatively coarse mesh is used to solve the complex fluid flow and heat transfer problem in the proximity of the quench front during reflood. The use of a fine axial mesh for the entire core becomes prohibitive because of the large computational costs involved. Moreover, as the mesh size decreases, the standard numerical methods based on a semi-implicit scheme tend to become unstable. A subgrid model was developed to resolve the complex thermal-hydraulic problem in the quench front and froth region. This model is a Fine Hydraulic Moving Grid (FHMG) that overlies a coarse Eulerian mesh in the proximity of the quench front and froth region. The fine mesh moves in the core and follows the quench front as it advances while the rods cool and quench. The FHMG software package was developed and implemented into the COBRA-TF computer code. This paper presents the model and discusses preliminary results obtained with the COBRA-TF/FHMG computer code.
Energy Technology Data Exchange (ETDEWEB)
Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)
2016-11-25
The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). An MMF model does not need a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within an MMF model. This involved interfacing CLUBB's clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation against satellite observations. The chief benefit of the project is to provide an MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.
Energy Technology Data Exchange (ETDEWEB)
Randall, David A. [Colorado State Univ., Fort Collins, CO (United States). Dept. of Atmospheric Science
2015-11-01
We proposed to implement, test, and evaluate recently developed turbulence parameterizations, using a wide variety of methods and modeling frameworks together with observations including ARM data. We have successfully tested three different turbulence parameterizations in versions of the Community Atmosphere Model: CLUBB, SHOC, and IPHOC. All three produce significant improvements in the simulated climate. CLUBB will be used in CAM6, and also in ACME. SHOC is being tested in the NCEP forecast model. In addition, we have achieved a better understanding of the strengths and limitations of the PDF-based parameterizations of turbulence and convection.
A priori study of subgrid-scale features in turbulent Rayleigh-Bénard convection
Dabbagh, F.; Trias, F. X.; Gorobets, A.; Oliva, A.
2017-10-01
At the crossroads between flow topology analysis and turbulence modeling, a priori studies are a reliable tool for understanding the underlying physics of the subgrid-scale (SGS) motions in turbulent flows. In this paper, properties of the SGS features in the framework of a large-eddy simulation are studied for turbulent Rayleigh-Bénard convection (RBC). To do so, data from direct numerical simulation (DNS) of a turbulent air-filled RBC in a rectangular cavity of aspect ratio unity and a spanwise open-ended distance of π are used at two Rayleigh numbers, Ra ∈ {10^8, 10^10} [Dabbagh et al., "On the evolution of flow topology in turbulent Rayleigh-Bénard convection," Phys. Fluids 28, 115105 (2016)]. First, DNS at Ra = 10^8 is used to assess the performance of eddy-viscosity models such as QR, Wall-Adapting Local Eddy-viscosity (WALE), and the recent S3PQR models proposed by Trias et al. ["Building proper invariants for eddy-viscosity subgrid-scale models," Phys. Fluids 27, 065103 (2015)]. The outcomes imply that eddy-viscosity modeling smoothes the coarse-grained viscous straining and retrieves fairly well the effect of the kinetic unfiltered scales in order to reproduce the coherent large scales. However, these models fail to approach the exact evolution of the SGS heat flux and are incapable of reproducing the further dominant rotational enstrophy pertaining to the buoyant production. Afterwards, the key ingredients of eddy viscosity, νt, and eddy diffusivity, κt, are calculated a priori and reveal prevalent positive values that maintain a turbulent wind essentially driven by the mean buoyant force at the sidewalls. The topological analysis suggests that the effective turbulent diffusion paradigm and the hypothesis of a constant turbulent Prandtl number are only applicable in the large-scale strain-dominated areas in the bulk. It is shown that the bulk-dominated rotational structures of vortex-stretching (and its synchronous viscous dissipative structures) hold
Sub-Grid Modeling of Electrokinetic Effects in Micro Flows
Chen, C. P.
2005-01-01
Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (Lab-on-Chip devices). To successfully design and operate a microfluidic system, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nano dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomalies (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows poses a significant numerical challenge because the double layer (10 nm up to microns) is very thin compared to the channel width (which can be up to hundreds of microns). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-d geometries. Existing approaches using a "slip" wall velocity or an augmented double layer are difficult to use when the flow geometry is complicated, e.g. flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can be accounted for at the same time. With this
Enhancing the representation of subgrid land surface characteristics in land surface models
Directory of Open Access Journals (Sweden)
Y. Ke
2013-09-01
Full Text Available Land surface heterogeneity has long been recognized as important to represent in land surface models. In most existing land surface models, the spatial variability of surface cover is represented as a subgrid composition of multiple surface cover types, although subgrid topography also exerts major controls on surface processes. In this study, we developed a new subgrid classification method (SGC) that accounts for the variability of both topography and vegetation cover. Each model grid cell was represented with a variable number of elevation classes, and each elevation class was further described by a variable number of vegetation types, optimized for each model grid given a predetermined total number of land response units (LRUs). The subgrid structure of the Community Land Model (CLM) was used to illustrate the newly developed method in this study. Although the new method increases the computational burden of model simulation compared to the CLM subgrid vegetation representation, it greatly reduced the variation of elevation within each subgrid class and is able to explain at least 80% of the total subgrid plant functional types (PFTs). The new method was also evaluated against two other subgrid methods (SGC1 and SGC2) that assigned fixed numbers of elevation and vegetation classes for each model grid (SGC1: M elevation bands–N PFTs method; SGC2: N PFTs–M elevation bands method). Implemented at five model resolutions (0.1°, 0.25°, 0.5°, 1.0° and 2.0°) with three maximum-allowed total numbers of LRUs (i.e., N_LRU of 24, 18 and 12) over North America (NA), the new method yielded a more computationally efficient subgrid representation than SGC1 and SGC2, particularly at coarser model resolutions and moderate computational intensity (N_LRU = 18). It also explained the most PFT and elevation variability, and that variability is more homogeneously distributed spatially. The SGC method will be implemented in CLM over the NA continent to assess its impacts on
On the TFNS Subgrid Models for Liquid-Fueled Turbulent Combustion
Liu, Nan-Suey; Wey, Thomas
2014-01-01
This paper describes the time-filtered Navier-Stokes (TFNS) approach, which is capable of capturing unsteady flow structures important for turbulent mixing in the combustion chamber, and two different subgrid models used to emulate the major processes occurring in the turbulence-chemistry interaction. These two subgrid models are termed the LEM-like model and the EUPDF-like (Eulerian probability density function) model, respectively. Two-phase turbulent combustion in a single-element lean-direct-injection (LDI) combustor is calculated by employing the TFNS/LEM-like approach as well as the TFNS/EUPDF-like approach. Results obtained from the TFNS approach employing these two different subgrid models are compared with each other and with the experimental data, followed by a more detailed comparison between the results of an updated calculation using the TFNS/LEM-like model and the experimental data.
Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca
2018-06-01
We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies of the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties at any fixed LES resolution.
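The velocity structure functions at the core of the hierarchy above can be illustrated with a minimal a priori estimate on a synthetic 1-D signal. The Brownian-like signal and all names here are assumptions for demonstration, not the paper's LES or DNS data; for uncorrelated increments the second-order exponent is 1.

```python
import numpy as np

def structure_function(u, p, r):
    """p-th order structure function S_p(r) = <|u(x+r) - u(x)|^p>
    of a 1-D signal, evaluated a priori from the data."""
    du = u[r:] - u[:-r]
    return float(np.mean(np.abs(du) ** p))

# Brownian-like synthetic signal: increments are uncorrelated, so S_2(r) ~ r
rng = np.random.default_rng(1)
u = np.cumsum(rng.standard_normal(2 ** 16))
r_vals = np.array([2, 4, 8, 16, 32])
s2 = [structure_function(u, 2, r) for r in r_vals]
slope = np.polyfit(np.log(r_vals), np.log(s2), 1)[0]
print(round(float(slope), 2))
```

Intermittency in real turbulence shows up as scaling exponents that deviate from such simple self-similar behavior, which is what the multifractal predictions quantify.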
Subin, Z M; Milly, Paul C.D.; Sulman, B N; Malyshev, Sergey; Shevliakova, E
2014-01-01
Soil moisture is a crucial control on surface water and energy fluxes, vegetation, and soil carbon cycling. Earth-system models (ESMs) generally represent an areal-average soil-moisture state in gridcells at scales of 50–200 km and as a result are not able to capture the nonlinear effects of topographically-controlled subgrid heterogeneity in soil moisture, in particular where wetlands are present. We addressed this deficiency by building a subgrid representation of hillslope-scale topographic gradients, TiHy (Tiled-hillslope Hydrology), into the Geophysical Fluid Dynamics Laboratory (GFDL) land model (LM3). LM3-TiHy models one or more representative hillslope geometries for each gridcell by discretizing them into land model tiles hydrologically coupled along an upland-to-lowland gradient. Each tile has its own surface fluxes, vegetation, and vertically-resolved state variables for soil physics and biogeochemistry. LM3-TiHy simulates a gradient in soil moisture and water-table depth between uplands and lowlands in each gridcell. Three hillslope hydrological regimes appear in non-permafrost regions in the model: wet and poorly-drained, wet and well-drained, and dry; with large, small, and zero wetland area predicted, respectively. Compared to the untiled LM3 in stand-alone experiments, LM3-TiHy simulates similar surface energy and water fluxes in the gridcell-mean. However, in marginally wet regions around the globe, LM3-TiHy simulates shallow groundwater in lowlands, leading to higher evapotranspiration, lower surface temperature, and higher leaf area compared to uplands in the same gridcells. Moreover, more than four-fold larger soil carbon concentrations are simulated globally in lowlands as compared with uplands. We compared water-table depths to those simulated by a recent global model-observational synthesis, and we compared wetland and inundated areas diagnosed from the model to observational datasets. The comparisons demonstrate that LM3-TiHy has the
Evans, John; Coley, Christopher; Aronson, Ryan; Nelson, Corey
2017-11-01
In this talk, a large eddy simulation methodology for turbulent incompressible flow will be presented which combines the best features of divergence-conforming discretizations and the residual-based variational multiscale approach to large eddy simulation. In this method, the resolved motion is represented using a divergence-conforming discretization, that is, a discretization that preserves the incompressibility constraint in a pointwise manner, and the unresolved fluid motion is explicitly modeled by subgrid vortices that lie within individual grid cells. The evolution of the subgrid vortices is governed by dynamical model equations driven by the residual of the resolved motion. Consequently, the subgrid vortices appropriately vanish for laminar flow and fully resolved turbulent flow. As the resolved velocity field and subgrid vortices are both divergence-free, the methodology conserves mass in a pointwise sense and admits discrete balance laws for energy, enstrophy, and helicity. Numerical results demonstrate that the methodology yields improved results versus state-of-the-art eddy viscosity models in the context of transitional, wall-bounded, and rotational flows when a divergence-conforming B-spline discretization is utilized to represent the resolved motion.
QUANTIFYING SUBGRID POLLUTANT VARIABILITY IN EULERIAN AIR QUALITY MODELS
In order to properly assess human risk due to exposure to hazardous air pollutants or air toxics, detailed information is needed on the location and magnitude of ambient air toxic concentrations. Regional scale Eulerian air quality models are typically limited to relatively coar...
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
Kumar, R.; Samaniego, L. E.; Livneh, B.
2013-12-01
Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and to provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically, large-scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km2) at multiple spatio-temporal resolutions. A set of numerical experiments was conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States, or STATSGO2 (1:250 000), and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from both simulations showed marked
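A pedo-transfer function of the kind used to parameterize mHM maps texture fractions to hydraulic properties through simple regressions. The sketch below uses Cosby-style regression forms with illustrative coefficients; mHM's actual regionalized coefficients and functional forms differ.

```python
def pedotransfer(sand_pct, clay_pct):
    """Cosby-style regressions from soil texture to hydraulic properties.
    The coefficients are illustrative stand-ins, not mHM's calibrated values."""
    porosity = 0.489 - 0.00126 * sand_pct                    # saturated water content [-]
    b = 2.91 + 0.159 * clay_pct                              # retention-curve exponent [-]
    ksat = 0.0070556 * 10.0 ** (-0.884 + 0.0153 * sand_pct)  # sat. conductivity [mm/s]
    return porosity, b, ksat

phi, b, ks = pedotransfer(sand_pct=40.0, clay_pct=20.0)
print(round(phi, 3), round(b, 2), round(ks, 4))
```

Because these regressions are nonlinear in texture, applying them to a coarse-map average texture gives different properties than averaging the properties of the sub-grid textures, which is one mechanism behind the sub-grid sensitivity studied here.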
Baker, I. T.; Prihodko, L.; Vivoni, E. R.; Denning, A. S.
2017-12-01
Arid and semiarid regions represent a large fraction of global land, with attendant importance of surface energy and trace gas flux to global totals. These regions are characterized by strong seasonality, especially in precipitation, that defines the level of ecosystem stress. Individual plants have been observed to respond non-linearly to increasing soil moisture stress, where plant function is generally maintained as soils dry down to a threshold at which rapid closure of stomates occurs. Incorporating this nonlinear mechanism into landscape-scale models can result in unrealistic binary "on-off" behavior that is especially problematic in arid landscapes. Subsequently, models have `relaxed' their simulation of soil moisture stress on evapotranspiration (ET). Unfortunately, these relaxations are not physically based, but are imposed upon model physics as a means to force a more realistic response. Previously, we have introduced a new method to represent soil moisture regulation of ET, whereby the landscape is partitioned into `BINS' of soil moisture wetness, each associated with a fractional area of the landscape or grid cell. A physically- and observationally-based nonlinear soil moisture stress function is applied, but when convolved with the relative area distribution represented by wetness BINS the system has the emergent property of `smoothing' the landscape-scale response without the need for non-physical impositions on model physics. In this research we confront BINS simulations of Bowen ratio, soil moisture variability and trace gas flux with soil moisture and eddy covariance observations taken at the Jornada LTER dryland site in southern New Mexico. We calculate the mean annual wetting cycle and associated variability about the mean state and evaluate model performance against this variability and time series of land surface fluxes from the highly instrumented Tromble Weir watershed. The BINS simulations capture the relatively rapid reaction to wetting
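The BINS idea described above can be sketched in a few lines: a sharply nonlinear point-scale stress function is area-weighted over wetness bins, and the landscape-scale response emerges smooth without modifying the underlying physics. The logistic stress form and all parameter values below are assumptions for illustration.

```python
import numpy as np

def plant_stress(w, w_crit=0.3, steep=30.0):
    """Point-scale soil moisture stress: near 1 when wet, shutting off
    sharply below a threshold (the logistic form is an assumption)."""
    return 1.0 / (1.0 + np.exp(-steep * (np.asarray(w, float) - w_crit)))

def landscape_stress(bin_wetness, bin_area):
    """BINS-style emergent stress: area-weighted average over wetness bins."""
    return float(np.sum(plant_stress(bin_wetness) * bin_area))

bins = np.linspace(0.05, 0.95, 10)   # wetness value of each bin
area = np.full(10, 0.1)              # fractional area of each bin
print(round(landscape_stress(bins, area), 2))
```

Even though each bin is nearly "on" or "off", the area-weighted sum responds gradually as the wetness distribution shifts, avoiding the binary behavior of a single-point stress function.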
Energy Technology Data Exchange (ETDEWEB)
Vlaykov, Dimitar G., E-mail: Dimitar.Vlaykov@ds.mpg.de [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Dynamik und Selbstorganisation, Am Faßberg 17, D-37077 Göttingen (Germany); Grete, Philipp [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Schmidt, Wolfram [Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, D-21029 Hamburg (Germany); Schleicher, Dominik R. G. [Departamento de Astronomía, Facultad Ciencias Físicas y Matemáticas, Universidad de Concepción, Av. Esteban Iturra s/n Barrio Universitario, Casilla 160-C (Chile)
2016-06-15
Compressible magnetohydrodynamic (MHD) turbulence is ubiquitous in astrophysical phenomena ranging from the intergalactic to the stellar scales. In studying them, numerical simulations are nearly inescapable, due to the large degree of nonlinearity involved. However, the dynamical ranges of these phenomena are much larger than what is computationally accessible. In large eddy simulations (LESs), the resulting limited resolution effects are addressed explicitly by introducing to the equations of motion additional terms associated with the unresolved, subgrid-scale dynamics. This renders the system unclosed. We derive a set of nonlinear structural closures for the ideal MHD LES equations with particular emphasis on the effects of compressibility. The closures are based on a gradient expansion of the finite-resolution operator [W. K. Yeo (CUP, 1993)] and require no assumptions about the nature of the flow or magnetic field. Thus, the scope of their applicability ranges from the sub- to the hyper-sonic and -Alfvénic regimes. The closures support spectral energy cascades both up and down-scale, as well as direct transfer between kinetic and magnetic resolved and unresolved energy budgets. They implicitly take into account the local geometry, and in particular, the anisotropy of the flow. Their properties are a priori validated in Paper II [P. Grete et al., Phys. Plasmas 23, 062317 (2016)] against alternative closures available in the literature with respect to a wide range of simulation data of homogeneous and isotropic turbulence.
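The leading hydrodynamic term of such a gradient-expansion closure is the classical nonlinear (Clark-type) model, tau_ij ≈ (Δ²/12) ∂_k ū_i ∂_k ū_j; a minimal 2-D sketch is below. The MHD and compressibility extensions derived in the paper add magnetic and density-weighted contributions that are not reproduced here.

```python
import numpy as np

def gradient_model_tau(ubar, vbar, dx, delta):
    """Leading-order gradient (Clark-type) SGS stress on a 2-D periodic grid:
    tau_ij ~ (delta**2 / 12) * d_k(ubar_i) * d_k(ubar_j).
    Hydrodynamic term only; the paper's MHD closures are richer."""
    dudx, dudy = np.gradient(ubar, dx, edge_order=2)
    dvdx, dvdy = np.gradient(vbar, dx, edge_order=2)
    c = delta ** 2 / 12.0
    tau_xx = c * (dudx * dudx + dudy * dudy)
    tau_xy = c * (dudx * dvdx + dudy * dvdy)
    tau_yy = c * (dvdx * dvdx + dvdy * dvdy)
    return tau_xx, tau_xy, tau_yy

n = 64
dx = 2.0 * np.pi / n
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
txx, txy, tyy = gradient_model_tau(np.sin(X), np.cos(Y), dx, delta=2.0 * dx)
print(txx.shape, bool(txx.min() >= 0.0))
```

Note that the diagonal components are non-negative by construction (sums of squares), while the off-diagonal component carries the local flow geometry, consistent with the closures' built-in anisotropy.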
The Storm Surge and Sub-Grid Inundation Modeling in New York City during Hurricane Sandy
Directory of Open Access Journals (Sweden)
Harry V. Wang
2014-03-01
Full Text Available Hurricane Sandy inflicted heavy damage in New York City and on the New Jersey coast as the second costliest storm in history. A large-scale, unstructured grid storm tide model, the Semi-implicit Eulerian Lagrangian Finite Element (SELFE) model, was used to hindcast water level variation during Hurricane Sandy in the mid-Atlantic portion of the U.S. East Coast. The model was forced by eight tidal constituents at the model’s open boundary, 1500 km away from the coast, and by the wind and pressure fields from the atmospheric model Regional Atmospheric Modeling System (RAMS) provided by Weatherflow Inc. Comparisons of the modeled storm tide with the NOAA gauge stations from Montauk, NY, Long Island Sound, encompassing New York Harbor, Atlantic City, NJ, to Duck, NC, showed good agreement, with an overall root mean square error and relative error on the order of 15–20 cm and 5%–7%, respectively. Furthermore, using the large-scale model outputs as boundary conditions, a separate sub-grid model that incorporates LIDAR data for the major portion of New York City was also set up to investigate the detailed inundation process. The model results compared favorably with USGS’ Hurricane Sandy Mapper database in terms of timing, local inundation area, and flood-water depth. Street-level inundation, with water bypassing the city buildings, was simulated, and the maximum extent of horizontal inundation was calculated, which was within 30 m of the data-derived estimate by USGS.
Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory
2008-11-01
A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.
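The a priori evaluation of subgrid stresses enabled by box-filtering can be illustrated in one dimension: filter the product and the field separately, and their difference is the exact SGS stress. The top-hat filter width and the synthetic signal below are illustrative, not the database's actual filtering tools.

```python
import numpy as np

def box_filter(f, w):
    """Top-hat (box) filter of width w points on a periodic 1-D field,
    applied via circular convolution in Fourier space."""
    kernel = np.ones(w) / w
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(kernel, f.size)))

def sgs_stress(u, w):
    """Exact a priori subgrid stress: tau = bar(u*u) - bar(u)*bar(u)."""
    return box_filter(u * u, w) - box_filter(u, w) ** 2

rng = np.random.default_rng(2)
u = np.cumsum(rng.standard_normal(4096))   # synthetic rough velocity signal
u -= u.mean()
tau = sgs_stress(u, w=16)
# tau is non-negative pointwise because the box filter is a convex average
print(bool(tau.min() > -1e-8))
```

Comparing this exact stress against a model evaluated from the filtered field alone is the standard a priori test that the database's box-filtering functions support in three dimensions.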
Advanced subgrid modeling for Multiphase CFD in CASL VERA tools
International Nuclear Information System (INIS)
Baglietto, Emilio; Gilman, Lindsey; Sugrue, Rosie
2014-01-01
This work introduces advanced modeling capabilities that are being developed to improve the accuracy and extend the applicability of Multiphase CFD. Specifics of the advanced and hardened boiling closure model are described in this work. The development has been driven by new physical understanding, derived from the innovative experimental techniques available at MIT. A new experimentally based mechanistic approach to heat partitioning is proposed. The model introduces a new description of bubble evaporation, sliding and interaction on the heated surface to accurately capture the evaporation occurring at the heated surface, while also tracking the local surface conditions. The model is being assembled to cover an extended application area, up to Critical Heat Flux (CHF). The accurate description of bubble interaction, the effective microlayer and the dry surface area are considered to be the enabling quantities for innovative CHF-capturing methodologies. Further, improved mechanistic force-balance models for bubble departure and lift-off diameter predictions are implemented in the model. Studies demonstrate the influence of the newly implemented partitioning components. Finally, the development work towards a more consistent and integrated hydrodynamic closure is presented. The main objective here is to develop a set of robust momentum closure relations focused on the specific application to PWR conditions, but facilitating the application to other geometries, void fractions, and flow regimes. The innovative approach considers local flow conditions on a cell-by-cell basis to ensure robustness. Closure relations of interest initially include drag, lift, and turbulence dispersion, with near-wall corrections applied for both drag and lift. (author)
Directory of Open Access Journals (Sweden)
Anning Cheng
2010-02-01
Full Text Available Seven boundary-layer cloud cases are simulated with the UCLA-LES (University of California, Los Angeles large eddy simulation) model with different horizontal and vertical gridspacings to investigate how the results depend on gridspacing. Some variables are more sensitive to horizontal gridspacing, others are more sensitive to vertical gridspacing, and still others are sensitive to both, with similar or opposite trends. For cloud-related variables having opposite dependences on horizontal and vertical gridspacings, changing the gridspacing proportionally in both directions gives the appearance of convergence. In this study, we mainly discuss the impact of subgrid-scale (SGS) kinetic energy (KE) on the simulations as the horizontal and vertical gridspacings are coarsened. A running-mean operator is used to separate the KE of the high-resolution benchmark simulations into that of the resolved scales of coarse-resolution simulations and that of the SGSs. The diagnosed SGS KE is compared with that parameterized by the Smagorinsky-Lilly SGS scheme at various gridspacings. It is found that the parameterized SGS KE for the coarse-resolution simulations is usually underestimated while the resolved KE is unrealistically large, compared to the benchmark simulations. However, the sum of resolved and SGS KE is about the same for simulations with various gridspacings. The partitioning of SGS and resolved heat and moisture transports is consistent with that of SGS and resolved KE, which means that the parameterized transports are underestimated while the resolved-scale transports are overestimated. On the whole, energy shifts to large scales as the horizontal gridspacing becomes coarse; hence the size of clouds and the resolved circulation increase, and the clouds become more stratiform-like, with an increase in cloud fraction, cloud liquid-water path and surface precipitation; when coarse vertical gridspacing is used, cloud sizes do not
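The running-mean separation of resolved and SGS kinetic energy can be demonstrated in one dimension. For a periodic, mean-preserving filter, the resolved KE and the diagnosed SGS KE sum exactly to the total, mirroring the finding above that the sum is insensitive to gridspacing. The filter width and white-noise signal are assumptions for illustration.

```python
import numpy as np

def running_mean(f, w):
    """Periodic running mean of width w: the scale-separation operator."""
    k = np.ones(w) / w
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(k, f.size)))

rng = np.random.default_rng(3)
u = rng.standard_normal(8192)            # synthetic fine-scale signal
u -= u.mean()
w = 32                                   # hypothetical fine-to-coarse scale ratio
ubar = running_mean(u, w)
ke_total = 0.5 * float(np.mean(u * u))
ke_resolved = 0.5 * float(np.mean(ubar * ubar))
ke_sgs = 0.5 * float(np.mean(running_mean(u * u, w) - ubar * ubar))
# a mean-preserving filter splits the domain-mean KE exactly
print(abs(ke_resolved + ke_sgs - ke_total) < 1e-9)
```

The split itself (how much KE lands in the resolved versus SGS part) shifts with the filter width, which is the gridspacing sensitivity the abstract diagnoses.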
Yue, Chao; Ciais, Philippe; Li, Wei
2018-02-01
Several modelling studies reported elevated carbon emissions from historical land use change (ELUC) by including bidirectional transitions on the sub-grid scale (termed gross land use change), dominated by shifting cultivation and other land turnover processes. However, most dynamic global vegetation models (DGVMs) that have implemented gross land use change either do not account for sub-grid secondary lands, or often have only one single secondary land tile over a model grid cell and thus cannot account for various rotation lengths in shifting cultivation and associated secondary forest age dynamics. Therefore, it remains uncertain how realistic the past ELUC estimations are and how estimated ELUC will differ between the two modelling approaches with and without multiple sub-grid secondary land cohorts - in particular secondary forest cohorts. Here we investigated historical ELUC over 1501-2005 by including sub-grid forest age dynamics in a DGVM. We run two simulations, one with no secondary forests (Sageless) and the other with sub-grid secondary forests of six age classes whose demography is driven by historical land use change (Sage). Estimated global ELUC for 1501-2005 is 176 Pg C in Sage compared to 197 Pg C in Sageless. The lower ELUC values in Sage arise mainly from shifting cultivation in the tropics under an assumed constant rotation length of 15 years, being 27 Pg C in Sage in contrast to 46 Pg C in Sageless. Estimated cumulative ELUC values from wood harvest in the Sage simulation (31 Pg C) are however slightly higher than Sageless (27 Pg C) when the model is forced by reconstructed harvested areas because secondary forests targeted in Sage for harvest priority are insufficient to meet the prescribed harvest area, leading to wood harvest being dominated by old primary forests. An alternative approach to quantify wood harvest ELUC, i.e. always harvesting the close-to-mature forests in both Sageless and Sage, yields similar values of 33 Pg C by both
Directory of Open Access Journals (Sweden)
J. R. Melton
2014-02-01
Full Text Available Terrestrial ecosystem models commonly represent vegetation in terms of plant functional types (PFTs) and use their vegetation attributes in calculations of the energy and water balance as well as to investigate the terrestrial carbon cycle. Sub-grid scale variability of PFTs in these models is represented using different approaches, with the "composite" and "mosaic" approaches being the two end-members. The impact of these two approaches on the global carbon balance has been investigated with the Canadian Terrestrial Ecosystem Model (CTEM v. 1.2) coupled to the Canadian Land Surface Scheme (CLASS v. 3.6). In the composite (single-tile) approach, the vegetation attributes of the different PFTs present in a grid cell are aggregated and used in calculations to determine the resulting physical environmental conditions (soil moisture, soil temperature, etc.) that are common to all PFTs. In the mosaic (multi-tile) approach, energy and water balance calculations are performed separately for each PFT tile, and each tile's physical land surface environmental conditions evolve independently. Pre-industrial equilibrium CLASS-CTEM simulations yield global totals of vegetation biomass, net primary productivity, and soil carbon that compare reasonably well with observation-based estimates and differ by less than 5% between the mosaic and composite configurations. However, on a regional scale the two approaches can differ by > 30%, especially in areas with high heterogeneity in land cover. Simulations over the historical period (1959–2005) show different responses to evolving climate and carbon dioxide concentrations from the two approaches. The cumulative global terrestrial carbon sink estimated over the 1959–2005 period (excluding land use change (LUC) effects) differs by around 5% between the two approaches (96.3 and 101.3 Pg C for the mosaic and composite approaches, respectively) and compares well with the observation-based estimate of 82.2 ± 35 Pg C over the same
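The composite-versus-mosaic distinction hinges on the order of averaging around a nonlinear flux law. The toy flux below (quadratic in an evaporation efficiency, with hypothetical tile attributes) is not CLASS-CTEM's physics, but it shows why the two approaches diverge where land cover is heterogeneous.

```python
# hypothetical PFT tiles: (area fraction, albedo, evaporation efficiency beta)
tiles = [(0.6, 0.15, 0.9), (0.4, 0.25, 0.4)]

def mosaic_flux(tiles, forcing):
    """Mosaic/multi-tile: evaluate the nonlinear flux per tile, then area-average."""
    return sum(a * forcing * (1.0 - alb) * beta ** 2 for a, alb, beta in tiles)

def composite_flux(tiles, forcing):
    """Composite/single-tile: area-average the attributes first, then one flux call."""
    alb = sum(a * alb_t for a, alb_t, _ in tiles)
    beta = sum(a * beta_t for a, _, beta_t in tiles)
    return forcing * (1.0 - alb) * beta ** 2

f_mos = mosaic_flux(tiles, forcing=100.0)
f_com = composite_flux(tiles, forcing=100.0)
print(round(f_mos, 2), round(f_com, 2))   # the two orderings disagree
```

With identical tiles the two results coincide; the gap grows with land cover heterogeneity, consistent with the regional differences of > 30% reported above.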
Analysis of subgrid scale mixing using a hybrid LES-Monte-Carlo PDF method
International Nuclear Information System (INIS)
Olbricht, C.; Hahn, F.; Sadiki, A.; Janicka, J.
2007-01-01
This contribution introduces a hybrid LES-Monte-Carlo method for a coupled solution of the flow and the multi-dimensional scalar joint PDF in two complex mixing devices. For this purpose an Eulerian Monte-Carlo method is used. First, a complex mixing device (jet in crossflow, JIC) is presented, for which the stochastic convergence and the coherency between the scalar field obtained via finite-volume methods and that from the stochastic solution of the PDF are evaluated for the hybrid method. Results are compared to experimental data. Secondly, an extensive investigation of micromixing on the basis of assumed-shape and transported SGS-PDFs is carried out in a configuration of practical relevance, consisting of a mixing chamber with two opposite rows of jets penetrating a crossflow (multi-jet in crossflow, MJIC). Some numerical results are compared to available experimental data and to RANS-based results. It turns out that the hybrid LES-Monte-Carlo method is able to provide a detailed analysis of the mixing at the subgrid level
Energy Technology Data Exchange (ETDEWEB)
Hillman, Benjamin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marchand, Roger T. [Univ. of Washington, Seattle, WA (United States); Ackerman, Thomas P. [Univ. of Washington, Seattle, WA (United States)
2017-08-01
Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
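The maximum-random overlap assumption mentioned above combines vertically adjacent cloudy layers maximally and layers separated by clear air randomly. A sketch of the standard recursion, with illustrative layer fractions:

```python
import numpy as np

def total_cloud_cover(c, overlap="max-random"):
    """Total cloud fraction seen from above for layer cloud fractions c,
    using the standard maximum-random overlap recursion (adjacent cloudy
    layers overlap maximally; layers separated by clear air overlap randomly)."""
    c = np.clip(np.asarray(c, float), 0.0, 1.0 - 1e-9)
    if overlap == "random":
        return float(1.0 - np.prod(1.0 - c))
    if overlap == "maximum":
        return float(c.max())
    clear = 1.0 - c[0]
    for k in range(1, c.size):
        clear *= (1.0 - max(c[k - 1], c[k])) / (1.0 - c[k - 1])
    return float(1.0 - clear)

layers = [0.2, 0.3, 0.0, 0.4]   # illustrative layer cloud fractions, top-down
cc = {m: total_cloud_cover(layers, m)
      for m in ("maximum", "max-random", "random")}
print({m: round(v, 3) for m, v in cc.items()})
```

Total cover is always bracketed by the maximum (most overlap, least cover) and random (least overlap, most cover) limits, which is one reason simulated cloud cover is sensitive to the overlap assumption.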
Firl, G. J.; Randall, D. A.
2013-12-01
The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been
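The assumed-PDF step above can be illustrated for a single variable: given two Gaussian plumes with weights, the SGS cloud fraction is the weighted probability of exceeding saturation. The plume parameters below are hypothetical, and THOR's actual PDF is trivariate with parameterized plume variances; this is only the univariate idea.

```python
import math

def cloud_fraction(means, sigmas, weights):
    """SGS cloud fraction from an assumed double-Gaussian PDF of the
    saturation excess s: the weighted probability that s > 0."""
    frac = 0.0
    for mu, sig, w in zip(means, sigmas, weights):
        frac += w * 0.5 * math.erfc(-mu / (sig * math.sqrt(2.0)))
    return frac

# hypothetical plumes: a dominant, subsaturated "environment" plume and a
# minor, broader "updraft" plume near saturation (mirroring THOR's asymmetry)
cf = cloud_fraction(means=(-1.0, 0.2), sigmas=(0.5, 0.8), weights=(0.8, 0.2))
print(round(cf, 3))
```

Even though the grid-mean state is subsaturated here, the broad updraft plume contributes most of the diagnosed cloud, which is exactly the partial cloudiness a single-point saturation check would miss.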
Decker, Jeremy D.; Hughes, J.D.
2013-01-01
Climate change and sea-level rise could cause substantial changes in urban runoff and flooding in low-lying coastal landscapes. A major challenge for local government officials and decision makers is to translate the potential global effects of climate change into actionable and cost-effective adaptation and mitigation strategies at county and municipal scales. A MODFLOW process is used to represent sub-grid scale hydrology in urban settings to help address these issues. Coupled interception, surface water, depression, and unsaturated zone storage are represented. A two-dimensional diffusive wave approximation is used to represent overland flow. Three different options for representing infiltration and recharge are presented. Additional features include structure, barrier, and culvert flow between adjacent cells; specified stage boundaries; critical flow boundaries; source/sink surface-water terms; and bi-directional runoff to the MODFLOW Surface-Water Routing process. Some abilities of the Urban RunOff (URO) process are demonstrated with a synthetic problem using four land uses and varying cell coverages. Precipitation from a hypothetical storm was applied, and cell-by-cell surface-water depth, groundwater level, infiltration rate, and groundwater recharge rate are shown. Results indicate the URO process can produce time-varying, water-content-dependent infiltration and leakage, and successfully interacts with MODFLOW.
Energy Technology Data Exchange (ETDEWEB)
Buschman, Francis X., E-mail: Francis.Buschman@unnpp.gov; Aumiller, David L.
2017-02-15
Highlights:
• Direct contact condensation data on liquid jets up to 1.7 MPa in pure steam and in the presence of noncondensable gas.
• Identified a pressure effect on the impact of noncondensables to suppress condensation heat transfer not captured in existing data or correlations.
• Pure steam data is used to develop a new correlation for condensation heat transfer on subcooled liquid jets.
• Noncondensable data used to develop a modification to the renewal time estimate used in the Young and Bajorek correlation for condensation suppression in the presence of noncondensables.
• A jet injection boundary condition, using a sub-grid jet condensation model, is developed for COBRA-IE which provides a more detailed estimate of the condensation rate on the liquid jet and allows the use of jet specific closure relationships.
Abstract: Condensation on liquid jets is an important phenomenon for many different facets of nuclear power plant transients and analyses such as containment spray cooling. An experimental facility constructed at the Pennsylvania State University, the High Pressure Liquid Jet Condensation Heat Transfer facility (HPLJCHT), has been used to perform steady-state condensation heat transfer experiments in which the temperature of the liquid jet is measured at different axial locations allowing the condensation rate to be determined over the jet length. Test data have been obtained in a pure steam environment and with varying concentrations of noncondensable gas. This data extends the available jet condensation data from near atmospheric pressure up to a pressure of 1.7 MPa. An empirical correlation for the liquid side condensation heat transfer coefficient has been developed based on the data obtained in pure steam. The data obtained with noncondensable gas were used to develop a correlation for the renewal time as used in the condensation suppression model developed by Young and Bajorek. This paper describes a new sub-grid liquid jet
From Detailed Description of Chemical Reacting Carbon Particles to Subgrid Models for CFD
Directory of Open Access Journals (Sweden)
Schulze S.
2013-04-01
This work is devoted to the development and validation of a sub-model for the partial oxidation of a spherical char particle moving in an air/steam atmosphere. The particle diameter is 2 mm. The coal particle is represented by moisture- and ash-free nonporous carbon, while the coal rank is implemented using semi-global reaction rate expressions taken from the literature. The sub-model includes six gaseous chemical species (O2, CO2, CO, H2O, H2, N2). Three heterogeneous reactions are employed, along with two homogeneous semi-global reactions, namely carbon monoxide oxidation and the water-gas shift reaction. The distinguishing feature of the subgrid model is that it takes into account the influence of homogeneous reactions on integral characteristics such as carbon combustion rates and particle temperature. The sub-model was validated by comparing its results with a comprehensive CFD-based model resolving the bulk flow and the boundary layer around the particle. In this model, the Navier-Stokes equations coupled with the energy and species conservation equations were solved by means of a pseudo-steady-state approach. At the surface of the particle, the balance of mass, energy and species concentration was applied, including the effect of the Stefan flow and heat loss due to radiation at the surface of the particle. Good agreement was achieved between the sub-model and the CFD-based model. Additionally, the CFD-based model was verified against experimental data published in the literature (Makino et al. (2003) Combust. Flame 132, 743-753). Good agreement was achieved between numerically predicted and experimentally obtained data for input conditions corresponding to the kinetically controlled regime. The maximal discrepancy (10%) between the experiments and the numerical results was observed in the diffusion-controlled regime. Finally, we discuss the influence of the Reynolds number, the ambient O2 mass fraction and the ambient
Final Report: Systematic Development of a Subgrid Scaling Framework to Improve Land Simulation
Energy Technology Data Exchange (ETDEWEB)
Dickinson, Robert Earl [Univ. of Texas, Austin, TX (United States)
2016-07-11
We carried out research to develop improvements to the land component of climate models and to understand the role of land in climate variability and change. A highlight was the development of a 3D canopy radiation model. More than a dozen publications resulted.
A practical approach to compute short-wave irradiance interacting with subgrid-scale buildings
Energy Technology Data Exchange (ETDEWEB)
Sievers, Uwe; Frueh, Barbara [Deutscher Wetterdienst, Offenbach am Main (Germany)
2012-08-15
A numerical approach is presented for calculating short-wave irradiances at the ground as well as at the walls and roofs of buildings in an environment with unresolved built-up areas. In this radiative parameterization scheme the properties of the unresolved buildings are assigned to settlement types, which are characterized by mean values of the volume density of the buildings and their wall area density; it is therefore named the wall area approach. In the vertical direction the range of building heights may be subdivided into several layers. In the case of non-uniform building heights the shadowing of lower roofs by taller buildings is taken into account. The method includes the approximate calculation of sky-view and sun-view factors. For an idealized building arrangement it is shown that the approximate factors agree well with exact calculations, as do the calculated and measured effective albedo values. For arrangements of isolated single buildings the presented wall area approach yields better agreement with observations than similar methods in which the unresolved built-up area is characterized by the aspect ratio of a representative street canyon (aspect ratio approach). In the limiting case where the built-up area is well represented by an ensemble of idealized street canyons, the two approaches become equivalent. The presented short-wave radiation scheme is part of the microscale atmospheric model MUKLIMO 3, where it contributes to the calculation of surface temperatures on the basis of energy-flux equilibrium conditions. (orig.)
Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi
2015-01-01
three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
Convection systems and associated cloudiness directly influence regional and local radiation budgets, and dynamics and thermodynamics through feedbacks. However, most subgrid-scale convective parameterizations in regional weather and climate models do not consider cumulus cloud ...
Renormalization-group theory for the eddy viscosity in subgrid modeling
Zhou, Ye; Vahala, George; Hossain, Murshed
1988-01-01
Renormalization-group theory is applied to incompressible three-dimensional Navier-Stokes turbulence so as to eliminate unresolvable small scales. The renormalized Navier-Stokes equation now includes a triple nonlinearity with the eddy viscosity exhibiting a mild cusp behavior, in qualitative agreement with the test-field model results of Kraichnan. For the cusp behavior to arise, not only is the triple nonlinearity necessary but the effects of pressure must be incorporated in the triple term. The renormalized eddy viscosity will not exhibit a cusp behavior if it is assumed that a spectral gap exists between the large and small scales.
A subgrid parameterization scheme for precipitation
Directory of Open Access Journals (Sweden)
S. Turner
2012-04-01
With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, cloud and precipitation formation are still subgrid-scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid-scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection, from which a rain fraction within the model grid is derived. Precipitation of rainwater raises additional concerns, however, relative to the overlap of cloud and rain fractions. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair-weather cumulus (RICO), and tested in a 1-D framework against large-eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
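The core diagnostic of such a scheme, a rain fraction obtained from a prescribed PDF of cloud water and a liquid water threshold for droplet collection, can be sketched as follows. The lognormal shape and all parameter values here are illustrative assumptions, not the paper's actual prescription:

```python
import math

def rain_fraction(mean_qc, sigma, q_threshold):
    """Diagnose a grid-box rain fraction as P(q_c > q_threshold), assuming
    (for illustration) a lognormal sub-grid PDF of cloud water q_c whose
    mean equals the grid-box mean mean_qc. sigma is the log-space spread."""
    if mean_qc <= 0.0:
        return 0.0
    if q_threshold <= 0.0:
        return 1.0
    # Choose the lognormal location so the distribution mean is mean_qc.
    mu = math.log(mean_qc) - 0.5 * sigma**2
    z = (math.log(q_threshold) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)  # upper-tail probability of the lognormal
```

Lowering the collection threshold or raising the mean cloud water increases the diagnosed rain fraction, as one would expect of any PDF-based closure of this kind.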
Koster, Randal D.; Eagleson, Peter S.; Broecker, Wallace S.
1988-03-01
A capability is developed for monitoring tracer water movement in the three-dimensional Goddard Institute for Space Science Atmospheric General Circulation Model (GCM). A typical experiment with the tracer water model follows water evaporating from selected grid squares and determines where this water first returns to the Earth's surface as precipitation or condensate, thereby providing information on the lateral scales of hydrological transport in the GCM. Through a comparison of model results with observations in nature, inferences can be drawn concerning real world water transport. Tests of the tracer water model include a comparison of simulated and observed vertically-integrated vapor flux fields and simulations of atomic tritium transport from the stratosphere to the oceans. The inter-annual variability of the tracer water model results is also examined.
Koster, Randal D.; Eagleson, Peter S.; Broecker, Wallace S.
1988-01-01
A capability is developed for monitoring tracer water movement in the three-dimensional Goddard Institute for Space Science Atmospheric General Circulation Model (GCM). A typical experiment with the tracer water model follows water evaporating from selected grid squares and determines where this water first returns to the Earth's surface as precipitation or condensate, thereby providing information on the lateral scales of hydrological transport in the GCM. Through a comparison of model results with observations in nature, inferences can be drawn concerning real world water transport. Tests of the tracer water model include a comparison of simulated and observed vertically-integrated vapor flux fields and simulations of atomic tritium transport from the stratosphere to the oceans. The inter-annual variability of the tracer water model results is also examined.
An investigation of the sub-grid variability of trace gases and aerosols for global climate modeling
Directory of Open Access Journals (Sweden)
Y. Qian
2010-07-01
One fundamental property and limitation of grid-based models is their inability to identify spatial details smaller than the grid cell size. While decades of work have gone into developing sub-grid treatments for clouds and land surface processes in climate models, the quantitative understanding of sub-grid processes and variability for aerosols and their precursors is much poorer. In this study, WRF-Chem is used to simulate the trace gases and aerosols over central Mexico during the 2006 MILAGRO field campaign, with multiple spatial resolutions and emission/terrain scenarios. Our analysis focuses on quantifying the sub-grid variability (SGV) of trace gases and aerosols within a typical global climate model grid cell, i.e. 75×75 km².
Our results suggest that a simulation with 3-km horizontal grid spacing adequately reproduces the overall transport and mixing of trace gases and aerosols downwind of Mexico City, while 75-km horizontal grid spacing is insufficient to represent local emission and terrain-induced flows along the mountain ridge, subsequently affecting the transport and mixing of plumes from nearby sources. Therefore, the coarse model grid cell average may not correctly represent aerosol properties measured over polluted areas. Probability density functions (PDFs) for trace gases and aerosols show that secondary trace gases and aerosols, such as O₃, sulfate, ammonium, and nitrate, are more likely to have a relatively uniform probability distribution (i.e. smaller SGV) over a narrow range of concentration values. Mostly inert and long-lived trace gases and aerosols, such as CO and BC, are more likely to have broad and skewed distributions (i.e. larger SGV) over polluted regions. Over remote areas, all trace gases and aerosols are more uniformly distributed compared to polluted areas. Both CO and O₃ SGV vertical profiles are nearly constant within the PBL during daytime, indicating that trace gases
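The SGV diagnostic described above amounts to coarse-graining a fine-resolution field and measuring the spread within each coarse cell. A minimal sketch (the block size and the field itself are placeholders, not the study's data):

```python
import numpy as np

def subgrid_variability(field, block):
    """Split a fine-resolution 2-D field into coarse cells of block x block
    fine points, returning the coarse-cell means and the within-cell
    standard deviations. The std is a simple measure of sub-grid
    variability (SGV) as seen by a coarser model grid.
    Assumes the fine grid divides evenly into blocks."""
    ny, nx = field.shape
    assert ny % block == 0 and nx % block == 0
    # Reshape so axes 1 and 3 index the fine points inside each coarse cell.
    tiles = field.reshape(ny // block, block, nx // block, block)
    coarse_mean = tiles.mean(axis=(1, 3))
    sgv = tiles.std(axis=(1, 3))
    return coarse_mean, sgv
```

A uniform field yields zero SGV everywhere, while a field with strong gradients (e.g. a plume edge crossing a coarse cell) yields large SGV in exactly the cells where a coarse-grid average misrepresents the local values.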
Energy Technology Data Exchange (ETDEWEB)
Shen, Jinmei; Arritt, R.W. [Iowa State Univ., Ames, IA (United States)
1996-12-31
The importance of land-atmosphere interactions and the biosphere in climate change studies has long been recognized, and several land-atmosphere interaction schemes have been developed. Among these, the Simple Biosphere scheme (SiB) of Sellers et al. and the Biosphere-Atmosphere Transfer Scheme (BATS) of Dickinson et al. are two of the most widely known. The effects of subgrid-scale inhomogeneities of surface properties in general circulation models have also received increasing attention in recent years. However, due to the complexity of land surface processes and the difficulty of prescribing the large number of parameters that determine atmospheric and soil interactions with vegetation, many previous studies and results seem contradictory. A GCM grid element typically represents an area of 10⁴-10⁶ km². Within such an area there exist variations of soil type, soil wetness, vegetation type, vegetation density and topography, as well as urban areas and water bodies. In this paper, we incorporate both the BATS and SiB2 land surface process schemes into a nonhydrostatic, compressible version of the AMBLE model (Atmospheric Model -- Boundary-Layer Emphasis), and compare the surface heat fluxes and mesoscale circulations calculated using the two schemes. 8 refs., 5 figs.
M. M. Clark; T. H. Fletcher; R. R. Linn
2010-01-01
The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...
A new subgrid characteristic length for turbulence simulations on anisotropic grids
Trias, F. X.; Gorobets, A.; Silvis, M. H.; Verstappen, R. W. C. P.; Oliva, A.
2017-11-01
Direct numerical simulations of the incompressible Navier-Stokes equations are not feasible yet for most practical turbulent flows. Therefore, dynamically less complex mathematical formulations are necessary for coarse-grained simulations. In this regard, eddy-viscosity models for Large-Eddy Simulation (LES) are probably the most popular example thereof. This type of models requires the calculation of a subgrid characteristic length which is usually associated with the local grid size. For isotropic grids, this is equal to the mesh step. However, for anisotropic or unstructured grids, such as the pancake-like meshes that are often used to resolve near-wall turbulence or shear layers, a consensus on defining the subgrid characteristic length has not been reached yet despite the fact that it can strongly affect the performance of LES models. In this context, a new definition of the subgrid characteristic length is presented in this work. This flow-dependent length scale is based on the turbulent, or subgrid stress, tensor and its representations on different grids. The simplicity and mathematical properties suggest that it can be a robust definition that minimizes the effects of mesh anisotropies on simulation results. The performance of the proposed subgrid characteristic length is successfully tested for decaying isotropic turbulence and a turbulent channel flow using artificially refined grids. Finally, a simple extension of the method for unstructured meshes is proposed and tested for a turbulent flow around a square cylinder. Comparisons with existing subgrid characteristic length scales show that the proposed definition is much more robust with respect to mesh anisotropies and has a great potential to be used in complex geometries where highly skewed (unstructured) meshes are present.
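For context, the grid-only definitions that flow-dependent length scales like the one above are compared against can be written down directly. On a pancake-like near-wall cell they disagree strongly, which is exactly the sensitivity to mesh anisotropy that the work targets (a sketch; the paper's own definition additionally uses the subgrid stress tensor and is not reproduced here):

```python
def delta_cube_root(dx, dy, dz):
    """Deardorff's classical subgrid characteristic length:
    cube root of the cell volume."""
    return (dx * dy * dz) ** (1.0 / 3.0)

def delta_max(dx, dy, dz):
    """Maximum cell edge, another common grid-based choice."""
    return max(dx, dy, dz)

# A pancake-like cell of the kind used to resolve near-wall turbulence:
# coarse in x and y, very fine in z.
dx, dy, dz = 1.0, 1.0, 0.01
d_cbrt = delta_cube_root(dx, dy, dz)  # about 0.215
d_max = delta_max(dx, dy, dz)         # 1.0
```

The two standard lengths differ here by a factor of almost five, so an eddy-viscosity model built on either one behaves very differently on the same mesh; this is the ambiguity a flow-dependent definition aims to remove.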
Directory of Open Access Journals (Sweden)
J. Tonttila
2013-08-01
A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description of vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs) even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation and the in-cloud liquid water content is slightly increased after retuning the model.
Birefringent dispersive FDTD subgridding scheme
De Deckere, B; Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries
2016-01-01
A novel 2D finite difference time domain (FDTD) subgridding method is proposed, only subject to the Courant limit of the coarse grid. By making mu or epsilon inside the subgrid dispersive, unconditional stability is induced at the cost of a sparse, implicit set of update equations. By only adding dispersion along preferential directions, it is possible to dramatically reduce the rank of the matrix equation that needs to be solved.
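The motivation is the usual cost of explicit FDTD subgridding: the Courant (CFL) stability limit ties the explicit time step to the smallest cell, so a naive fine subgrid shrinks the step for the whole simulation. A quick sketch of the standard 2D limit illustrates the penalty the dispersive scheme avoids by remaining subject only to the coarse-grid limit:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def courant_dt_2d(dx, dy, c=C0):
    """Largest stable explicit-FDTD time step on a uniform 2-D grid:
    dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2)), the standard CFL limit."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2))

coarse_dt = courant_dt_2d(1e-3, 1e-3)  # 1 mm coarse grid
fine_dt = courant_dt_2d(1e-4, 1e-4)    # 10x refined subgrid
# A conventional explicit update on the subgrid would force the
# roughly 10x smaller step everywhere:
assert fine_dt < coarse_dt
```

The grid sizes are placeholders; the point is only the scaling: refining the mesh by a factor of ten reduces the admissible explicit time step by the same factor, which is what makes unconditionally stable subgrid updates attractive.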
Discontinuous Galerkin Subgrid Finite Element Method for Heterogeneous Brinkman’s Equations
Iliev, Oleg P.
2010-01-01
We present a two-scale finite element method for solving Brinkman's equations with piece-wise constant coefficients. This system of equations models fluid flow in highly porous, heterogeneous media with complex topology of the heterogeneities. We make use of the recently proposed discontinuous Galerkin FEM for Stokes equations by Wang and Ye in [12] and the concept of subgrid approximation developed for Darcy's equations by Arbogast in [4]. In order to reduce the error along the coarse-grid interfaces we have added an alternating Schwarz iteration using patches around the coarse-grid boundaries. We have implemented the subgrid method using the Deal.II FEM library [7], and we present computational results for a number of model problems. © 2010 Springer-Verlag Berlin Heidelberg.
Vreman, A.W.; Oijen, van J.A.; Goey, de L.P.H.; Bastiaans, R.J.M.
2009-01-01
Large-eddy simulation (LES) of turbulent combustion with premixed flamelets is investigated in this paper. The approach solves the filtered Navier-Stokes equations supplemented with two transport equations, one for the mixture fraction and another for a progress variable. The LES premixed flamelet
International Nuclear Information System (INIS)
Laval, Jean Philippe
1999-01-01
We developed a turbulence model based on an asymptotic development of the Navier-Stokes equations under the hypothesis of non-local interactions at small scales. This model provides expressions for the turbulent Reynolds sub-grid stresses via estimates of the sub-grid velocities, rather than velocity correlations as is usually done. The model involves the coupling of two dynamical equations: one for the resolved scales of motion, which depends upon the Reynolds stresses generated by the sub-grid motions, and one for the sub-grid scales of motion, which can be used to compute the sub-grid Reynolds stresses. The non-locality of interactions at sub-grid scales allows their evolution to be modeled with a linear inhomogeneous equation, where the forcing occurs via the energy cascade from resolved to sub-grid scales. This model was solved using a decomposition of the sub-grid scales on Gabor modes and implemented numerically in 2D with periodic boundary conditions. A particle-in-cell (PIC) method was used to compute the sub-grid scales. The results were compared with results of direct simulations for several typical flows. The model was also applied to plane parallel flows. An analytical study of the equations allows a description of mean velocity profiles in agreement with experimental results and with theoretical results based on the symmetries of the Navier-Stokes equation. Possible applications and improvements of the model are discussed in the conclusion. (author) [fr]
Maher, G.D.; Hulshoff, S.J.
2014-01-01
The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain
Bellan, J.; Okongo, N.
2000-01-01
A study of entropy production by emerging turbulent scales is conducted for a supercritical shear layer, as a precursor to the eventual modeling of subgrid scales (from a turbulent state) leading to large eddy simulations.
Yang, Fanglin; Schlesinger, Michael E.; Andranova, Natasha; Zubov, Vladimir A.; Rozanov, Eugene V.; Callis, Lin B.
2003-01-01
The sensitivity of the middle atmospheric temperature and circulation to the treatment of mean-flow forcing due to breaking gravity waves was investigated using the University of Illinois at Urbana-Champaign 40-layer Mesosphere-Stratosphere-Troposphere General Circulation Model (MST-GCM). Three GCM experiments were performed. The gravity-wave forcing was represented first by Rayleigh friction, and then by the Alexander and Dunkerton (AD) parameterization with weak and strong breaking effects of gravity waves. In all experiments, the Palmer et al. parameterization was included to treat the breaking of topographic gravity waves in the troposphere and lower stratosphere. Overall, the experiment with the strong breaking effect best simulates the middle atmospheric temperature and circulation. With Rayleigh friction and the weak breaking effect, a large warm bias of up to 60°C was found in the summer upper mesosphere and lower thermosphere. This warm bias was linked to the inability of the GCM to simulate the reversal of the zonal winds from easterly to westerly across the mesopause in the summer hemisphere. With the strong breaking effect, the GCM was able to simulate this reversal and essentially eliminated the warm bias. This improvement was the result of a much stronger meridional transport circulation that possesses a strong vertical ascending branch in the summer upper mesosphere, and hence large adiabatic cooling. Budget analysis indicates that in the middle atmosphere the forces that act to maintain a steady zonal-mean zonal wind are primarily those associated with the meridional transport circulation and breaking gravity waves. Contributions from the interaction of the model-resolved eddies with the mean flow are small. To obtain a transport circulation in the mesosphere of the UIUC MST-GCM that is strong enough to produce the observed cold summer mesopause, gravity-wave forcing larger than 100 m/s/day in magnitude is required near the summer mesopause. In
Collaborative Research: Lagrangian Modeling of Dispersion in the Planetary Boundary Layer
National Research Council Canada - National Science Library
Weil, Jeffrey
2003-01-01
...), using Lagrangian "particle" models coupled with large-eddy simulation (LES) fields. A one-particle model for the mean concentration field was enhanced by a theoretically improved treatment of the LES subgrid-scale (SGS) velocities...
Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.
2017-12-01
Most Earth system models (ESM) average over considerable sub-grid heterogeneity in land surface properties, and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimations in ET imply greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties, and also neglect lateral transfer of water across the heterogeneous landscapes at global scale. We use a Budyko framework to express ET as a function of P and PET to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer could affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at global scale shows that the effects of sub-grid heterogeneity will be most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use the Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate it into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for existing free drainage lower boundary condition in GLEAM and quantify whether, and how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at global scale.
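The overestimation mechanism described above can be illustrated with one classical form of the Budyko curve, ET = P·sqrt(φ·tanh(1/φ)·(1 − e^(−φ))) with φ = PET/P: because the curve is concave, computing ET from grid-averaged P and PET exceeds the average of ET computed cell by cell. The specific curve and the numbers below are illustrative, not the study's:

```python
import math

def budyko_et(p, pet):
    """Budyko-curve estimate of evapotranspiration:
    ET = P * sqrt(phi * tanh(1/phi) * (1 - exp(-phi))), phi = PET/P.
    (One classical form of the curve; the study may use a variant.)"""
    phi = pet / p
    return p * math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

# Two sub-grid cells where P and PET are inversely correlated, e.g. a wet
# mountain flank next to a dry valley (illustrative numbers):
cells = [(2.0, 0.5), (0.5, 2.0)]

true_mean_et = sum(budyko_et(p, pet) for p, pet in cells) / len(cells)
p_bar = sum(p for p, _ in cells) / len(cells)
pet_bar = sum(pet for _, pet in cells) / len(cells)
lumped_et = budyko_et(p_bar, pet_bar)  # what a coarse ESM cell computes

# Averaging over sub-grid heterogeneity overestimates ET:
assert lumped_et > true_mean_et
```

With these numbers the lumped estimate is roughly 0.87 versus a true sub-grid mean of about 0.48, showing how the bias is largest precisely where P and PET are anti-correlated across the landscape, as in steep terrain.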
Energy Technology Data Exchange (ETDEWEB)
Song, Hua [Joint Center for Earth Systems Technology, University of Maryland, Baltimore County, Baltimore, Maryland; Zhang, Zhibo [Joint Center for Earth Systems Technology, and Physics Department, University of Maryland, Baltimore County, Baltimore, Maryland; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Wang, Minghuai [Institute for Climate and Global Change Research, and School of Atmospheric Sciences, Nanjing University, Nanjing, China
2018-03-01
This paper presents a two-step evaluation of marine boundary layer (MBL) cloud properties from two Community Atmospheric Model (version 5.3, CAM5) simulations, one based on the CAM5 standard parameterization schemes (CAM5-Base) and the other on the Cloud Layers Unified By Binormals (CLUBB) scheme (CAM5-CLUBB). In the first step, we compare the cloud properties directly from model outputs between the two simulations. We find that the CAM5-CLUBB run produces more MBL clouds in the tropical and subtropical large-scale descending regions. Moreover, the stratocumulus (Sc) to cumulus (Cu) cloud regime transition is much smoother in CAM5-CLUBB than in CAM5-Base. In addition, in CAM5-Base we find some grid cells with very small low cloud fraction (<20%) but very high in-cloud water content (mixing ratio up to 400 mg/kg). We find no such grid cells in the CAM5-CLUBB run. However, we also note that both simulations, especially CAM5-CLUBB, produce a significant number of “empty” low cloud cells with significant cloud fraction (up to 70%) and near-zero in-cloud water content. In the second step, we use satellite observations from CERES, MODIS and CloudSat to evaluate the simulated MBL cloud properties by employing the COSP satellite simulators. We note that a feature of the COSP-MODIS simulator that mimics the minimum detection threshold of MODIS cloud masking removes many more low clouds from CAM5-CLUBB than from CAM5-Base. This leads to a surprising result: in the large-scale descending regions CAM5-CLUBB has a smaller COSP-MODIS cloud fraction and weaker shortwave cloud radiative forcing than CAM5-Base. A sensitivity study suggests that this is because CAM5-CLUBB suffers more from the above-mentioned “empty” cloud issue than CAM5-Base. The COSP-MODIS cloud droplet effective radius in CAM5-CLUBB shows a spatial increase from coastal St toward Cu, which is in qualitative agreement with MODIS observations. In contrast, COSP-MODIS cloud droplet
Directory of Open Access Journals (Sweden)
Christian Beer
2016-08-01
There are massive carbon stocks stored in permafrost-affected soils due to the 3-D soil movement process called cryoturbation. For a reliable projection of the past, recent and future Arctic carbon balance, and hence climate, a reliable concept for representing cryoturbation in a land surface model (LSM) is required. The basis of the underlying transport processes is pedon-scale heterogeneity of soil hydrological and thermal properties, as well as insulating layers such as snow and vegetation. Today we still lack a concept of how to reliably represent pedon-scale properties and processes in an LSM. One possibility could be a statistical approach. This perspective paper demonstrates the importance of sub-grid heterogeneity in permafrost soils as a prerequisite to implementing any lateral transport parameterization. Representing such heterogeneity at the sub-pixel size of an LSM is the next logical step of model advancement. As a result of a theoretical experiment, heterogeneity of thermal and hydrological soil properties alone leads to a remarkable initial sub-grid range of 2°C in subsoil temperature and 150 cm in active-layer thickness in East Siberia. These results show the way forward in representing combined lateral and vertical transport of water and soil in LSMs.
Optimal 25-Point Finite-Difference Subgridding Techniques for the 2D Helmholtz Equation
Directory of Open Access Journals (Sweden)
Tingting Wu
2016-01-01
We present an optimal 25-point finite-difference subgridding scheme for solving the 2D Helmholtz equation with a perfectly matched layer (PML). This scheme is second order in accuracy and pointwise consistent with the equation. Subgrids are used to discretize the computational domain, including the interior domain and the PML. For the transitional nodes in the interior domain, the finite difference equation is formulated with ghost nodes, and its weight parameters are chosen by a refined choice strategy based on minimizing the numerical dispersion. Numerical experiments are given to illustrate that the newly proposed scheme can produce highly accurate seismic modeling results with enhanced efficiency.
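As a point of reference for the abstract above, the standard second-order 5-point discretization that such optimal 25-point schemes improve upon can be sketched in a few lines. Everything below (grid size, wavenumber, the manufactured solution, Dirichlet boundaries instead of a PML) is an illustrative assumption, not the scheme of the paper:

```python
# Baseline 5-point finite-difference solver for the 2-D Helmholtz equation
#   u_xx + u_yy + k^2 u = f   on (0,1)^2, u = 0 on the boundary,
# verified against the manufactured solution u = sin(pi x) sin(pi y).
import numpy as np

def solve_helmholtz(n, k):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = (k**2 - 2 * np.pi**2) * u_exact        # since lap(u) = -2 pi^2 u

    # Assemble the 5-point operator as a dense matrix (fine for small n).
    N = n * n
    A = np.zeros((N, N))
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            r = idx(i, j)
            A[r, r] = -4.0 / h**2 + k**2
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[r, idx(ii, jj)] = 1.0 / h**2
    u = np.linalg.solve(A, f.ravel()).reshape(n, n)
    return u, u_exact

u, u_exact = solve_helmholtz(n=40, k=5.0)
err = np.abs(u - u_exact).max()
print(err)  # second-order scheme: error shrinks like O(h^2)
```

The 25-point scheme of the paper reduces numerical dispersion at a given grid density; this baseline only shows the structure such schemes refine.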
International Nuclear Information System (INIS)
Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.
1989-01-01
Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed on the large ZZ 8000 testing machine (maximum load 80 MN) at the SKODA WORKS. Results are described from testing the resistance of materials to non-ductile fracture, covering both base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs
Small scale models equal large scale savings
International Nuclear Information System (INIS)
Lee, R.; Segroves, R.
1994-01-01
A physical scale model of a reactor is a tool that can be used to reduce the time spent by workers in the containment during an outage, and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented for the La Salle 2 and Dresden 1 and 2 BWRs. In each case the cost-effectiveness and exposure reduction due to the use of a scale model are demonstrated. (UK)
Workshop on Human Activity at Scale in Earth System Models
Energy Technology Data Exchange (ETDEWEB)
Allen, Melissa R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Coletti, Mark A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kennedy, Joseph H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nair, Sujithkumar S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Omitaomu, Olufemi A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-01-26
Changing human activity within a geographical location may have significant influence on the global climate, but that activity must be parameterized in such a way as to allow these high-resolution sub-grid processes to affect global climate within the modeling framework. Additionally, we must have tools that provide decision support and inform local and regional policies regarding mitigation of and adaptation to climate change. The development of next-generation earth system models that can produce actionable results with minimal uncertainties depends on understanding global climate change and human activity interactions at policy implementation scales. Unfortunately, at best we currently have only limited schemes for relating high-resolution sectoral emissions to real-time weather, which ultimately becomes part of larger regional processes and the well-mixed atmosphere. Moreover, even our understanding of meteorological processes at these scales is imperfect. This workshop addresses these shortcomings by providing a forum for discussion of what we know about these processes, what we can model, where we have gaps in these areas, and how we can rise to the challenge to fill these gaps.
International Nuclear Information System (INIS)
Rutqvist, J.
2004-01-01
This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes for seepage, and support the model reports ''Seepage Model for PA Including Drift Collapse'' and ''Abstraction of Drift Seepage'', and arguments for exclusion of features, events, and processes (FEPs) in the analysis reports ''Features, Events, and Processes in Unsaturated Zone Flow and Transport'' and ''Features, Events, and Processes: Disruptive Events''. The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the coupled THM processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because a sufficient amount of water must be available within a
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
Disinformative data in large-scale hydrological modelling
Directory of Open Access Journals (Sweden)
A. Kauffeldt
2013-07-01
Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
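The two water-balance checks described in this abstract can be sketched as a toy screening routine. The basin records and thresholds below are invented for illustration, not from the study's datasets:

```python
# Flag basins whose runoff coefficient Q/P is implausibly high (possible
# precipitation undercatch) or whose losses P - Q exceed the
# potential-evaporation limit (water balance cannot close).
basins = [
    # name, precipitation P, discharge Q, potential evaporation Ep (mm/yr)
    ("A", 800.0, 300.0, 900.0),
    ("B", 400.0, 450.0, 700.0),   # Q > P: runoff coefficient above 1
    ("C", 900.0, 100.0, 600.0),   # P - Q = 800 > Ep: impossible losses
]

def screen(basins, rc_max=1.0):
    flags = {}
    for name, P, Q, Ep in basins:
        problems = []
        if Q / P > rc_max:
            problems.append("runoff coefficient > %.1f" % rc_max)
        if P - Q > Ep:
            problems.append("losses exceed potential evaporation")
        flags[name] = problems
    return flags

flags = screen(basins)
print(flags)  # only basin "A" passes both checks
```

A real screening would work on gridded climate forcing aggregated over flow-network basin masks, but the closure tests themselves are this simple.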
Energy Technology Data Exchange (ETDEWEB)
Toutant, A
2006-12-15
The complex interactions between interfaces and turbulence strongly impact the flow properties. Unfortunately, Direct Numerical Simulations (DNS) have to entail a number of degrees of freedom proportional to the third power of the Reynolds number to correctly describe the flow behaviour. This extremely hard constraint makes it impossible to use DNS for industrial applications. Our strategy consists of using and improving the DNS method in order to develop the Interfaces and Sub-grid Scales (ISS) concept. ISS is a two-phase equivalent to the single-phase Large Eddy Simulation (LES) concept. The challenge of ISS is to integrate the two-way coupling phenomenon into sub-grid models. Applying a space filter, we have exhibited correlations, or sub-grid terms, that require closures. We have shown that, in two-phase flows, the presence of a discontinuity leads to specific sub-grid terms. Comparing the maximum of the norm of the sub-grid terms with the maximum of the norm of the advection tensor, we have found that the sub-grid terms related to interfacial forces and viscous effects are negligible. Consequently, in the momentum balance, only the sub-grid terms related to inertia have to be closed. Thanks to a priori tests performed on several DNS data sets, we demonstrate that the scale similarity hypothesis, reinterpreted near a discontinuity, provides sub-grid models that take into account the two-way coupling phenomenon. These models correspond to the first step of our work. Indeed, in this step, interfaces are smooth and interactions between interfaces and turbulence occur in a transition zone where each physical variable varies sharply but continuously. The next challenge has been to determine the jump conditions across the sharp equivalent interface corresponding to the sub-grid models of the transition zone. We have used the matched asymptotic expansion method to obtain the jump conditions. The first tests on the velocity of the sharp equivalent interface are very promising. (author)
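The a priori testing of the scale-similarity hypothesis mentioned above can be illustrated in a single-phase, one-dimensional toy setting (not the two-phase ISS closures of the thesis): the exact subfilter term is compared against a similarity estimate built from the already filtered field. The signal, filter width and spectrum below are illustrative assumptions:

```python
# A priori test of the scale-similarity idea: correlate the exact subfilter
# term tau = bar(uu) - bar(u)bar(u) with the similarity estimate computed
# from the resolved (filtered) field only.
import numpy as np

def box_filter(u, w):
    """Periodic top-hat filter of width w grid points (w odd)."""
    return sum(np.roll(u, s) for s in range(-(w // 2), w // 2 + 1)) / w

rng = np.random.default_rng(1)
n, w = 512, 9
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
# Broadband periodic "velocity" signal with random phases.
u = sum(np.cos(k * x + rng.uniform(0, 2 * np.pi)) / k for k in range(1, 65))

ub = box_filter(u, w)
tau_exact = box_filter(u * u, w) - ub * ub                  # exact subfilter term
tau_sim = box_filter(ub * ub, w) - box_filter(ub, w) ** 2   # similarity estimate
corr = np.corrcoef(tau_exact, tau_sim)[0, 1]
print(corr)
```

The positive pointwise correlation is what a priori tests of similarity-type models measure; the thesis's contribution is reinterpreting this hypothesis near a discontinuity, which this single-phase sketch does not attempt.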
Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry
2017-07-01
Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF
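The core scaling idea can be sketched as follows: local water retention curves are collapsed onto one reference curve by scaling the pressure head with a local factor, in the spirit of Miller-Miller similarity. The Mualem-van Genuchten parameter values and scaling factors below are invented for illustration, not taken from the ROSETTA PTF:

```python
# Sub-grid water retention curves (WRCs) that differ only by a similarity
# factor collapse onto a reference curve when the pressure head is scaled.
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """Mualem-van Genuchten water retention curve; h is suction head > 0."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * h) ** n) ** (-m)

h = np.logspace(-1, 3, 200)                       # suction heads (cm)
ref = dict(theta_r=0.05, theta_s=0.40, alpha=0.02, n=1.8)

# Three "sub-grid" soils identical up to a factor lam on alpha:
# theta_i(h) = theta_ref(lam * h), so scaling h by lam collapses them.
for lam in (0.5, 1.0, 2.0):
    local = vg_theta(h, ref["theta_r"], ref["theta_s"], lam * ref["alpha"], ref["n"])
    collapsed = vg_theta(lam * h, **ref)
    assert np.allclose(local, collapsed)
print("sub-grid WRCs collapse onto the reference curve under head scaling")
```

The relaxed Warrick form fits such local scaling factors even when the sub-grid soils are not exactly similar, which is what the global data set described above provides.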
International Symposia on Scale Modeling
Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori
2015-01-01
This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical processes in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale; offers engineers and designers a new point of view, liberating creative and inno...
Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow
Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.
2004-01-01
The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike
Stoy, Paul C; Quaife, Tristan
2015-01-01
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca 2% from that created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
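The regularized downscaling described in this abstract can be sketched as a small linear problem: recover a fine-scale field from its coarse block means by minimizing a data-misfit term plus γ times a 2-D smoothness penalty. The field, grid sizes and γ below are synthetic illustrations, not the NDVI data or tuning of the study:

```python
# Two-dimensional Tikhonov regularization (2DTR) downscaling sketch:
#   x* = argmin ||A x - b||^2 + gamma * ||L x||^2
# where A block-averages the fine field and L is a discrete Laplacian.
import numpy as np

nf, blk = 8, 2                 # fine grid 8x8, coarse blocks 2x2
nc = nf // blk
xs = np.linspace(0, 1, nf)
truth = np.sin(2 * np.pi * xs)[:, None] + np.cos(2 * np.pi * xs)[None, :]

# A: block-averaging operator (nc*nc rows, nf*nf columns).
A = np.zeros((nc * nc, nf * nf))
for I in range(nc):
    for J in range(nc):
        for i in range(I * blk, (I + 1) * blk):
            for j in range(J * blk, (J + 1) * blk):
                A[I * nc + J, i * nf + j] = 1.0 / blk**2
b = A @ truth.ravel()          # observed coarse means

# L: 2-D Laplacian with reflecting boundaries (row sums are zero).
L = np.zeros((nf * nf, nf * nf))
for i in range(nf):
    for j in range(nf):
        r = i * nf + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii = min(max(i + di, 0), nf - 1)
            jj = min(max(j + dj, 0), nf - 1)
            L[r, ii * nf + jj] += 1.0
            L[r, r] -= 1.0

gamma = 1e-3
x = np.linalg.solve(A.T @ A + gamma * L.T @ L, A.T @ b).reshape(nf, nf)
coarse_err = np.abs(A @ x.ravel() - b).max()
print(coarse_err)  # fine field honours the coarse means up to regularization
```

Larger γ trades fidelity to the coarse means for smoothness, which is the knob the study relates to the semivariogram range.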
A dynamic globalization model for large eddy simulation of complex turbulent flow
Energy Technology Data Exchange (ETDEWEB)
Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)
2005-07-01
A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model by Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow but the optimal model coefficient is far from universal. Dynamic procedures of determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and viscous dissipation. An important feature of the proposed procedures is that the model coefficient determined is globally constant in space but varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreement with previous results.
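For readers unfamiliar with the class of closures these dynamic procedures calibrate, the simplest member, the Smagorinsky eddy viscosity, can be evaluated on a synthetic 2-D field. The fixed coefficient C = 0.17 is a conventional textbook value, standing in for the dynamically determined (Vreman-based) coefficient of the paper:

```python
# Smagorinsky eddy viscosity nu_t = (C * Delta)^2 * |S| on a periodic grid,
# with |S| = sqrt(2 S_ij S_ij) from central differences.
import numpy as np

def smagorinsky_nu_t(u, v, dx, C=0.17):
    dudx = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)
    dudy = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)
    dvdx = (np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
    dvdy = (np.roll(v, -1, 1) - np.roll(v, 1, 1)) / (2 * dx)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2 * (S11**2 + S22**2 + 2 * S12**2))
    return (C * dx) ** 2 * S_mag      # filter width taken equal to dx

n = 64
dx = 2 * np.pi / n
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)   # Taylor-Green vortex
nu_t = smagorinsky_nu_t(u, v, dx)
print(nu_t.min(), nu_t.max())   # non-negative by construction
```

A dynamic procedure replaces the fixed C with a coefficient computed from the resolved field itself; in the paper's global variant, that coefficient is spatially constant and varies only in time.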
Modelling the atmospheric dispersion of foot-and-mouth disease virus for emergency preparedness
DEFF Research Database (Denmark)
Sørensen, J.H.; Jensen, C.O.; Mikkelsen, T.
2001-01-01
A model system for simulating airborne spread of foot-and-mouth disease (FMD) is described. The system includes a virus production model and the local- and mesoscale atmospheric dispersion model RIMPUFF linked to the LINCOM local-scale flow model. LINCOM is used to calculate the sub-grid scale flow...
Scale modelling in LMFBR safety
International Nuclear Information System (INIS)
Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.
1979-01-01
This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratio of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of a material for the model is one with the same density and stress-strain relationship as the prototype material at the operating temperature. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used, to assure that heat transfer between high-temperature loading sources and the coolant simulant, and non-equilibrium effects in two-phase sources, are accounted for. For the geometry and flow conditions of interest, the
Effect of LES models on the entrainment of a passive scalar in a turbulent planar jet
Chambel Lopes, Diogo; da Silva, Carlos; Reis, Ricardo; Raman, Venkat
2011-11-01
Direct and large-eddy simulations (DNS/LES) of turbulent planar jets are used to study the role of subgrid-scale models in the integral characteristics of the passive scalar mixing in a jet. Specifically the effect of subgrid-scale models in the jet spreading rate and centreline passive scalar decay rates are assessed and compared. The modelling of the subgrid-scale fluxes is particularly challenging in the turbulent/nonturbulent (T/NT) region that divides the two regions in the jet flow: the outer region where the flow is irrotational and the inner region where the flow is turbulent. It has been shown that important Reynolds stresses exist near the T/NT interface and that these stresses determine in part the mixing and combustion rates in jets. The subgrid scales of motion near the T/NT interface are far from equilibrium and contain an important fraction of the total kinetic energy. Model constants used in several subgrid-scale models such as the Smagorinsky and the gradient models need to be corrected near the jet edge. The procedure used to obtain the dynamic Smagorinsky constant is not able to cope with the intermittent nature of this region.
Zampieri, Matteo
2012-02-01
Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in the land-vegetation-atmosphere interactions and the ecosystem dynamics. In regional-scale climate applications land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the effect from losing streams to groundwater. Through the analysis of observed soil moisture data from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS we provide evidence that the regional-scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (i.e., Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM, showing the ability to reproduce the soil moisture and surface temperature spatial variability that relates to the river distribution at the regional scale. The CLM with this new parameterization is used to evaluate impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.
Wang, J.; van der Hoef, Martin Anton; Kuipers, J.A.M.
2010-01-01
Two-fluid modeling of the hydrodynamics of industrial-scale gas-fluidized beds remains a long-standing challenge for both engineers and scientists. In this study, we suggest a simple method to modify currently available drag correlations to allow for the effect of unresolved sub-grid scale
Gasda, Sarah E.
2012-07-01
Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.
Global scale groundwater flow model
Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc
2013-04-01
As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.
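The building block of such a MODFLOW-type model is a cell-centred finite-difference solve for steady-state heads. A minimal one-dimensional sketch, with illustrative numbers rather than the global model's inputs: a confined aquifer of transmissivity T between two fixed-head boundaries receiving uniform recharge R, governed by T h'' = -R:

```python
# Steady-state groundwater heads between two rivers, finite differences,
# checked against the exact parabolic solution.
import numpy as np

def steady_heads(n, Lx, T, R, h0, h1):
    dx = Lx / (n + 1)
    A = np.zeros((n, n))
    rhs = np.full(n, -R * dx**2 / T)
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    rhs[0] -= h0          # fixed-head (Dirichlet) boundaries folded into rhs
    rhs[-1] -= h1
    return np.linalg.solve(A, rhs)

# illustrative values: 1 km aquifer, T = 50 m^2/d, R = 1 mm/d
n, Lx, T, R, h0, h1 = 99, 1000.0, 50.0, 1e-3, 10.0, 5.0
h = steady_heads(n, Lx, T, R, h0, h1)
x = np.linspace(Lx / (n + 1), Lx - Lx / (n + 1), n)
h_exact = h0 + (h1 - h0) * x / Lx + R * x * (Lx - x) / (2 * T)
print(np.abs(h - h_exact).max())  # FD is exact for this quadratic solution
```

The global model solves the same kind of linear system in two dimensions per aquifer layer, with heterogeneous properties from the lithological data and recharge from PCR-GLOBWB.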
Holographic models with anisotropic scaling
Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.
2013-12-01
We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.
A scale invariance criterion for LES parametrizations
Directory of Open Access Journals (Sweden)
Urs Schaefer-Rolffs
2015-01-01
Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant, and proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.
Autonomous Operation of Hybrid Microgrid With AC and DC Subgrids
DEFF Research Database (Denmark)
Chiang Loh, Poh; Li, Ding; Kang Chai, Yi
2013-01-01
This paper investigates power-sharing issues of an autonomous hybrid microgrid. Unlike existing microgrids, which are purely ac, the hybrid microgrid studied here comprises dc and ac subgrids interconnected by power electronic interfaces. The main challenge is to manage power flows among all sources distributed throughout the two types of subgrids, which is certainly tougher than previous efforts developed for only an ac or a dc microgrid. This wider scope of control has not yet been investigated, and would certainly rely on the coordinated operation of dc sources, ac sources, and interlinking converters. Suitable control and normalization schemes are now developed for controlling them, with the overall hybrid microgrid performance verified in simulation and experiment.
Wind Farm parametrization in the mesoscale model WRF
DEFF Research Database (Denmark)
Volker, Patrick; Badger, Jake; Hahmann, Andrea N.
2012-01-01
… but are parametrized as another sub-grid scale process. In order to appropriately capture the wind farm wake recovery and its direction, two properties are important, among others: the total energy extracted by the wind farm and its velocity deficit distribution. In the considered parametrization the individual turbines produce a thrust dependent on the background velocity. For the sub-grid scale velocity deficit, the entrainment from the free atmospheric flow into the wake region, which is responsible for the expansion, is taken into account. Furthermore, since the model horizontal distance is several times … the extracted force is proportional to the turbine area interfacing a grid cell. The sub-grid scale wake expansion is achieved by adding turbulence kinetic energy (proportional to the extracted power) to the flow. The validity of both wind farm parametrizations has been verified against observational data. …
Energy Technology Data Exchange (ETDEWEB)
Larson, Vincent E.
2015-02-21
This is a final report for a SciDAC grant supported by BER. The project implemented a novel technique for coupling small-scale dynamics and microphysics into a community climate model. The technique uses subcolumns that are sampled in Monte Carlo fashion from a distribution of subgrid variability. The resulting global simulations show several improvements over the status quo.
A multi scale model for small scale plasticity
International Nuclear Information System (INIS)
Zbib, Hussein M.
2002-01-01
A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern dislocation motion and its interaction with various defects and interfaces. In particular, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear bands.
Puff-on-cell model for computing pollutant transport and diffusion
International Nuclear Information System (INIS)
Sheih, C.M.
1975-01-01
Most finite-difference methods of modeling pollutant dispersion have been shown to introduce numerical pseudodiffusion, which can be much larger than the true diffusion in the fluid flow and can even generate negative values in the predicted pollutant concentrations. Two attempts to minimize the effect of pseudodiffusion are discussed, with emphasis on the particle-in-cell (PIC) method of Sklarew. This paper describes a method that replaces Sklarew's numerous particles in a grid volume by parameterizing the subgrid-scale concentration with a single Gaussian puff, thus avoiding the computation of moments required in the model of Egan and Mahoney.
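The puff idea described above lends itself to a compact sketch. The following is a minimal illustration of the concentration field carried by a single Gaussian puff, not the paper's actual code; the argument names and units are assumptions:

```python
import numpy as np

def puff_concentration(x, y, z, q, center, sigma):
    """Concentration at (x, y, z) due to one Gaussian puff.

    q      : pollutant mass carried by the puff (illustrative units)
    center : (xc, yc, zc) puff centroid, advected by the resolved wind
    sigma  : (sx, sy, sz) puff spreads, grown over time by turbulent diffusion
    """
    xc, yc, zc = center
    sx, sy, sz = sigma
    norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)  # normalization conserves mass q
    return norm * np.exp(-0.5 * (((x - xc) / sx) ** 2
                                 + ((y - yc) / sy) ** 2
                                 + ((z - zc) / sz) ** 2))
```

Because the puff is an analytic profile rather than a cloud of discrete particles, subgrid-scale concentration is represented without the moment bookkeeping the abstract mentions.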
Scaling laws for modeling nuclear reactor systems
International Nuclear Information System (INIS)
Nahavandi, A.N.; Castellana, F.S.; Moradkhanian, E.N.
1979-01-01
Scale models are used to predict the behavior of nuclear reactor systems during normal and abnormal operation as well as under accident conditions. Three types of scaling procedures are considered: time-reducing, time-preserving volumetric, and time-preserving idealized model/prototype. The necessary relations between the model and the full-scale unit are developed for each scaling type. Based on these relationships, it is shown that scaling procedures can lead to distortion in certain areas, which are discussed. It is advised that, depending on the specific unit to be scaled, a suitable procedure be chosen to minimize model-prototype distortion.
On the scale similarity in large eddy simulation. A proposal of a new model
International Nuclear Information System (INIS)
Pasero, E.; Cannata, G.; Gallerano, F.
2004-01-01
Among the most common LES models in the literature are the eddy-viscosity-type models. In these models the subgrid-scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e. they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and the SGS stress tensor are aligned; and they assume that a local balance exists between SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy-viscosity-type models. SSM models, such as those of Bardina et al. and Liu et al., assume that scales adjacent in wave-number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e. they are not able to ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that this drawback is due to the fact that such models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale similarity LES model that is able to grant an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and of the SGS kinetic energy (computed by solving its balance equation).
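The modified Leonard tensor at the heart of scale-similarity models can be computed from the resolved field alone. The sketch below illustrates the definition L_ij = bar(u_i u_j) - bar(u_i) bar(u_j) on a 2D periodic field; the simple top-hat filter and the function names are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def box_filter(f, w=1):
    """Separable top-hat test filter of width 2*w+1 points (periodic
    boundaries), a simple stand-in for the filters used in SSM models."""
    for axis in (0, 1):
        f = sum(np.roll(f, s, axis=axis) for s in range(-w, w + 1)) / (2 * w + 1)
    return f

def modified_leonard(u, v):
    """Modified Leonard tensor L_ij = bar(u_i u_j) - bar(u_i) bar(u_j)
    for a 2D resolved velocity field (u, v)."""
    comps = {'u': u, 'v': v}
    return {a + b: box_filter(comps[a] * comps[b])
                   - box_filter(comps[a]) * box_filter(comps[b])
            for a in comps for b in comps}
```

A model that aligns the SGS stress with this tensor, as the abstract proposes, would then scale it by a coefficient built from its trace and the SGS kinetic energy.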
Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows
Rahman, Mustafa M.; Samtaney, Ravi
2017-01-01
layer employs stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter free and involves no active filtering of the computed
Spatial scale separation in regional climate modelling
Energy Technology Data Exchange (ETDEWEB)
Feser, F.
2005-07-01
In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea behind this is that global and regional climate models have their best performance at different spatial scales. Therefore the regional model should not alter the global model's results at large scales. The concept of nudging of large scales, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the different simulations compared to the standard-approach ensemble, which occasionally shows large differences between individual realisations. For climate hindcasts this method leads to results which are on average closer to observed states than the standard approach. The analysis of the regional climate model simulation can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than analysing unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter
Modeling and simulation with operator scaling
Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan
2010-01-01
Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...
Structure and modeling of turbulence
International Nuclear Information System (INIS)
Novikov, E.A.
1995-01-01
The "vortex strings" scale l_s ~ L Re^(-3/10) (L: external scale, Re: Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES)
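The suggested grid scale sits between the external scale and the Kolmogorov dissipation scale. A quick numerical illustration (assuming order-one prefactors, which the scaling argument itself does not fix):

```python
def vortex_string_scale(L, Re):
    """Vortex-string scale l_s ~ L * Re**(-3/10), suggested above as an
    LES grid scale (prefactor of unity assumed for illustration)."""
    return L * Re ** (-3.0 / 10.0)

def kolmogorov_scale(L, Re):
    """Kolmogorov dissipation scale eta ~ L * Re**(-3/4), for comparison."""
    return L * Re ** (-3.0 / 4.0)
```

For any Re > 1 the vortex-string scale is much coarser than the Kolmogorov scale, which is what makes it attractive as a grid scale: far fewer degrees of freedom than a direct numerical simulation would require.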
Modelling of rate effects at multiple scales
DEFF Research Database (Denmark)
Pedersen, R.R.; Simone, A.; Sluys, L. J.
2008-01-01
At the macro- and meso-scales a rate-dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.
Significant uncertainty in global scale hydrological modeling from precipitation data errors
Sperna Weiland, Frederiek C.; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.
2015-10-01
In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we focus on large-scale hydrologic modeling and analyze the effect of parameter and rainfall data uncertainty on simulated discharge dynamics with the global hydrologic model PCR-GLOBWB. We use three rainfall data products: the CFSR reanalysis, the ERA-Interim reanalysis, and a combined ERA-40 reanalysis and CRU dataset. Parameter uncertainty is derived from Latin Hypercube Sampling (LHS) using monthly discharge data from five of the largest river systems in the world. Our results demonstrate that the default parameterization of PCR-GLOBWB, derived from global datasets, can be improved by calibrating the model against monthly discharge observations. Yet, it is difficult to find a single parameterization of PCR-GLOBWB that works well for all of the five river basins considered herein and shows consistent performance during both the calibration and evaluation periods. Still, there may be possibilities for regionalization based on catchment similarities. Our simulations illustrate that parameter uncertainty constitutes only a minor part of predictive uncertainty. Thus, the apparent dichotomy between simulations of global-scale hydrologic behavior and actual data cannot be resolved by simply increasing the model complexity of PCR-GLOBWB and resolving sub-grid processes. Instead, it would be more productive to improve the characterization of global rainfall amounts at spatial resolutions of 0.5° and smaller.
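Latin Hypercube Sampling, used above to characterize parameter uncertainty, stratifies each parameter's range so that every stratum is sampled exactly once. A minimal sketch (the function name and interface are illustrative, not from the PCR-GLOBWB toolchain):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Latin Hypercube Sample: each parameter range is split into n_samples
    equal-probability strata, one value is drawn per stratum, and the strata
    are then shuffled independently for each parameter."""
    rng = np.random.default_rng(seed)
    n_dims = len(bounds)
    # Stratified uniform draws in [0, 1): row i falls in stratum [i/n, (i+1)/n)
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):                      # decorrelate the parameters
        u[:, j] = u[rng.permutation(n_samples), j]
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    return lo + u * (hi - lo)
```

Compared with plain Monte Carlo, this guarantees coverage of each parameter's full range even with a modest number of model runs, which matters when each run is an expensive global simulation.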
Medvigy, David; Moorcroft, Paul R
2012-01-19
Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species-composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.
Large Scale Computations in Air Pollution Modelling
DEFF Research Database (Denmark)
Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.
One-scale supersymmetric inflationary models
International Nuclear Information System (INIS)
Bertolami, O.; Ross, G.G.
1986-01-01
The reheating phase is studied in a class of supergravity inflationary models involving a two-component hidden sector in which the scale of supersymmetry breaking and the scale generating inflation are related. It is shown that these models have an ''entropy crisis'' in which there is a large entropy release after nucleosynthesis, leading to unacceptably low nuclear abundances. (orig.)
M. T. Kiefer; S. Zhong; W. E. Heilman; J. J. Charney; X. Bian
2013-01-01
Efforts to develop a canopy flow modeling system based on the Advanced Regional Prediction System (ARPS) model are discussed. The standard version of ARPS is modified to account for the effect of drag forces on mean and turbulent flow through a vegetation canopy, via production and sink terms in the momentum and subgrid-scale turbulent kinetic energy (TKE) equations....
Subin, Z. M.; Sulman, B. N.; Malyshev, S.; Shevliakova, E.
2013-12-01
Soil moisture is a crucial control on surface energy fluxes, vegetation properties, and soil carbon cycling. Its interactions with ecosystem processes are highly nonlinear across a large range, as both drought stress and anoxia can impede vegetation and microbial growth. Earth System Models (ESMs) generally only represent an average soil-moisture state in grid cells at scales of 50-200 km, and as a result are not able to adequately represent the effects of subgrid heterogeneity in soil moisture, especially in regions with large wetland areas. We addressed this deficiency by developing the first ESM-coupled subgrid hillslope-hydrological model, TiHy (Tiled-hillslope Hydrology), embedded within the Geophysical Fluid Dynamics Laboratory (GFDL) land model. In each grid cell, one or more representative hillslope geometries are discretized into land model tiles along an upland-to-lowland gradient. These geometries represent ~1 km hillslope-scale hydrological features and allow for flexible representation of hillslope profile and plan shapes, in addition to variation of subsurface properties among or within hillslopes. Each tile (which may represent ~100 m along the hillslope) has its own surface fluxes, vegetation state, and vertically-resolved state variables for soil physics and biogeochemistry. Resolution of water state in deep layers (~200 m) down to bedrock allows for physical integration of groundwater transport with unsaturated overlying dynamics. Multiple tiles can also co-exist at the same vertical position along the hillslope, allowing the simulation of ecosystem heterogeneity due to disturbance. The hydrological model is coupled to the vertically-resolved Carbon, Organisms, Respiration, and Protection in the Soil Environment (CORPSE) model, which captures non-linearity resulting from interactions between vertically-heterogeneous soil carbon and water profiles. We present comparisons of simulated water table depth to observations. We examine sensitivities to
Thober, S.; Mizukami, N.; Samaniego, L. E.; Attinger, S.; Clark, M. P.; Cuntz, M.
2016-12-01
Land-surface models use a variety of process representations to calculate terrestrial energy, water and biogeochemical fluxes. These process descriptions are usually derived from point measurements but are scaled to much larger resolutions in applications that range from about 1 km in catchment hydrology to 100 km in climate modelling. Both hydrologic and climate models are nowadays run at different spatial resolutions, using the exact same land-surface representations. A fundamental criterion for the physical consistency of land-surface simulations across scales is that a flux estimated over a given area is independent of the spatial model resolution (i.e., the flux-matching criterion). The Noah-MP land-surface model considers only one soil and land cover type per model grid cell without any representation of subgrid variability, implying weak flux-matching. A fractional approach simulates subgrid variability, but it requires a higher computational demand than using effective parameters and is used only for land cover in current land-surface schemes. A promising approach to derive scale-independent parameters is the Multiscale Parameter Regionalization (MPR) technique, which consists of two steps: first, it applies transfer functions directly to high-resolution data (such as 100 m soil maps) to derive high-resolution model parameter fields, acknowledging the full subgrid variability. Second, it upscales these high-resolution parameter fields to the model resolution by using appropriate upscaling operators. MPR has been shown to substantially improve the scalability of hydrologic models. Here, we apply the MPR technique to the Noah-MP land-surface model for a large sample of basins distributed across the contiguous USA. Specifically, we evaluate the flux-matching criterion for several hydrologic fluxes such as evapotranspiration and total runoff at scales ranging from 3 km to 48 km. We also investigate a p-norm scaling operator that goes beyond the current
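The two MPR steps described above (transfer function at fine resolution, then upscaling to the model grid) can be sketched compactly. In this illustration the linear transfer function, its coefficients, and the block-averaging geometry are all assumptions; the p-norm operator with p = 1 reduces to the arithmetic mean:

```python
import numpy as np

def mpr_parameter(fine_field, coeffs, block, p=1.0):
    """Multiscale Parameter Regionalization sketch.

    Step 1: apply a (here linear) transfer function to the high-resolution
            predictor field, e.g. a 100 m soil property map.
    Step 2: upscale the resulting parameter field to the model grid with a
            p-norm operator over block x block cells (p=1: arithmetic mean).
    """
    a, b = coeffs
    param = a + b * fine_field                       # step 1: transfer function
    ny, nx = (s // block for s in param.shape)
    blocks = param[:ny * block, :nx * block].reshape(ny, block, nx, block)
    return np.mean(np.abs(blocks) ** p, axis=(1, 3)) ** (1.0 / p)  # step 2
```

Because the transfer function is applied before upscaling, the full subgrid variability of the predictor enters the coarse parameter, which is the point of MPR: changing `block` changes the resolution, not the underlying parameterization.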
Multi-scale modeling of composites
DEFF Research Database (Denmark)
Azizi, Reza
A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale … The Hill-Mandel energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain conditions. A phenomenological macroscopic model for metal-matrix composites is developed based on constitutive operators describing the elastic … to plastic deformation. The macroscopic operators found can be used to model metal-matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.
Nicholas, A. P.; Ashworth, P. J.; Best, J.; Lane, S. N.; Parsons, D. R.; Sambrook Smith, G.; Simpson, C.; Strick, R. J. P.; Unsworth, C. A.
2017-12-01
Recent years have seen significant advances in the development and application of morphodynamic models to simulate river evolution. Despite this progress, significant challenges remain to be overcome before such models can provide realistic simulations of river response to environmental change, or be used to determine the controls on alluvial channel patterns and deposits with confidence. This impasse reflects a wide range of factors, not least the fact that many of the processes that control river behaviour operate at spatial scales that cannot be resolved by such models. For example, sand-bed rivers are characterised by multiple scales of topography (e.g., dunes, bars, channels), the finest of which must often be parameterized, rather than represented explicitly, in morphodynamic models. We examine these issues using a combination of numerical modeling and field observations. High-resolution aerial imagery and Digital Elevation Models obtained for the sandy braided South Saskatchewan River in Canada are used to quantify dune, bar and channel morphology and their response to changing flow discharge. Numerical simulations are carried out using an existing morphodynamic model based on the 2D shallow water equations, coupled with new parameterisations of the evolution and influence of alluvial bedforms. We quantify the spatial patterns of sediment flux using repeat images of dune migration and bar evolution. These data are used to evaluate model predictions of sediment transport and morphological change, and to assess the degree to which model performance is controlled by the parametrization of roughness and sediment transport phenomena linked to subgrid-scale bedforms (dunes). The capacity of such models to replicate the characteristic multi-scale morphology of bars in sand-bed rivers, and the contrasting morphodynamic signatures of braiding during low and high flow conditions, is also assessed.
International Nuclear Information System (INIS)
Koohkan, Mohammad Reza
2012-01-01
Data assimilation in geophysical sciences aims at optimally estimating the state of the system or some parameters of the system's physical model. To do so, data assimilation needs three types of information: observations and background information, a physical/numerical model, and some statistical description that prescribes uncertainties to each component of the system. In my dissertation, new methodologies of data assimilation are used in atmospheric chemistry and physics: the joint use of a 4D-Var with a sub-grid statistical model to consistently account for representativeness errors, accounting for multiple scales in the BLUE estimation principle, and a better estimation of prior errors using objective estimation of hyper-parameters. These three approaches are specifically applied to inverse modelling problems focusing on the emission fields of tracers or pollutants. First, in order to estimate the emission inventories of carbon monoxide over France, in-situ stations which are impacted by representativeness errors are used. A sub-grid model is introduced and coupled with a 4D-Var to reduce the representativeness error. Indeed, the results of inverse modelling showed that the 4D-Var routine alone was not fit to handle the representativeness issues. The coupled data assimilation system led to a much better representation of the CO concentration variability, with a significant improvement of statistical indicators, and a more consistent estimation of the CO emission inventory. Second, the evaluation of the potential of the IMS (International Monitoring System) radionuclide network is performed for the inversion of an accidental source. In order to assess the performance of the global network, a multi-scale adaptive grid is optimised using a criterion based on degrees of freedom for the signal (DFS). The results show that several specific regions remain poorly observed by the IMS network. Finally, the inversion of the surface fluxes of Volatile Organic Compounds
On scaling of human body models
Directory of Open Access Journals (Sweden)
Hynčík L.
2007-10-01
The human body is not unique: individuals differ in anthropometry and mechanical characteristics, which means that dividing the human population into categories such as 5th-, 50th- and 95th-percentile is, from the application point of view, not enough. On the other hand, developing a dedicated human body model for every individual is not possible. That is why scaling and morphing algorithms have started to be developed. The current work describes the development of a tool for scaling human models. The idea is to have one (or a couple of) standard model(s) as a base and to create other models from these basic models. One has to choose adequate anthropometric and biomechanical parameters that describe the given group of humans among which the models are to be scaled and morphed.
Correction of Excessive Precipitation over Steep Mountains in a General Circulation Model (GCM)
Chao, Winston C.
2012-01-01
Excessive precipitation over steep and high mountains (EPSM) is a well-known problem in GCMs and regional climate models, even at a resolution as high as 19 km. The affected regions include the Andes, the Himalayas, the Sierra Madre, New Guinea and others. This problem also shows up in some data assimilation products. Among the possible causes investigated in this study, we found that the most important one, by far, is a missing upward transport of heat out of the boundary layer due to the vertical circulations forced by the daytime subgrid-scale upslope winds, which are in turn forced by the heated boundary layer on the slopes. These upslope winds are associated with large subgrid-scale topographic variance, which is found over steep mountains. Without such subgrid-scale heat ventilation, the resolvable-scale upslope flow in the boundary layer generated by surface sensible heat flux along the mountain slopes is excessive. Such an excessive resolvable-scale upslope flow in the boundary layer, combined with the high moisture content in the boundary layer, results in excessive moisture transport toward mountaintops, which in turn gives rise to excessive precipitation over the affected regions. We have parameterized the effects of subgrid-scale heated-slope-induced vertical circulation (SHVC) by removing heat from the boundary layer and depositing it in the layers higher up when the topographic variance exceeds a critical value. Test results using NASA/Goddard's GEOS-5 GCM have shown that the EPSM problem is largely solved.
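The SHVC adjustment described above amounts to a conditional, heat-conserving redistribution between model layers. The toy sketch below illustrates the logic only; the critical variance, the ventilated fraction, and the source/target layer indices are hypothetical placeholders, not values from GEOS-5:

```python
def shvc_adjust(theta, topo_var, k_bl=0, k_top=3, var_crit=1.0e5, frac=0.2):
    """Sketch of the SHVC parameterization.

    theta    : per-layer heat content, index 0 = boundary layer (illustrative)
    topo_var : subgrid topographic variance of the grid cell
    Where topo_var exceeds the critical value, a fraction of boundary-layer
    heat is moved to a layer higher up; total column heat is conserved.
    """
    theta = list(theta)                 # work on a copy
    if topo_var > var_crit:
        dq = frac * theta[k_bl]         # heat ventilated out of the boundary layer
        theta[k_bl] -= dq
        theta[k_top] += dq              # deposited higher up
    return theta
```

By cooling the boundary layer only where the topographic variance is large, the adjustment weakens the resolvable-scale upslope flow over steep mountains while leaving flat grid cells untouched.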
Multi-scale Modeling of Arctic Clouds
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
Site-Scale Saturated Zone Flow Model
International Nuclear Information System (INIS)
G. Zyvoloski
2003-01-01
The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure AP-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'', MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County well lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, a process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca Mountain using FEHM V 2.20 are being
Surface drag effects on simulated wind fields in high-resolution atmospheric forecast model
Energy Technology Data Exchange (ETDEWEB)
Lim, Kyo Sun; Lim, Jong Myoung; Ji, Young Yong [Environmental Radioactivity Assessment Team,Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shin, Hye Yum [NOAA/Geophysical Fluid Dynamics Laboratory, Princeton (United States); Hong, Jin Kyu [Yonsei University, Seoul (Korea, Republic of)
2017-04-15
It has been reported that the Weather Research and Forecasting (WRF) model generally shows a substantial overprediction bias at low to moderate wind speeds, with winds that are too geostrophic (Cheng and Steenburgh 2005), which limits the application of the WRF model in areas that require accurate surface wind estimation, such as wind-energy applications, air-quality studies, and radioactive-pollutant dispersion studies. In those studies, the surface drag generated by subgrid-scale orography is represented by introducing a sink term into the momentum equation. The purpose of our study is to evaluate the simulated meteorological fields in a high-resolution WRF framework that includes the parameterization of subgrid-scale orography developed by Mass and Ovens (2010), and to enhance the forecast skill for low-level wind fields, which play an important role in the transport and dispersion of air pollutants, including radioactive pollutants. The positive bias in 10-m wind speed is significantly alleviated by implementing the subgrid-scale orography parameterization, while other meteorological fields, including 10-m wind direction, are not changed. Increased variance of subgrid-scale orography enhances the sink of momentum and further reduces the bias in 10-m wind speed.
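The sink-term idea can be sketched as follows. The function name, the coefficient values, and the normalization by a reference height are illustrative assumptions for this sketch, not the Mass and Ovens (2010) formulation:

```python
import numpy as np

def orographic_drag_tendency(u, v, sso_var, cd=1e-4, h_ref=100.0):
    """Illustrative sink term for the horizontal momentum equations:
    a deceleration proportional to wind speed and to the subgrid-scale
    orography variance (normalized by a reference height squared).
    cd and h_ref are placeholder values, not tuned constants."""
    speed = np.sqrt(u**2 + v**2)
    k = cd * sso_var / h_ref**2          # dimensionless drag strength
    return -k * speed * u, -k * speed * v

# two grid cells with the same wind but different terrain roughness
u = np.array([10.0, 10.0])
v = np.array([0.0, 0.0])
du, dv = orographic_drag_tendency(u, v, sso_var=np.array([2500.0, 400.0]))
```

Rougher terrain (larger subgrid orographic variance) gives a stronger momentum sink, while the wind direction is untouched, consistent with the behavior reported in the abstract.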
Design of scaled down structural models
Simitses, George J.
1994-07-01
In the aircraft industry, full-scale and large-component testing is a very necessary, time-consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled-down models in testing, and use of the model test results to predict the behavior of the larger system, referred to herein as the prototype. This viewgraph presentation provides justification and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled-down model and its prototype. Thus, scaled-down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed, and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of the vibrational response of the same rectangular plates. Extensions and future tasks are also described.
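As a minimal illustration of extrapolating model data through a similarity condition, consider cylindrical bending of a beam-plate, where the deflection scales as w ~ qL^4/(Et^3). The function and scale factors below are a single hypothetical condition for this sketch, not Simitses' complete set:

```python
def prototype_deflection(w_model, lam_L, lam_t, lam_q=1.0, lam_E=1.0):
    """Extrapolate prototype deflection from model test data using the
    similarity relation w ~ q L^4 / (E t^3) for cylindrical bending.
    lam_X are prototype-to-model scale factors (length, thickness,
    load, modulus); an illustrative single condition only."""
    return w_model * lam_q * lam_L**4 / (lam_E * lam_t**3)

# complete similarity: same material and load, all lengths scaled by 3,
# so the deflection itself scales linearly with the length scale
w_p = prototype_deflection(w_model=2.0, lam_L=3.0, lam_t=3.0)  # -> 6.0
```

A distorted model (e.g., thickness scaled differently from length) simply uses unequal scale factors, which is where partial similarity enters.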
Comments on intermediate-scale models
Energy Technology Data Exchange (ETDEWEB)
Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.
1987-04-23
Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization-group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory.
Managing large-scale models: DBS
International Nuclear Information System (INIS)
1981-05-01
A set of fundamental management tools for developing and operating a large-scale model and database system is presented. Experience in operating and developing a large-scale computerized system indicates that the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large-scale models and databases
Scaled Experimental Modeling of VHTR Plenum Flows
Energy Technology Data Exchange (ETDEWEB)
ICONE 15
2007-04-01
The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Plant (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. Various scaled heated-gas and water flow facilities were investigated for modeling VHTR upper and lower plenum flows during the decay-heat portion of a pressurized conduction-cooldown scenario and for modeling thermal mixing and stratification (“thermal striping”) in the lower plenum during normal operation. It was concluded, based on phenomena scaling, instrumentation, and other practical considerations, that a heated water flow scale-model facility is preferable to a heated gas flow facility and to unheated facilities that use fluids with ranges of density to simulate the density effect of heating. For a heated water flow lower plenum model, both the Richardson and Reynolds numbers may be approximately matched for conduction-cooldown natural-circulation conditions. Thermal mixing during normal operation may be simulated, but at lower, though still fully turbulent, Reynolds numbers than in the prototype. Natural-circulation flows in the upper plenum may also be simulated in a separate heated water flow facility that uses the same plumbing as the lower plenum model. However, Reynolds number scaling distortions will occur at matched Richardson numbers, due primarily to the necessity of using fewer channels connected to the plenum than in the prototype (which has approximately 11,000 core channels connected to the upper plenum) in an otherwise geometrically scaled model. Experiments conducted in either or both facilities will meet the objectives of providing benchmark data for the validation of codes proposed for NGNP designs and safety studies, as well as providing a better understanding of the complex flow phenomena in the plenums.
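The Richardson-matching argument can be sketched numerically. All fluid properties and dimensions below are hypothetical round numbers for illustration, not the actual VHTR or facility values:

```python
def richardson(g, beta, dT, L, U):
    """Bulk Richardson number g*beta*dT*L/U^2 for buoyant plenum flow."""
    return g * beta * dT * L / U**2

def reynolds(U, L, nu):
    """Reynolds number U*L/nu."""
    return U * L / nu

g = 9.81
# hypothetical hot-gas prototype conditions
Ri_p = richardson(g, beta=1.0e-3, dT=200.0, L=2.0, U=1.0)
Re_p = reynolds(U=1.0, L=2.0, nu=1.0e-4)
# 1/4-scale heated water model: choose the velocity so Ri matches
beta_w, dT_m, L_m, nu_w = 2.1e-4, 40.0, 0.5, 1.0e-6
U_m = (g * beta_w * dT_m * L_m / Ri_p) ** 0.5
Ri_m = richardson(g, beta_w, dT_m, L_m, U_m)
Re_m = reynolds(U_m, L_m, nu_w)   # reduced in scale yet fully turbulent
```

Matching Ri fixes the model velocity; the model Reynolds number then falls out of the fluid properties and geometry, which is exactly where the scaling distortions discussed above arise.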
Biointerface dynamics--Multi scale modeling considerations.
Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko
2015-08-01
The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e., a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in the generation of resistance stress, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, over a range of time and space scales, that have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing the generation of polymer matrix resistance stress within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.
Complex scaling in the cluster model
International Nuclear Information System (INIS)
Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.
1987-01-01
To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation, the complex scaling requires only minor changes in the formulae and code. The finding of the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in 8Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs
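The essence of complex scaling, r → r·e^{iθ}, can be demonstrated on a simple one-dimensional stand-in for the intercluster relative motion. The potential V(r) = 7.5 r² e^{-r} (with ħ = m = 1) is a common textbook choice with a barrier that supports a resonance; it is not the resonating-group kernel itself, and the grid parameters are arbitrary:

```python
import numpy as np

theta = 0.3                       # rotation angle, r -> r * exp(i*theta)
N, L = 400, 20.0                  # finite-difference grid
dr = L / (N + 1)
r = np.arange(1, N + 1) * dr
lap = (np.diag(np.full(N, -2.0)) +
       np.diag(np.ones(N - 1), 1) +
       np.diag(np.ones(N - 1), -1)) / dr**2
rs = r * np.exp(1j * theta)       # complex-scaled coordinate
# H(theta) = -0.5 e^{-2i theta} d^2/dr^2 + V(r e^{i theta})
H = -0.5 * np.exp(-2j * theta) * lap + np.diag(7.5 * rs**2 * np.exp(-rs))
E = np.linalg.eigvals(H)
# the continuum rotates into the lower half-plane; the resonance stays
# put as an isolated complex eigenvalue E_r - i*Gamma/2 (here near 3.43)
res = min(E, key=lambda e: abs(e - 3.43))
```

No asymptotic boundary condition is imposed, which mirrors the abstract's point that no preliminary guess or asymptotic prescription is needed.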
Geometrical scaling vs factorizable eikonal models
Kiang, D
1975-01-01
Among the various theoretical explanations or interpretations of the experimental data on the differential cross-sections of elastic proton-proton scattering at the CERN ISR, the following two seem most remarkable: (A) the excellent agreement of the Chou-Yang model prediction of dσ/dt with data at √s = 53 GeV; (B) the general manifestation of geometrical scaling (GS). The paper confronts GS with eikonal models with factorizable opaqueness, with special emphasis on the Chou-Yang model. (12 refs).
Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs.
Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Cheng, A.
2017-12-01
A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity, and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, comparatively few new prognostic variables need to be introduced, making the technique computationally efficient. In the base version of SHOC this is the SGS turbulent kinetic energy (TKE); in the developmental version, the SGS TKE plus the variances of total water and moist static energy (MSE). SHOC is now incorporated into a version of GFS that will become part of the NOAA Next Generation Global Prediction System, based around NOAA GFDL's FV3 dynamical core, the NOAA Environmental Modeling System (NEMS) coupled modeling infrastructure software, and a set of novel physical parameterizations. Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these quantities. The radiative transfer parameterization uses the cloudiness computed by SHOC. An outstanding problem with the implementation of SHOC in the NCEP global models is excessively large high-level tropical cloudiness. Comparison of the moments of the SGS PDF diagnosed by SHOC with the moments calculated in a GigaLES simulation of a tropical deep convection case (GATE) shows that SHOC diagnoses too-narrow PDF distributions of total cloud water and MSE in areas of deep convective detrainment. A subsequent sensitivity study of SHOC's diagnosed cloud fraction (CF) to higher-order input moments of the SGS PDF
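The core idea of diagnosing cloudiness from an assumed PDF can be illustrated with the simplest possible case, a single Gaussian in total water: the cloud fraction is the probability mass above saturation. This is a one-variable sketch of the concept, not SHOC's actual joint multivariate PDF:

```python
import math

def gaussian_cloud_fraction(qt_mean, qsat, sigma):
    """Cloud fraction CF = P(qt > qsat) for a Gaussian PDF of total
    water with mean qt_mean and standard deviation sigma (all in
    kg/kg). A single-Gaussian sketch of PDF-based cloud diagnosis."""
    q1 = (qt_mean - qsat) / sigma
    return 0.5 * (1.0 + math.erf(q1 / math.sqrt(2.0)))

# a grid box exactly at saturation on average: half the PDF is cloudy
cf = gaussian_cloud_fraction(qt_mean=8.0e-3, qsat=8.0e-3, sigma=5.0e-4)  # -> 0.5
```

The sensitivity discussed in the abstract is visible here: for a subsaturated mean, narrowing the PDF (smaller sigma) drives the diagnosed cloud fraction toward zero, so biased input moments translate directly into biased cloudiness.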
Probabilistic, meso-scale flood loss modelling
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2016-04-01
Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, all the more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches such as stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken, on the one hand, via comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models; on the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A.; Kreibich, H.; Merz, B.; Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
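How bagging turns a deterministic tree into a loss *distribution* can be sketched with synthetic data. The stage-damage relationship, the noise level, and the depth-1 trees below are illustrative stand-ins for the (multi-variate, deeper-tree) BT-FLEMO setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, y):
    """Depth-1 regression tree: the single split on water depth that
    minimizes the summed squared error of the two leaf means."""
    best_sse, best_model = np.inf, None
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best_sse:
            best_sse, best_model = sse, (t, left.mean(), right.mean())
    return best_model

def predict_stump(model, x):
    t, lo_mean, hi_mean = model
    return np.where(x <= t, lo_mean, hi_mean)

# hypothetical stage-damage data: loss grows with water depth plus noise
x = rng.uniform(0.0, 3.0, 200)              # water depth [m]
y = 20.0 * x + rng.normal(0.0, 5.0, 200)    # loss [% of building value]

# bagging: bootstrap resamples -> tree ensemble -> predictive distribution
ensemble = [fit_stump(x[idx], y[idx])
            for idx in (rng.integers(0, len(x), len(x)) for _ in range(50))]
preds = np.array([predict_stump(m, np.array([2.5]))[0] for m in ensemble])
lo, med, hi = np.percentile(preds, [10, 50, 90])  # uncertainty band
```

The spread of the per-tree predictions is what supplies the quantitative uncertainty information highlighted in the abstract, here summarized as a 10-90% band around the median loss estimate.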
Energy Technology Data Exchange (ETDEWEB)
C.R. Bryan
2005-02-17
The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, at variance with the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral
Scale Model Thruster Acoustic Measurement Results
Vargas, Magda; Kenny, R. Jeremy
2013-01-01
The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of Rocket Assisted Take-Off (RATO) motors representing the solid boosters and four gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the four thrusters, was tested in Phase 2 to integrate the system and determine the combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster system tests are discussed and compared.
1/3-scale model testing program
International Nuclear Information System (INIS)
Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.
1989-01-01
This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley demonstration project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body with scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration, where the axis of the cask was oriented at a 10-degree angle with the horizontal. Slap-down occurs for shallow-angle drops, where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured decelerations, post-test deformation measurements, and the general structural response of the system
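Converting the measured model quantities back to the prototype uses the standard replica-scaling relations for a geometrically scaled model of the same materials dropped at the same impact velocity. The sample numbers below are hypothetical, not the test data:

```python
def prototype_response(accel_m, time_m, force_m, s=3.0):
    """Replica-scaling relations for a 1/s-scale model built of the
    same materials and dropped at the same impact velocity:
    accelerations scale as 1/s, times as s, and forces as s**2
    going from model to prototype (strains are equal)."""
    return accel_m / s, time_m * s, force_m * s**2

# e.g. a hypothetical 300 g peak deceleration over 10 ms on the model
a_p, t_p, f_p = prototype_response(accel_m=300.0, time_m=0.010, force_m=1.0e5)
```

So a 1/3-scale model experiences three times the prototype deceleration over one third of the impact duration, which is why model decelerations must be scaled down, not read off directly.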
Genome scale metabolic modeling of cancer
DEFF Research Database (Denmark)
Nilsson, Avlant; Nielsen, Jens
2017-01-01
Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth, and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome scale metabolic model (GEM) is a mathematical formalization of metabolism which allows simulation and hypothesis testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome...
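The standard way to simulate a GEM is flux balance analysis: maximize a biomass objective subject to steady-state mass balance S·v = 0 and flux bounds, which is a linear program. The three-reaction toy network below is a deliberately minimal illustration, not a reconstruction of human metabolism:

```python
import numpy as np
from scipy.optimize import linprog

# Toy GEM with one internal metabolite A and three fluxes:
#   v1: uptake -> A,  v2: A -> biomass,  v3: A -> byproduct
S = np.array([[1.0, -1.0, -1.0]])   # stoichiometric matrix (1 x 3)
c = [0.0, -1.0, 0.0]                # linprog minimizes, so -v2 maximizes biomass
res = linprog(c, A_eq=S, b_eq=[0.0],
              bounds=[(0.0, 10.0),      # uptake capped at 10 units
                      (0.0, None),      # biomass flux unbounded above
                      (0.0, None)])     # byproduct flux unbounded above
growth = -res.fun                   # optimal biomass flux
```

With everything funneled through a single metabolite, the optimum simply saturates the uptake bound (growth = 10); in a genome-scale model the same machinery, with thousands of reactions, predicts flux distributions and lethal gene deletions.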
Large-scale multimedia modeling applications
International Nuclear Information System (INIS)
Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.
1995-08-01
Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications
Draper, Martin; Usera, Gabriel
2015-04-01
The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the potential law, has also been demonstrated in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for this case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7] slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of
Aerosol numerical modelling at local scale
International Nuclear Information System (INIS)
Albriet, Bastien
2007-01-01
At the local scale and in urban areas, an important part of particulate pollution is due to traffic, which contributes largely to the high number concentrations observed. Two aerosol sources are mainly linked to traffic: primary emission of soot particles and secondary nanoparticle formation by nucleation. The emissions and mechanisms leading to the formation of such a bimodal distribution are still poorly understood. In this thesis, we address this problem by numerical modelling. The Modal Aerosol Model MAM is used, coupled with two 3-D codes: a CFD code (Mercure Saturne) and a CTM (Polair3D). A sensitivity analysis is performed, at the roadside but also within the first meters of an exhaust plume, to identify the role of each process involved and the sensitivity to the different parameters used in the modelling. (author) [fr]
Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.
2015-12-01
Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are
Multi-scale Modelling of Segmentation
DEFF Research Database (Denmark)
Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri
2016-01-01
While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects... In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary...
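The kernel-density step can be sketched directly: pool the listeners' boundary indications and estimate a density over time, with the kernel bandwidth acting as the segmentation scale. The indication times below are simulated for illustration, not the experiment's data:

```python
import numpy as np

def boundary_profile(times, grid, bw):
    """Gaussian kernel density over listeners' boundary indications;
    the bandwidth bw (in seconds) sets the time scale of the model."""
    d = grid[:, None] - np.asarray(times)[None, :]
    k = np.exp(-0.5 * (d / bw)**2)
    return k.sum(axis=1) / (bw * np.sqrt(2.0 * np.pi) * len(times))

# simulated indications clustering near 30 s and 61 s (hypothetical)
times = [29.2, 29.8, 30.1, 30.5, 60.4, 61.0, 61.3]
grid = np.linspace(0.0, 90.0, 901)
fine = boundary_profile(times, grid, bw=0.5)    # fine-scale model
coarse = boundary_profile(times, grid, bw=4.0)  # coarse-scale model
```

Peaks of the density mark consensus boundaries, and sweeping the bandwidth from fine to coarse yields the multi-scale family of segmentations: small bandwidths resolve individual boundaries sharply, large ones merge them into broader sections.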
Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.
2013-12-01
We present an approach to derive the large-scale forcing that is used to drive single-column models (SCMs) and cloud-resolving models (CRMs)/large-eddy simulations (LES) for evaluating fast-physics parameterizations in climate models. The forcing fields are derived by use of a newly developed multi-scale data assimilation (MS-DA) system. This DA system is developed on top of the NCEP Gridpoint Statistical Interpolation (GSI) system and is implemented in the Weather Research and Forecasting (WRF) model at a cloud-resolving resolution of 2 km. This approach has been applied to the generation of large-scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracy comparable to the existing continuous forcing product and, overall, better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multi-scale forcing, which are not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing has an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependency on the domain size, which represents the SCM grid size. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid size and cloud regime.
A satellite simulator for TRMM PR applied to climate model simulations
Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.
2017-12-01
Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR to evaluate simulations with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) performed within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison is carried out against the sub-grid scale convective precipitation variability deduced from TRMM PR observations.
Molecular scale modeling of polymer imprint nanolithography.
Chandross, Michael; Grest, Gary S
2012-01-10
We present the results of large-scale molecular dynamics simulations of two different nanolithographic processes: step-and-flash imprint lithography (SFIL) and hot embossing. We insert rigid stamps into an entangled bead-spring polymer melt above the glass transition temperature. After equilibration, the polymer is hardened in one of two ways, depending on the specific process to be modeled. For SFIL, we cross-link the polymer chains by introducing bonds between neighboring beads. To model hot embossing, we instead cool the melt to below the glass transition temperature. We then study the ability of these methods to retain features by removing the stamps, both with a zero-stress removal process in which stamp atoms are instantaneously deleted from the system and with a more physical process in which the stamp is pulled from the hardened polymer at fixed velocity. We find that it is necessary to coat the stamp with an antifriction coating to achieve clean removal of the stamp. We further find that a high density of cross-links is necessary for good feature retention in the SFIL process. The hot embossing process results in good feature retention at all length scales studied, as long as coated, low-surface-energy stamps are used.
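The cross-linking step described for SFIL, introducing bonds between neighbouring beads, can be sketched as a distance-cutoff pair search. This is a toy O(N^2) version with invented coordinates and cutoff, not the authors' simulation; production MD codes (e.g. LAMMPS) use neighbour lists and limit bonds per bead:

```python
import numpy as np

rng = np.random.default_rng(0)

def crosslink(positions, cutoff):
    """Toy SFIL-style cross-linking: create a bond between every pair of beads
    closer than `cutoff`. Brute-force O(N^2) pair search for illustration only."""
    bonds = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < cutoff:
                bonds.append((i, j))
    return bonds

beads = rng.random((200, 3))             # made-up melt configuration in a unit box
bonds = crosslink(beads, 0.12)           # cross-link density grows with the cutoff
```

A larger cutoff produces a denser cross-link network, the knob the feature-retention result above turns on.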
Log-Normal Turbulence Dissipation in Global Ocean Models
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
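The practical content of log-normality, that the log of dissipation is Gaussian and a small fraction of locations dominates the budget, is easy to check on synthetic data (the distribution parameters here are assumed for illustration, not taken from the ocean model):

```python
import numpy as np

rng = np.random.default_rng(0)

# If dissipation eps is log-normal, log(eps) is Gaussian, so its sample
# skewness and excess kurtosis should vanish. Parameters are illustrative.
eps = rng.lognormal(mean=-20.0, sigma=2.0, size=200_000)

x = np.log(eps)
skew = np.mean((x - x.mean()) ** 3) / x.std() ** 3
exkurt = np.mean((x - x.mean()) ** 4) / x.std() ** 4 - 3.0

# Heavy tail: the top 5% of locations carry most of the total dissipation,
# the property that matters for integrated budgets and sparse sampling.
top = np.sort(eps)[-len(eps) // 20:]
frac = top.sum() / eps.sum()
```

With sigma = 2 the top 5% of samples typically account for well over half of the total, which is why sparse observations can badly under- or over-estimate integrated dissipation.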
Air quality impact of two power plants using a sub-grid
International Nuclear Information System (INIS)
Drevet, Jerome; Musson-Genon, Luc
2012-01-01
Modeling point-source emissions of air pollutants with regional Eulerian models is likely to lead to errors because a 3D Eulerian model is not able to correctly reproduce the evolution of a plume near its source. To overcome these difficulties, we applied a Gaussian puff model, embedded within a 3D Eulerian model, for an impact assessment of the EDF fossil-fuel-fired power plants of Porcheville and Vitry, Ile-de-France. We simulated an entire year of atmospheric processes for an area covering the Paris region with the Polyphemus platform, with which we conducted various scenarios with or without a Gaussian puff model, referred to as plume-in-grid, to independently handle the major point-source emissions in Ile-de-France. Our study focuses on four chemical compounds (NO, NO2, SO2 and O3). The use of a Gaussian model is important particularly for primary compounds with low reactivity such as SO2, especially as industrial stacks are the major source of its emissions. SO2 concentrations simulated using plume-in-grid are closer to the concentrations measured by the stations of the air quality agencies (Associations Agreees de Surveillance de la Qualite de l'Air, AASQA), although they remain largely overestimated. The use of a Gaussian model increases the concentrations near the source and lowers background levels of the various chemical species (except O3). The simulated concentrations may vary by over 30% for primary compounds such as SO2 and NO, and by around 2% for secondary compounds such as NO2 and O3, depending on whether the Gaussian model is used. Regarding the impact of the fossil-fuel-fired power plants, simulated annual-average SO2 concentrations are increased by about 1 μg/m3 close to the Porcheville stack and lowered by about 0.5 μg/m3 far from the sources, highlighting the less diffusive character of the Gaussian model by comparison with the Eulerian model. The integration of a sub-grid Gaussian model offers the advantage of
International Nuclear Information System (INIS)
Liu Lianshou; Zhang Yang; Wu Yuanfang
1996-01-01
The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently in use have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which also gives good scaling properties for continuously varying scales. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)
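The scaled factorial moments whose scaling is at issue can be computed directly from binned event counts; for uncorrelated (flat) events they stay near one at every scale, so any systematic rise with diminishing cell size signals intermittency. A sketch on synthetic flat data, not the paper's cascading model:

```python
import numpy as np

rng = np.random.default_rng(0)

def factorial_moment(counts, q):
    """Horizontally averaged scaled factorial moment F_q of bin counts:
    <n(n-1)...(n-q+1)> / <n>^q, averaged over bins."""
    n = counts.astype(float)
    num = np.ones_like(n)
    for k in range(q):
        num *= n - k
    return num.mean() / n.mean() ** q

events = rng.random(100_000)             # flat phase-space distribution
for m in (10, 100, 1000):                # continuously diminishing cell size
    counts, _ = np.histogram(events, bins=m, range=(0.0, 1.0))
    f2 = factorial_moment(counts, 2)     # stays ~1: no anomalous scaling
```

A cascading model would replace the flat `events` by a self-similar multiplicative sample, and F_q would then grow as a power of the bin count.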
A high resolution global scale groundwater model
de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc
2014-05-01
As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet, the current generation of global scale hydrological models (GHMs) does not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. Aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output from the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model, so that it can equally be expanded to data-poor environments. Based on our sensitivity analysis, in which we ran the model with various hydrogeological parameter settings, we observed that most variance in groundwater
Statistics of the Navier–Stokes-alpha-beta regularization model for fluid turbulence
International Nuclear Information System (INIS)
Hinz, Denis F; Kim, Tae-Yeon; Fried, Eliot
2014-01-01
We explore one-point and two-point statistics of the Navier–Stokes-αβ regularization model at moderate Reynolds number (Re ≈ 200) in homogeneous isotropic turbulence. The results are compared to the limit cases of the Navier–Stokes-α model and the Navier–Stokes-αβ model without subgrid-scale stress, as well as with high-resolution direct numerical simulation. After reviewing spectra of different energy norms of the Navier–Stokes-αβ model, the Navier–Stokes-α model, and the Navier–Stokes-αβ model without subgrid-scale stress, we present probability density functions and normalized probability density functions of the filtered and unfiltered velocity increments, along with longitudinal velocity structure functions of the regularization models and direct numerical simulation results. We highlight differences in the statistical properties of the unfiltered and filtered velocity fields entering the governing equations of the Navier–Stokes-α and Navier–Stokes-αβ models and discuss the usability of both velocity fields for realistic flow predictions. The influence of the modified viscous term in the Navier–Stokes-αβ model is studied through comparison to the case where the underlying subgrid-scale stress tensor is neglected. While the filtered velocity field is found to have physically more viable probability density functions and structure functions for approximating direct numerical simulation results, the unfiltered velocity field is found to have flatness factors close to direct numerical simulation results. (paper)
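One of the statistics compared above, the flatness factor of velocity increments, takes only a few lines to compute. A Gaussian surrogate signal gives flatness near 3 at every separation; departures from 3 at small separation are the intermittency signature the filtered and unfiltered fields are judged on (synthetic 1D signal here, not the simulation data):

```python
import numpy as np

rng = np.random.default_rng(0)

def flatness(u, r):
    """Flatness factor <du^4>/<du^2>^2 of increments du = u(x+r) - u(x)."""
    du = u[r:] - u[:-r]
    return np.mean(du ** 4) / np.mean(du ** 2) ** 2

u = rng.standard_normal(500_000)   # Gaussian surrogate 'velocity' signal
f_small = flatness(u, 4)           # ~3 for Gaussian statistics at any lag
f_large = flatness(u, 64)
```

In a turbulent field the same function applied to small separations would return values well above 3, growing as the separation shrinks.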
Integrated multi-scale modelling and simulation of nuclear fuels
International Nuclear Information System (INIS)
Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.
2015-01-01
This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required by the nature of the materials and the phenomena involved under irradiation. We will then present the multiple facets of the multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)
Scaling and constitutive relationships in downcomer modeling
International Nuclear Information System (INIS)
Daly, B.J.; Harlow, F.H.
1978-12-01
Constitutive relationships to describe mass and momentum exchange in multiphase flow in a pressurized water reactor downcomer are presented. Momentum exchange between the phases is described by the product of the flux of momentum available for exchange and the effective area for interaction. The exchange of mass through condensation is assumed to occur along a distinct condensation boundary separating steam at saturation temperature from water in which the temperature falls off roughly linearly with distance from the boundary. Because of the abundance of nucleation sites in a typical churning flow in a downcomer, we propose an equilibrium evaporation process that produces sufficient steam per unit time to keep the water perpetually cooled to the saturation temperature. The transport equations, constitutive models, and boundary conditions used in the K-TIF numerical method are nondimensionalized to obtain scaling relationships for two-phase flow in the downcomer. The results indicate that, subject to idealized thermodynamic and hydraulic constraints, exact mathematical scaling can be achieved. Experiments are proposed to isolate the effects of parameters that contribute to mass, momentum, and energy exchange between the phases
Cavitation erosion - scale effect and model investigations
Geiger, F.; Rutschmann, P.
2015-12-01
The experimental work presented here contributes to the clarification of the erosive effects of hydrodynamic cavitation. Comprehensive cavitation erosion test series were conducted for transient cloud cavitation in the shear layer of prismatic bodies. The erosion patterns and erosion rates were determined comparatively with a mineral-based volume loss technique and with a metal-based pit count system. The results clarified the underlying scale effects and revealed a strong non-linear material dependency, which indicates significantly different damage processes for the two material types. Furthermore, the size and dynamics of the cavitation clouds were assessed by optical detection. The fluctuations of the cloud sizes showed a maximum for the cavitation numbers associated with maximum erosive aggressiveness. This finding suggests the suitability of a model approach that relates the erosion process to cavitation cloud dynamics. An enhanced experimental setup is planned to further clarify these issues.
Anomalous scaling of structure functions and dynamic constraints on turbulence simulations
International Nuclear Information System (INIS)
Yakhot, Victor; Sreenivasan, Katepalli R.
2006-12-01
The connection between anomalous scaling of structure functions (intermittency) and numerical methods for turbulence simulations is discussed. It is argued that the computational work for direct numerical simulations (DNS) of fully developed turbulence increases as Re^4, and not as the Re^3 expected from Kolmogorov's theory, where Re is a large-scale Reynolds number. Various relations for the moments of acceleration and velocity derivatives are derived. An infinite set of exact constraints on dynamically consistent subgrid models for Large Eddy Simulations (LES) is derived from the Navier-Stokes equations, and some problems of principle associated with existing LES models are highlighted. (author)
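The practical weight of the Re^4 estimate versus the classical Re^3 one is easy to put in numbers (exponents only; the prefactors are not given here):

```python
# Classical Kolmogorov bookkeeping: eta/L ~ Re^(-3/4), so a DNS needs
# ~Re^(9/4) grid points and ~Re^(3/4) time steps, i.e. total work ~ Re^3.
# The anomalous-scaling argument of the paper raises the exponent to 4.
def dns_work(re, exponent):
    return re ** exponent

# Cost growth per decade of Reynolds number under each estimate:
growth_k41 = dns_work(1.0e5, 3) / dns_work(1.0e4, 3)   # factor 10^3
growth_anom = dns_work(1.0e5, 4) / dns_work(1.0e4, 4)  # factor 10^4
```

Each decade of Re thus costs an extra factor of ten under the anomalous estimate relative to the Kolmogorov one, which is the point of the argument for LES and its subgrid constraints.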
Comparison Between Overtopping Discharge in Small and Large Scale Models
DEFF Research Database (Denmark)
Helgason, Einar; Burcharth, Hans F.
2006-01-01
The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from...... small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sank into the voids between the stones on the crest. For low overtopping scale effects......
Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications
DEFF Research Database (Denmark)
Liu, Yubao; Warner, Tom; Liu, Yuewei
2011-01-01
This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center f...
Models of Small-Scale Patchiness
McGillicuddy, D. J.
2001-01-01
Patchiness is perhaps the most salient characteristic of plankton populations in the ocean. The scale of this heterogeneity spans many orders of magnitude in its spatial extent, ranging from planetary down to microscale. It has been argued that patchiness plays a fundamental role in the functioning of marine ecosystems, insofar as the mean conditions may not reflect the environment to which organisms are adapted. Understanding the nature of this patchiness is thus one of the major challenges of oceanographic ecology. The patchiness problem is fundamentally one of physical-biological-chemical interactions. This interconnection arises from three basic sources: (1) ocean currents continually redistribute dissolved and suspended constituents by advection; (2) space-time fluctuations in the flows themselves impact biological and chemical processes, and (3) organisms are capable of directed motion through the water. This tripartite linkage poses a difficult challenge to understanding oceanic ecosystems: differentiation between the three sources of variability requires accurate assessment of property distributions in space and time, in addition to detailed knowledge of organismal repertoires and the processes by which ambient conditions control the rates of biological and chemical reactions. Various methods of observing the ocean tend to lie parallel to the axes of the space/time domain in which these physical-biological-chemical interactions take place. Given that a purely observational approach to the patchiness problem is not tractable with finite resources, the coupling of models with observations offers an alternative which provides a context for synthesis of sparse data with articulations of fundamental principles assumed to govern functionality of the system. In a sense, models can be used to fill the gaps in the space/time domain, yielding a framework for exploring the controls on spatially and temporally intermittent processes. The following discussion highlights
Large scale injection test (LASGIT) modelling
International Nuclear Information System (INIS)
Arnedo, D.; Olivella, S.; Alonso, E.E.
2010-01-01
Document available in extended abstract form only. With the objective of understanding the gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. The modelling of the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug
Effect of LES models on the entrainment characteristics in a turbulent planar jet
Chambel Lopes, Diogo; da Silva, Carlos; Raman, Venkat
2012-11-01
The effects of subgrid-scale (SGS) models on the jet spreading rate and the centreline passive scalar decay rate are assessed and compared. The modelling of the subgrid-scale fluxes is particularly challenging in the turbulent/nonturbulent (T/NT) region that divides the two regions of the jet flow: the outer region, where the flow is irrotational, and the inner region, where the flow is turbulent. It has been shown that important Reynolds stresses exist near the T/NT interface and that these stresses determine in part the mixing and combustion rates in jets. In this work direct and large-eddy simulations (DNS/LES) of turbulent planar jets are used to study the role of subgrid-scale models in the integral characteristics of passive scalar mixing in a jet. The LES show that different SGS models lead to different spreading rates for the velocity and scalar fields, and the scalar quantities are more affected than the velocity, e.g. SGS models affect the centreline mean scalar decay more strongly than the centreline mean velocity decay. The results suggest the need for a minimum resolution close to the Taylor micro-scale in order to recover the correct integral quantities, which can be explained by recent results on the dynamics of the T/NT interface.
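The SGS closures being compared here are typically of Smagorinsky type, with eddy viscosity nu_t = (Cs*Delta)^2 |S|. A minimal 2D sketch (Cs = 0.17 is a common static default, which a dynamic procedure would instead compute locally; not the code used in the study):

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*delta)^2 * |S| on a 2D slice,
    with |S| = sqrt(2 S_ij S_ij) and S_ij the resolved strain-rate tensor."""
    sxy = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (dudx ** 2 + dvdy ** 2 + 2.0 * sxy ** 2))
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dy = 2 s^-1 with filter width 1 m gives |S| = 2 s^-1.
nu_t = smagorinsky_nu_t(0.0, 2.0, 0.0, 0.0, 1.0)
```

Near the T/NT interface |S| is strongly inhomogeneous, which is exactly where a fixed Cs behaves differently from model to model and the scalar field pays the price.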
Modeling of micro-scale thermoacoustics
Energy Technology Data Exchange (ETDEWEB)
Offner, Avshalom [The Nancy and Stephen Grand Technion Energy Program, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Ramon, Guy Z., E-mail: ramong@technion.ac.il [Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel)
2016-05-02
Thermoacoustic phenomena, that is, the onset of self-sustained oscillations or time-averaged fluxes in a sound wave, may be harnessed in efficient and robust heat transfer devices. Specifically, miniaturization of such devices holds great promise for cooling of electronics. At the required small dimensions, non-negligible slip effects are expected at the solid surface of the “stack”, a porous matrix used to maintain the correct temporal phasing of the heat transfer between the solid and the oscillating gas. Here, we develop theoretical models for thermoacoustic engines and heat pumps that account for slip, within the standing-wave approximation. Stability curves for engines with both no-slip and slip boundary conditions were calculated; the slip boundary condition curve exhibits a lower temperature difference compared with the no-slip curve at the resonance frequencies that characterize micro-scale devices. Maximum achievable temperature differences across the stack of a heat pump were also calculated. For this case, slip conditions are detrimental and such a heat pump would maintain a lower temperature difference compared to larger devices, where slip effects are negligible.
Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.
2017-08-01
The EC-Earth earth system model has recently been developed to include the dynamics of vegetation. In its original formulation, vegetation variability is represented solely by the Leaf Area Index (LAI), which affects climate mainly by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence, and therefore affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to the decadal (5-year), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecast skill are enhanced over
Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.
2017-12-01
Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence, and therefore affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th century) simulations and retrospective predictions to the decadal (5-year), seasonal (2-4 month) and weather (4 day) time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover consistently corrects the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecast skill are enhanced over the Sahel, the North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.
SDG and qualitative trend based model multiple scale validation
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods are weak in completeness, operate at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is proposed. First, the SDG model is built and qualitative trends are added to it; complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.
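The positive-inference step, propagating a qualitative deviation through the signed arcs to generate testing scenarios, can be sketched as a graph traversal. The toy graph and node names below are invented for illustration, not the reactor model of the paper:

```python
def propagate(arcs, source, trend):
    """Propagate a qualitative deviation (+1 or -1) from `source` along the
    signed arcs of an SDG; each arc multiplies the deviation by its sign.
    First-arrival semantics: a node keeps the first value it receives."""
    state = {source: trend}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for succ, sign in arcs.get(node, []):
                if succ not in state:
                    state[succ] = sign * state[node]
                    nxt.append(succ)
        frontier = nxt
    return state

# Toy graph: higher feed flow raises level and outflow, lowers a safety margin.
arcs = {"feed": [("level", +1)], "level": [("outflow", +1), ("margin", -1)]}
scenario = propagate(arcs, "feed", +1)
```

Each such propagated state is one qualitative testing scenario; validation then checks whether the simulation model's output trends match it at each scale.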
Autonomous Operation of Hybrid Microgrid with AC and DC Sub-Grids
DEFF Research Database (Denmark)
Loh, Poh Chiang; Blaabjerg, Frede
2011-01-01
This paper investigates the active and reactive power sharing of an autonomous hybrid microgrid. Unlike existing microgrids, which are purely ac, the hybrid microgrid studied here comprises dc and ac sub-grids interconnected by power electronic interfaces. The main challenge is to manage the power flow among all the sources distributed throughout the two types of sub-grids, which is certainly tougher than previous efforts developed for only either an ac or a dc microgrid. This wider scope of control has not yet been investigated, and relies on the coordinated operation of dc sources, ac sources and interlinking converters. Suitable control and normalization schemes are therefore developed for controlling them, with results presented to show the overall performance of the hybrid microgrid.
Downscaling modelling system for multi-scale air quality forecasting
Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.
2010-09-01
Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, while urban heterogeneities outside the modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales, with nesting of higher-resolution models into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. The first is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building-effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. the k-ε linear eddy-viscosity model, the k-ε non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding mass-conserving interpolation. For the boundaries a
International Nuclear Information System (INIS)
Min, Min; Zhang, Zhibo
2014-01-01
The objective of this study is to understand how the cloud fraction diurnal cycle and sub-grid cloud optical thickness variability influence the all-sky direct aerosol radiative forcing (DARF). We focus on the southeast Atlantic region, where transported smoke is often observed above low-level water clouds during burning seasons. We use CALIOP observations to derive the optical properties of aerosols. We developed two diurnal cloud fraction variation models. One is based on sinusoidal fitting of MODIS observations from the Terra and Aqua satellites. The other is based on high-temporal-frequency diurnal cloud fraction observations from SEVIRI on board a geostationary satellite. Both models indicate a strong cloud fraction diurnal cycle over the southeast Atlantic region. Sensitivity studies indicate that using a constant cloud fraction corresponding to the Aqua local equatorial crossing time (1:30 PM) generally leads to an underestimated (less positive) diurnal mean DARF even if solar diurnal variation is considered. Using the cloud fraction corresponding to the Terra local equatorial crossing time (10:30 AM) generally leads to overestimation. The biases are typically around 10–20%, but can exceed 50%. The influence of sub-grid cloud optical thickness variability on DARF is studied utilizing the cloud optical thickness histogram available in MODIS Level-3 daily data. Similar to previous studies, we found the above-cloud smoke in the southeast Atlantic region has a strong warming effect at the top of the atmosphere. However, because of the plane-parallel albedo bias, the warming effect of above-cloud smoke could be significantly overestimated if the grid mean, instead of the full histogram, of cloud optical thickness is used in the computation. This bias generally increases with increasing above-cloud aerosol optical thickness and sub-grid cloud optical thickness inhomogeneity. Our results suggest that the cloud diurnal cycle and sub-grid cloud variability are important factors
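The sinusoidal-fitting idea can be illustrated with a small sketch. Since the two MODIS snapshots provide only two constraints, this assumes a fixed peak time for the 24-h harmonic; the 06:00 peak and the sample cloud fractions are illustrative assumptions, not values from the study.

```python
import math

# Illustrative sketch (assumed form): a 24-h sinusoid
#   cf(t) = m + A * cos(2*pi*(t - t_peak)/24)
# with a fixed peak time t_peak, so the Terra (10:30) and Aqua (13:30)
# snapshots determine the mean m and amplitude A.
T_TERRA, T_AQUA = 10.5, 13.5  # local equatorial crossing times (hours)

def fit_diurnal(cf_terra, cf_aqua, t_peak=6.0):
    c1 = math.cos(2 * math.pi * (T_TERRA - t_peak) / 24.0)
    c2 = math.cos(2 * math.pi * (T_AQUA - t_peak) / 24.0)
    A = (cf_terra - cf_aqua) / (c1 - c2)  # solve the 2x2 linear system
    m = cf_terra - A * c1
    return m, A

def cloud_fraction(t, m, A, t_peak=6.0):
    return m + A * math.cos(2 * math.pi * (t - t_peak) / 24.0)

m, A = fit_diurnal(0.8, 0.6)  # illustrative morning-peaked cloud fractions
```

By construction the fitted curve passes through both snapshots, so the diurnal-mean forcing can then be evaluated with a time-varying rather than constant cloud fraction.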
Verification of Simulation Results Using Scale Model Flight Test Trajectories
National Research Council Canada - National Science Library
Obermark, Jeff
2004-01-01
.... A second compromise scaling law was investigated as a possible improvement. For ejector-driven events at minimum sideslip, the most important variables for scale model construction are the mass moment of inertia and ejector...
Modelling across bioreactor scales: methods, challenges and limitations
DEFF Research Database (Denmark)
Gernaey, Krist
Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial-scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial-scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what …
Modeling Lactococcus lactis using a genome-scale flux model
Directory of Open Access Journals (Sweden)
Nielsen Jens
2005-06-01
Full Text Available Abstract Background Genome-scale flux models are useful tools to represent and analyze microbial metabolism. In this work we reconstructed the metabolic network of the lactic acid bacterium Lactococcus lactis and developed a genome-scale flux model able to simulate and analyze network capabilities and whole-cell function under aerobic and anaerobic continuous cultures. Flux balance analysis (FBA) and minimization of metabolic adjustment (MOMA) were used as modeling frameworks. Results The metabolic network was reconstructed using the annotated genome sequence from L. lactis ssp. lactis IL1403 together with physiological and biochemical information. The established network comprised a total of 621 reactions and 509 metabolites, representing the overall metabolism of L. lactis. Experimental data reported in the literature were used to fit the model to phenotypic observations. Regulatory constraints had to be included to simulate certain metabolic features, such as the shift from homolactic to heterolactic fermentation. A minimal medium for in silico growth was identified, indicating the requirement of four amino acids in addition to a sugar. Remarkably, de novo biosynthesis of four other amino acids was observed even when all amino acids were supplied, which is in good agreement with experimental observations. Additionally, enhanced metabolic engineering strategies for improved diacetyl-producing strains were designed. Conclusion The L. lactis metabolic network can now be used for a better understanding of lactococcal metabolic capabilities and potential, for the design of enhanced metabolic engineering strategies and for integration with other types of 'omic' data, to assist in finding new information on cellular organization and function.
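The FBA step named above amounts to a linear program: maximize a growth flux subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on a toy two-metabolite network (not the 621-reaction L. lactis model) could look like:

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA sketch (assumed network, not L. lactis): maximize the growth
# flux v3 subject to steady state S v = 0 and flux bounds. linprog
# minimizes, so the objective is the negated growth flux.
# Metabolites: A, B.  Reactions: v1 uptake -> A, v2 A -> B, v3 B -> biomass.
S = np.array([[1.0, -1.0,  0.0],   # mass balance for A
              [0.0,  1.0, -1.0]])  # mass balance for B
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 mmol/gDW/h
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print(res.x)  # optimal flux distribution; growth equals the uptake bound
```

In a genome-scale model, S simply grows to 509 metabolites by 621 reactions, with measured uptake rates as bounds; MOMA replaces the linear objective with a quadratic distance to a reference flux distribution.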
Model of cosmology and particle physics at an intermediate scale
International Nuclear Information System (INIS)
Bastero-Gil, M.; Di Clemente, V.; King, S. F.
2005-01-01
We propose a model of cosmology and particle physics in which all relevant scales arise in a natural way from an intermediate string scale. We are led to assign the string scale to the intermediate scale M* ∼ 10^13 GeV by four independent pieces of physics: electroweak symmetry breaking; the μ parameter; the axion scale; and the neutrino mass scale. The model involves hybrid inflation with the waterfall field N being responsible for generating the μ term, the right-handed neutrino mass scale, and the Peccei-Quinn symmetry breaking scale. The large scale structure of the Universe is generated by the lightest right-handed sneutrino playing the role of a coupled curvaton. We show that the correct curvature perturbations may be successfully generated providing the lightest right-handed neutrino is weakly coupled in the seesaw mechanism, consistent with sequential dominance
Alpha-modeling strategy for LES of turbulent mixing
Geurts, Bernard J.; Holm, Darryl D.; Drikakis, D.; Geurts, B.J.
2002-01-01
The α-modeling strategy is followed to derive a new subgrid parameterization of the turbulent stress tensor in large-eddy simulation (LES). The LES-α modeling yields an explicitly filtered subgrid parameterization which contains the filtered nonlinear gradient model as well as a model which
Directory of Open Access Journals (Sweden)
A. Endalamaw
2017-09-01
Full Text Available Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes, since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub
Embedding complex hydrology in the climate system - towards fully coupled climate-hydrology models
DEFF Research Database (Denmark)
Butts, M.; Rasmussen, S.H.; Ridler, M.
2013-01-01
Motivated by the need to develop better tools to understand the impact of future management and climate change on water resources, we present a set of studies with the overall aim of developing a fully dynamic coupling between a comprehensive hydrological model, MIKE SHE, and a regional climate model. First, … distributed parameters using satellite remote sensing. Secondly, field data are used to investigate the effects of model resolution and parameter scales for use in a coupled model. Finally, the development of the fully coupled climate-hydrology model is described, and some of the challenges associated with coupling models for hydrological processes on sub-grid scales of the regional climate model are presented.
Magnetic Barkhausen noise: modeling and scaling
International Nuclear Information System (INIS)
Rodríguez-Pérez, Jorge L.; Pérez Benítez, José A.
2008-01-01
Magnetic Barkhausen noise is produced by lattice defects and is reflected in the abrupt changes that take place in the magnetization of the material under study. This presupposes a complexity that depends on the various factors influencing its occurrence and on internal changes in the system. Noise studies use three fundamental quantities: the duration of the signal, the area under the curve, and the energy of the signal; from these, other commonly used quantities are defined: the root-mean-square voltage of the signal and the signal amplitude (maximum peak voltage). Investigating the phenomenon in this way entails a statistical analysis of the behaviour of the signal as a result of the set of changes occurring in the material, showing the complexity of the system and the importance of scaling laws. This paper investigates the relationship between magnetic Barkhausen noise, scaling laws and complexity, using samples of ASTM A36 structural steel that were subjected to mechanical deformation by traction and compression. A statistical analysis is performed to determine the complexity, and the values of the fundamental quantities and of the scaling laws are reported for different deformations, showing the connection between the root-mean-square voltage, the depth of the sample, the characteristics of the scaling laws and the complexity of a pseudo-random system.
Drikakis, D.
2002-07-01
The paper describes the use of numerical methods for hyperbolic conservation laws as an embedded turbulence modelling approach. Different Godunov-type schemes are utilized in computations of Burgers' turbulence and a two-dimensional mixing layer. The schemes include a total variation diminishing, characteristic-based scheme which is developed in this paper using the flux limiter approach. The embedded turbulence modelling property of the above methods is demonstrated through coarsely resolved large eddy simulations with and without subgrid scale models.
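A minimal example of the scheme class named above is sketched below: a minmod-limited, total-variation-diminishing finite-volume step for the inviscid Burgers equation on a periodic grid. It uses a local Lax-Friedrichs flux for simplicity and is an illustration of the scheme class, not the paper's characteristic-based scheme.

```python
import math

# Illustrative minmod-limited TVD finite-volume step for the inviscid
# Burgers equation u_t + (u**2/2)_x = 0 on a periodic grid, with a local
# Lax-Friedrichs numerical flux.
def minmod(a, b):
    if a * b <= 0:
        return 0.0
    return math.copysign(min(abs(a), abs(b)), a)

def step(u, dx, dt):
    n = len(u)
    # Limited slope in each cell (periodic indexing).
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    flux = []
    for i in range(n):
        ul = u[i] + 0.5 * slope[i]                      # left state, face i+1/2
        ur = u[(i + 1) % n] - 0.5 * slope[(i + 1) % n]  # right state
        a = max(abs(ul), abs(ur))                       # local wave-speed bound
        flux.append(0.25 * (ul * ul + ur * ur) - 0.5 * a * (ur - ul))
    # Conservative update: flux[-1] is the periodic face i-1/2 of cell 0.
    return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

u0 = [math.sin(2 * math.pi * i / 50) for i in range(50)]
u1 = step(u0, dx=0.02, dt=0.01)  # CFL = dt*max|u|/dx = 0.5
```

The flux-difference form keeps the scheme conservative, and the limited slopes supply the nonlinear dissipation that acts as the embedded subgrid model in coarsely resolved runs.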
The Goddard multi-scale modeling system with unified physics
Directory of Open Access Journals (Sweden)
W.-K. Tao
2009-08-01
Full Text Available Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.
This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.
Microphysics in Multi-scale Modeling System with Unified Physics
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.
Scaling considerations for modeling the in situ vitrification process
International Nuclear Information System (INIS)
Langerman, M.A.; MacKinnon, R.J.
1990-09-01
Scaling relationships for modeling the in situ vitrification waste remediation process are documented based upon similarity considerations derived from fundamental principles. Requirements for maintaining temperature and electric potential field similarity between the model and the prototype are determined as well as requirements for maintaining similarity in off-gas generation rates. A scaling rationale for designing reduced-scale experiments is presented and the results are assessed numerically. 9 refs., 6 figs
Using LISREL to Evaluate Measurement Models and Scale Reliability.
Fleishman, John; Benson, Jeri
1987-01-01
LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…
Coulomb-gas scaling, superfluid films, and the XY model
International Nuclear Information System (INIS)
Minnhagen, P.; Nylen, M.
1985-01-01
Coulomb-gas-scaling ideas are invoked as a link between the superfluid density of two-dimensional ⁴He films and the XY model; the Coulomb-gas-scaling function epsilon(X) is extracted from experiments and is compared with Monte Carlo simulations of the XY model. The agreement is found to be excellent.
Measurement and Modelling of Scaling Minerals
DEFF Research Database (Denmark)
Villafafila Garcia, Ada
2005-01-01
Solid-liquid equilibrium of sulphate scaling minerals (SrSO4, BaSO4, CaSO4 and CaSO4·2H2O) at temperatures up to 300°C and pressures up to 1000 bar is described in chapter 4. Results for the binary systems (M2+, SO4^2-)-H2O; the ternary systems (Na+, M2+, SO4^2-)-H2O and (Na+, M2+, Cl-)-H2O; and the quaternary systems (Na+, M2+)(Cl-, SO4^2-)-H2O are presented. M2+ stands for Ba2+, Ca2+, or Sr2+. Chapter 5 is devoted to the correlation and prediction of vapour-liquid-solid equilibria for different carbonate systems causing scale problems (CaCO3, BaCO3, SrCO3, and MgCO3), covering the temperature range from 0 to 250°C and pressures up … -NaCl-Na2SO4-H2O are given. M2+ stands for Ca2+, Mg2+, Ba2+, and Sr2+. This chapter also includes an analysis of the CaCO3-MgCO3-CO2-H2O system. Chapter 6 deals with the system NaCl-H2O. Available data for that system at high temperatures and/or pressures are addressed, and sodium chloride solubility …
Macro scale models for freight railroad terminals.
2016-03-02
The project has developed a yard capacity model for macro-level analysis. The study considers the detailed sequence and scheduling in classification yards and their impacts on yard capacity, simulates typical freight railroad terminals, and statistic...
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
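The scale-invariance step can be illustrated by the assumed power-law relation between GPD scale parameters at two durations; the exponent and parameter values below are illustrative, not the fitted Berlin or Bangalore values.

```python
# Illustrative sketch of the assumed scale-invariance relation: the GPD
# scale parameter for duration d follows a power law in duration,
#   sigma_d = sigma_D * (d / D) ** eta,
# while the shape parameter xi is taken as duration-invariant. All values
# below are made up for illustration, not fitted to any record.
def gpd_scale(sigma_D, D, d, eta):
    return sigma_D * (d / D) ** eta

sigma_24h = 12.0  # GPD scale for 24-h exceedances (mm), illustrative
eta = 0.75        # scaling exponent, illustrative
sigma_1h = gpd_scale(sigma_24h, D=24.0, d=1.0, eta=eta)
```

In the study's Bayesian setting, eta carries a posterior distribution, so the uncertainty in the exponent propagates directly into the short-duration scale parameters and return levels.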
Scale gauge symmetry and the standard model
International Nuclear Information System (INIS)
Sola, J.
1990-01-01
This paper speculates on a version of the standard model of the electroweak and strong interactions coupled to gravity and equipped with a spontaneously broken, anomalous, conformal gauge symmetry. The scalar sector is virtually absent in the minimal model, but in the general case it shows up in the form of a nonlinear harmonic map Lagrangian. A Euclidean approach to the cosmological constant problem is also addressed in this framework
Large-scale modelling of neuronal systems
International Nuclear Information System (INIS)
Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.
2009-01-01
The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential-equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
Multi-scale modeling for sustainable chemical production.
Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J
2013-09-01
With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The use of scale models in impact testing
International Nuclear Information System (INIS)
Donelan, P.J.; Dowling, A.R.
1985-01-01
Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)
Scale model helps Duke untie construction snags
International Nuclear Information System (INIS)
Anon.
1977-01-01
A nuclear power plant model, only 60 percent complete, has helped Duke Power identify over 150 major design interferences, which, when resolved, will help cut capital expense and eliminate scheduling problems that normally crop up as revisions are made during actual plant construction. The model has been used by construction, steam production, and design personnel to recommend changes that should improve material handling, operations, and maintenance procedures as well as simplifying piping and cabling. The company has already saved many man-hours in material take-off, material management, and detailed drafting and expects to save even more with greater use of, and improvement in, its modeling program. Duke's modeling program was authorized and became operational in November 1974, with the first model to be the Catawba Nuclear Station. This plant is a two-unit station using Westinghouse nuclear steam supply systems in tandem with General Electric turbine-generators, horizontal feedwater heaters, and Foster Wheeler triple pressure condensers. Each unit is rated 1142 MWe
Planck-scale corrections to axion models
International Nuclear Information System (INIS)
Barr, S.M.; Seckel, D.
1992-01-01
It has been argued that quantum gravitational effects will violate all nonlocal symmetries. Peccei-Quinn symmetries must therefore be an ''accidental'' or automatic consequence of local gauge symmetry. Moreover, higher-dimensional operators suppressed by powers of M Pl are expected to explicitly violate the Peccei-Quinn symmetry. Unless these operators are of dimension d≥10, axion models do not solve the strong CP problem in a natural fashion. A small gravitationally induced contribution to the axion mass has little if any effect on the density of relic axions. If d=10, 11, or 12 these operators can solve the axion domain-wall problem, and we describe a simple class of Kim-Shifman-Vainshtein-Zakharov axion models where this occurs. We also study the astrophysics and cosmology of ''heavy axions'' in models where 5≤d≤10
Scaling limit for the Dereziński-Gérard model
OHKUBO, Atsushi
2010-01-01
We consider a scaling limit for the Dereziński-Gérard model. We derive an effective potential by taking a scaling limit of the total Hamiltonian of the Dereziński-Gérard model. Our method of deriving the effective potential is independent of whether or not the quantum field has a nonnegative mass. As an application of the theory developed in the present paper, we derive an effective potential for the Nelson model.
BLEVE overpressure: multi-scale comparison of blast wave modeling
International Nuclear Information System (INIS)
Laboureur, D.; Buchlin, J.M.; Rambaud, P.; Heymes, F.; Lapebie, E.
2014-01-01
BLEVE overpressure modeling has already been widely studied, but only a few validations including the scale effect have been made. After a short overview of the main models available in the literature, a comparison is made with measurements at different scales, taken from previous studies or coming from experiments performed in the frame of this research project. A discussion of the best model to use in different cases is finally proposed. (authors)
Dynamically Scaled Model Experiment of a Mooring Cable
Directory of Open Access Journals (Sweden)
Lars Bergdahl
2016-01-01
Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
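The length scale quoted above follows directly from the two chain lengths, and implies the usual Froude-similitude factors; the small computation below reproduces the 1:37.6 scale (the Froude relations are standard assumptions, not derived in the abstract).

```python
import math

# The 33 m model chain standing in for a 1240 m prototype gives the length
# scale directly. Under Froude similitude (assumed here): times and wave
# celerities scale as sqrt(lambda), forces as lambda**3 at equal density.
# The experiment's special feature is that the longitudinal elastic-wave
# celerity was also made to scale as sqrt(lambda), so tension shock waves
# are reproduced correctly.
lam = 1240.0 / 33.0          # geometric scale, about 37.6
time_scale = math.sqrt(lam)  # model time -> prototype time
force_scale = lam ** 3       # model force -> prototype force
print(round(lam, 1))  # -> 37.6
```

With these factors, a snatch load measured in the model maps to prototype scale by multiplying forces by lambda cubed and times by the square root of lambda.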
Analysis of chromosome aberration data by hybrid-scale models
International Nuclear Information System (INIS)
Indrawati, Iwiq; Kumazawa, Shigeru
2000-02-01
This paper presents a new methodology for analyzing data of chromosome aberrations, which is useful to understand the characteristics of dose-response relationships and to construct the calibration curves for the biological dosimetry. The hybrid scale of linear and logarithmic scales yields a particular plotting paper, where the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, and these are conveniently called hybrid scale models. One can systematically select the best-fit model among the nine models by checking the conditions for a straight line of data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of the increased number of model parameters. We showed that the hybrid-hybrid model (both variables of dose and response using the hybrid scale) provides the best-fit straight lines to be used as the reliable and readable calibration curves of chromosome aberrations. (author)
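The idea of selecting a model by testing which pair of axes renders the data a straight line can be sketched as follows; the least-squares selection rule and the four-transform set are an illustrative simplification of the nine hybrid-scale models, not the paper's procedure.

```python
import math

# Illustrative selection rule (a simplification of the nine hybrid-scale
# models): fit a straight line on each candidate pair of transformed axes
# and keep the transform with the smallest squared residual. Assumes
# positive data wherever a log transform is applied.
def best_axes(xs, ys):
    candidates = {
        "linear":    (lambda v: v, lambda v: v),
        "semilog-x": (math.log,    lambda v: v),
        "semilog-y": (lambda v: v, math.log),
        "log-log":   (math.log,    math.log),
    }
    best_name, best_resid = None, None
    for name, (fx, fy) in candidates.items():
        u = [fx(x) for x in xs]
        w = [fy(y) for y in ys]
        n = len(u)
        mu, mw = sum(u) / n, sum(w) / n
        slope = sum((a - mu) * (b - mw) for a, b in zip(u, w)) \
            / sum((a - mu) ** 2 for a in u)
        resid = sum((b - mw - slope * (a - mu)) ** 2 for a, b in zip(u, w))
        if best_resid is None or resid < best_resid:
            best_name, best_resid = name, resid
    return best_name

# y = x**2 plots as a straight line only on log-log axes.
print(best_axes([1, 2, 3, 4], [1, 4, 9, 16]))  # -> log-log
```

The hybrid plotting paper effectively performs this comparison graphically, with the continuous linear-to-log transition adding the intermediate cases beyond these four.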
Flavor gauge models below the Fermi scale
Babu, K. S.; Friedland, A.; Machado, P. A. N.; Mocioiu, I.
2017-12-01
The mass and weak interaction eigenstates for the quarks of the third generation are very well aligned, an empirical fact for which the Standard Model offers no explanation. We explore the possibility that this alignment is due to an additional gauge symmetry in the third generation. Specifically, we construct and analyze an explicit, renormalizable model with a gauge boson, X, corresponding to the B - L symmetry of the third family. Having a relatively light (in the MeV to multi-GeV range), flavor-nonuniversal gauge boson results in a variety of constraints from different sources. By systematically analyzing 20 different constraints, we identify the most sensitive probes: kaon, B+, D+ and Upsilon decays, D0-D̄0 mixing, atomic parity violation, and neutrino scattering and oscillations. For the new gauge coupling g_X in the range 10^-2 to 10^-4 the model is shown to be consistent with the data. Possible ways of testing the model in b physics, top and Z decays, direct collider production and neutrino oscillation experiments, where one can observe nonstandard matter effects, are outlined. The choice of leptons to carry the new force is ambiguous, resulting in additional phenomenological implications, such as non-universality in semileptonic bottom decays. The proposed framework provides interesting connections between neutrino oscillations, flavor and collider physics.
[Unfolding item response model using best-worst scaling].
Ikehara, Kazuya
2015-02-01
In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated in terms of the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimulus among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square errors of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimuli estimates generated with the proposed models and those of Usami (2011).
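The core mechanics can be sketched in a few lines. We assume a squared-distance unfolding utility u_i = -(theta - beta_i)^2 and a maxdiff-style choice rule P(best=i, worst=j) proportional to exp(u_i - u_j); both are common conventions standing in for the paper's exact formulation:

```python
# Illustrative best-worst choice probabilities under an unfolding utility.
# The utility u_i = -(theta - beta_i)^2 and the maxdiff rule
# P(best=i, worst=j) ∝ exp(u_i - u_j), i != j, are assumptions for this sketch.
import math
from itertools import permutations

def best_worst_probs(theta, betas):
    """Probability of each ordered (best, worst) pair for one presented subset."""
    u = [-(theta - b) ** 2 for b in betas]
    weights = {(i, j): math.exp(u[i] - u[j])
               for i, j in permutations(range(len(betas)), 2)}
    z = sum(weights.values())
    return {pair: w / z for pair, w in weights.items()}

# person at theta=0 judging three stimuli located at -1.0, 0.2 and 2.0
probs = best_worst_probs(theta=0.0, betas=[-1.0, 0.2, 2.0])
best = max(probs, key=probs.get)
print(best)  # the stimulus nearest theta is best, the farthest is worst: (1, 2)
```

Estimation then proceeds by maximizing the likelihood of observed best-worst choices over theta and the beta locations; using both the best and the worst pick per subset is what gives the BWU model its information advantage over the best-only BU model.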
Sizing and scaling requirements of a large-scale physical model for code validation
International Nuclear Information System (INIS)
Khaleel, R.; Legore, T.
1990-01-01
Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated
Pelamis wave energy converter. Verification of full-scale control using a 7th scale model
Energy Technology Data Exchange (ETDEWEB)
NONE
2005-07-01
The Pelamis Wave Energy Converter is a new concept for converting wave energy for several applications including generation of electric power. The machine is flexibly moored and swings to meet the water waves head-on. The system is semi-submerged and consists of cylindrical sections linked by hinges. The mechanical operation is described in outline. A one-seventh scale model was built and tested and the outcome was sufficiently successful to warrant the building of a full-scale prototype. In addition, a one-twentieth scale model was built and has contributed much to the research programme. The work is supported financially by the DTI.
Atomic-scale modeling of cellulose nanocrystals
Wu, Xiawa
Cellulose nanocrystals (CNCs), the most abundant nanomaterials in nature, are recognized as one of the most promising candidates to meet the growing demand for green, bio-degradable and sustainable nanomaterials for future applications. CNCs draw significant interest due to their high axial elasticity and low density-elasticity ratio, both of which have been extensively researched over the years. In spite of the great potential of CNCs as functional nanoparticles for nanocomposite materials, a fundamental understanding of CNC properties and their role in composite property enhancement is not available. In this work, CNCs are studied using the molecular dynamics simulation method to predict their material behaviors at the nanoscale. (a) Mechanical properties include tensile deformation in the elastic and plastic regions using molecular mechanics, molecular dynamics and nanoindentation methods. This allows comparisons between the methods and closer connectivity to experimental measurement techniques. The elastic moduli in the axial and transverse directions are obtained and the results are found to be in good agreement with previous research. The ultimate properties in plastic deformation are reported for the first time and failure mechanisms are analyzed in detail. (b) The thermal expansion of CNC crystals and films is studied. It is proposed that CNC film thermal expansion is due primarily to single crystal expansion and CNC-CNC interfacial motion. The relative contributions of inter- and intra-crystal responses to heating are explored. (c) Friction at cellulose-CNC and diamond-CNC interfaces is studied. The effects of sliding velocity, normal load, and relative angle between sliding surfaces are predicted. The cellulose-CNC model is analyzed in terms of the hydrogen bonding effect, and the diamond-CNC model complements some of the discussion of the previous model. In summary, CNC material properties and molecular models are both studied in this research, contributing to
Sensitivities in global scale modeling of isoprene
Directory of Open Access Journals (Sweden)
R. von Kuhlmann
2004-01-01
A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 to an average over the sensitivity runs of 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.
Scaling of musculoskeletal models from static and dynamic trials
DEFF Research Database (Denmark)
Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark
2015-01-01
Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale ‘generic’ musculoskeletal models to match segment lengths and joint...... three scaling methods to an inverse dynamics-based musculoskeletal model and compared predicted knee joint contact forces to those measured with an instrumented prosthesis during gait. Additionally, a Monte Carlo study was used to investigate the sensitivity of the knee joint contact force to random...
MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS
Energy Technology Data Exchange (ETDEWEB)
Y.S. Wu
2005-08-24
This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on
Anomalous scaling in an age-dependent branching model
Keller-Schmidt, Stephanie; Tugrul, Murat; Eguiluz, Victor M.; Hernandez-Garcia, Emilio; Klemm, Konstantin
2010-01-01
We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age $\\tau$ as $\\tau^{-\\alpha}$. Depending on the exponent $\\alpha$, the scaling of tree depth with tree size $n$ displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition ($\\alpha=1$) tree depth grows as $(\\log n)^2$. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
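The growth rule is simple enough to simulate directly. The sketch below assumes a discrete-time version in which, at each step, one existing node is chosen to branch with probability proportional to age^-alpha (age measured in elapsed steps); with alpha = 0 it reduces to the random recursive tree, whose depth grows logarithmically:

```python
# Monte-Carlo sketch of the age-dependent branching model. The discrete-time
# implementation (one new leaf per step, parent chosen with weight age^-alpha)
# is our assumption about how to realise the model described in the abstract.
import random

def grow_tree(n, alpha, rng):
    """Grow a tree to n nodes; return the depth of every node."""
    depth = [0]   # node 0 is the root
    birth = [0]   # step at which each node appeared
    for step in range(1, n):
        ages = [step - b for b in birth]          # every age is >= 1 here
        weights = [a ** (-alpha) for a in ages]
        parent = rng.choices(range(len(depth)), weights=weights)[0]
        depth.append(depth[parent] + 1)
        birth.append(step)
    return depth

rng = random.Random(1)
depths = grow_tree(2000, alpha=0.0, rng=rng)      # alpha=0: random recursive tree
print(len(depths), max(depths), sum(depths) / len(depths))
```

For alpha = 0 the mean depth stays near ln(n); re-running with larger alpha concentrates branching on young nodes and pushes the depth toward the algebraic-growth regime, with the (log n)^2 scaling appearing at the transition alpha = 1.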
Logarithmic corrections to scaling in the two-dimensional XY-model
International Nuclear Information System (INIS)
Kenna, R.; Irving, A.C.
1995-01-01
We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form for the density of these zeroes. Assuming that finite-size scaling holds, we show that there have to exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae and we identify them numerically. The method presented here can be used to check the compatibility of scaling behaviour of odd and even thermodynamic functions in other models too. ((orig.))
a Model Study of Small-Scale World Map Generalization
Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.
2018-04-01
With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics. There is a surging demand worldwide for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale production of large-format global maps, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved model (with map and data separated) for map-making generalization, which separates geographic data from mapping data, mainly comprising a cross-platform symbol library and an automatic map-making knowledge engine. With respect to the cross-platform symbol library, the symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms and over 2500 relevant functional modules. In order to evaluate the accuracy and visual effect of our model for topographic maps and thematic maps, we take world map generalization at small scale as an example. After the map generalization process, combining and simplifying the scattered islands makes the map more explicit at 1:2.1 billion scale, and the map features more complete and accurate. Not only does it enhance map generalization at various scales significantly, but it also achieves integration among map-makings of various scales, suggesting that this model provides a reference for cartographic generalization at various scales.
Witte, M.; Morrison, H.; Jensen, J. B.; Bansemer, A.; Gettelman, A.
2017-12-01
The spatial covariance of cloud and rain water (or in simpler terms, small and large drops, respectively) is an important quantity for accurate prediction of the accretion rate in bulk microphysical parameterizations that account for subgrid variability using assumed probability density functions (pdfs). Past diagnoses of this covariance from remote sensing, in situ measurements and large eddy simulation output have implicitly assumed that the magnitude of the covariance is insensitive to grain size (i.e. horizontal resolution) and averaging length, but this is not the case because both cloud and rain water exhibit scale invariance across a wide range of scales - from tens of centimeters to tens of kilometers in the case of cloud water, a range that we will show is primarily limited by instrumentation and sampling issues. Since the individual variances systematically vary as a function of spatial scale, it should be expected that the covariance follows a similar relationship. In this study, we quantify the scaling properties of cloud and rain water content and their covariability from high-frequency in situ aircraft measurements of marine stratocumulus taken over the southeastern Pacific Ocean aboard the NSF/NCAR C-130 during the VOCALS-REx field experiment of October-November 2008. First we confirm that cloud and rain water scale in distinct manners, indicating that there is a statistically and potentially physically significant difference in the spatial structure of the two fields. Next, we demonstrate that the covariance is a strong function of spatial scale, which implies important caveats regarding the ability of limited-area models with domains smaller than a few tens of kilometers across to accurately reproduce the spatial organization of precipitation. Finally, we present preliminary work on the development of a scale-aware parameterization of cloud-rain water subgrid covariability based on multifractal analysis, intended for application in large-scale model
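The scale dependence of a covariance under coarse-graining is easy to demonstrate on synthetic data. The toy below uses two white-noise series sharing a small-scale component (a deliberate simplification; real cloud and rain water fields are scale-invariant/multifractal, and no VOCALS-REx data is involved), so the covariance of the block means shrinks roughly as 1/(block size):

```python
# Toy demonstration that a diagnosed covariance depends on averaging scale:
# two partially correlated synthetic series are coarse-grained at several
# block sizes and the covariance of the block means is recomputed each time.
import random

def block_means(x, size):
    """Coarse-grain a series into non-overlapping block averages."""
    return [sum(x[i:i + size]) / size for i in range(0, len(x) - size + 1, size)]

def cov(x, y):
    """Population covariance of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

rng = random.Random(0)
n = 4096
shared = [rng.gauss(0, 1) for _ in range(n)]     # small-scale component common to both
cloud = [s + rng.gauss(0, 1) for s in shared]
rain = [s + rng.gauss(0, 1) for s in shared]

for size in (1, 16, 256):
    print(size, round(cov(block_means(cloud, size), block_means(rain, size)), 4))
```

Because the shared component here is white, averaging destroys the covariance with scale; for scale-invariant fields the covariance instead follows a power law in averaging length, which is the behaviour the abstract's scale-aware parameterization targets.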
Reference Priors for the General Location-Scale Model
Fernández, C.; Steel, M.F.J.
1997-01-01
The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately
Penalized Estimation in Large-Scale Generalized Linear Array Models
DEFF Research Database (Denmark)
Lund, Adam; Vincent, Martin; Hansen, Niels Richard
2017-01-01
Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
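The design-matrix-free computation alluded to above rests on a standard Kronecker identity: for a tensor product design matrix A ⊗ B, the product (A ⊗ B) vec(X) equals vec(B X A^T), so the full Kronecker matrix never needs to be formed or stored. A small pure-Python check (toy matrices of our own choosing):

```python
# Verify (A ⊗ B) vec(X) == vec(B X A^T) with column-major vec; this identity
# is what lets GLAM-style algorithms avoid building the tensor product
# design matrix explicitly.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def vec(X):
    """Column-major stacking of a matrix into a vector."""
    return [X[l][j] for j in range(len(X[0])) for l in range(len(X))]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
X = [[1, 0], [2, 3]]     # X has (cols of B) rows and (cols of A) columns

lhs = matvec(kron(A, B), vec(X))      # explicit Kronecker product: O(large)
At = [list(r) for r in zip(*A)]
rhs = vec(matmul(matmul(B, X), At))   # matrix-free route: two small matmuls
print(lhs == rhs, lhs)                # True [8, 9, 18, 21]
```

For d-dimensional arrays the same trick applies factor by factor, which is why computation and storage scale with the marginal matrices rather than their tensor product.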
Tang, Shuaiqi
Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to the precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases on cloud properties which could not be fully explained by the uncertainty from the large-scale forcing
Atomic scale simulations for improved CRUD and fuel performance modeling
Energy Technology Data Exchange (ETDEWEB)
Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-06
A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.
Genome-scale modeling for metabolic engineering.
Simeonidis, Evangelos; Price, Nathan D
2015-03-01
We focus on the application of constraint-based methodologies and, more specifically, flux balance analysis in the field of metabolic engineering, and enumerate recent developments and successes of the field. We also review computational frameworks that have been developed with the express purpose of automatically selecting optimal gene deletions for achieving improved production of a chemical of interest. The application of flux balance analysis methods in rational metabolic engineering requires a metabolic network reconstruction and a corresponding in silico metabolic model for the microorganism in question. For this reason, we additionally present a brief overview of automated reconstruction techniques. Finally, we emphasize the importance of integrating metabolic networks with regulatory information, an area that we expect will become increasingly important for metabolic engineering, and present recent developments in the field of metabolic and regulatory integration.
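The structure of a flux balance analysis problem (maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds) can be shown on a toy pathway small enough to solve without an LP solver. The network and bounds below are invented for illustration; genome-scale models pose the same problem over thousands of reactions and hand it to linear programming software:

```python
# Minimal flux balance analysis on a toy linear pathway, solved in closed form.
# reactions: v1 (uptake -> A), v2 (A -> B), v3 (B -> biomass)
S = [  # rows = internal metabolites (A, B), cols = reactions (v1, v2, v3)
    [1, -1, 0],   # A: produced by v1, consumed by v2
    [0, 1, -1],   # B: produced by v2, consumed by v3
]
upper = [10.0, 8.0, 15.0]  # illustrative flux upper bounds

# For a linear chain, S v = 0 forces v1 = v2 = v3, so maximizing the biomass
# flux v3 just means hitting the tightest bound along the path.
v_opt = min(upper)
v = [v_opt, v_opt, v_opt]

# check the steady-state constraint S v = 0 for every metabolite
residual = [sum(S[m][r] * v[r] for r in range(3)) for m in range(2)]
print(v_opt, residual)  # 8.0 [0.0, 0.0]
```

Gene-deletion design frameworks of the kind reviewed above repeatedly re-solve such problems with selected reactions forced to zero and compare the resulting optimal production fluxes.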
Genome-scale biological models for industrial microbial systems.
Xu, Nan; Ye, Chao; Liu, Liming
2018-04-01
The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, which predict the interactions formed among genetic material, enzymes, and metabolites, constitute a systematic and comprehensive platform to analyze and optimize microbial growth and the production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their pertinent application in improving industrial microbial fermentation of biological products.
Particles and scaling for lattice fields and Ising models
International Nuclear Information System (INIS)
Glimm, J.; Jaffe, A.
1976-01-01
The conjectured inequality Γ(6) ≤ 0 is related to φ4 fields and the scaling limit for d-dimensional Ising models. Assuming Γ(6) ≤ 0, these φ4 fields are free fields unless the field strength renormalization Z^-1 diverges. (orig./BJ) [de
Multi-scale modeling strategies in materials science—The ...
Indian Academy of Sciences (India)
Unknown
Multi-scale models; quasicontinuum method; finite elements.
Nonpointlike-parton model with asymptotic scaling and with scaling violation at moderate Q2 values
International Nuclear Information System (INIS)
Chen, C.K.
1981-01-01
A nonpointlike-parton model is formulated on the basis of the assumption of energy-independent total cross sections of partons and the current-algebra sum rules. No specific strong-interaction Lagrangian density is introduced in this approach. This model predicts asymptotic scaling for the inelastic structure functions of nucleons on the one hand and scaling violation at moderate Q2 values on the other hand. The predicted scaling-violation patterns at moderate Q2 values are consistent with the observed scaling-violation patterns. A numerical fit of F2 functions is performed in order to demonstrate that the predicted scaling-violation patterns of this model at moderate Q2 values fit the data, and to see how the predicted asymptotic scaling behavior sets in at various x values. Explicit analytic forms of F2 functions are obtained from this numerical fit, and are compared in detail with the analytic forms of F2 functions obtained from the numerical fit of the quantum-chromodynamics (QCD) parton model. This comparison shows that this nonpointlike-parton model fits the data better than the QCD parton model, especially at large and small x values. Nachtmann moments are computed from the F2 functions of this model and are shown to agree well with the data. It is also shown that the two-dimensional plot of the logarithm of a nonsinglet moment versus the logarithm of another such moment is not a good way to distinguish this nonpointlike-parton model from the QCD parton model
Multi-scale modeling for sustainable chemical production
DEFF Research Database (Denmark)
Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus
2013-01-01
With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process...
Calibration of the Site-Scale Saturated Zone Flow Model
International Nuclear Information System (INIS)
Zyvoloski, G. A.
2001-01-01
The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M and O 1999a)
3-3-1 models at electroweak scale
International Nuclear Information System (INIS)
Dias, Alex G.; Montero, J.C.; Pleitez, V.
2006-01-01
We show that in 3-3-1 models there exists a natural relation among the SU(3)_L coupling constant g, the electroweak mixing angle θ_W, the mass of the W, and one of the vacuum expectation values, which implies that those models can be realized at low energy scales and, in particular, even at the electroweak scale. Thus, if these symmetries are realized in Nature, new physics may really be just around the corner.
International Nuclear Information System (INIS)
Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour
2007-01-01
Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will
Energy Technology Data Exchange (ETDEWEB)
Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour
2007-04-19
Pradhan, Aniruddhe; Akhavan, Rayhaneh
2017-11-01
The effect of collision model, subgrid-scale model, and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ …; LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
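The SRT (BGK) collision step compared above can be sketched in a few lines. This is a generic D2Q9 illustration of the single-relaxation-time operator, not the authors' solver; the lattice, weights, and relaxation time are the standard textbook choices:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    cu = c @ u                      # c_i . u for each direction
    usq = u @ u
    return rho * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def srt_collision(f, tau):
    """Single-relaxation-time (BGK) collision: relax f toward equilibrium."""
    rho = f.sum()                   # density  = zeroth moment of f
    u = (f @ c) / rho               # velocity = first moment / density
    return f - (f - equilibrium(rho, u)) / tau
```

Because the equilibrium is built from the moments of `f` itself, the collision conserves mass and momentum exactly; only the non-equilibrium part is relaxed at rate 1/tau.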
[Modeling continuous scaling of NDVI based on fractal theory].
Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng
2013-07-01
Scale effect is one of the important scientific problems of remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe how retrievals change across an entire series of scales; meanwhile, they face serious parameter-correction issues because imaging parameters vary between sensors (geometric correction, spectral correction, etc.). Using a single-sensor image, fractal methodology was employed to address these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for validation of NDVI. All of this demonstrates that fractal analysis is an effective methodology for studying the scaling of quantitative remote sensing.
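The scale effect described here arises because NDVI is a nonlinear function of the band values, so averaging the bands and then computing NDVI differs from computing NDVI and then averaging. A minimal sketch with synthetic reflectances (the band ranges and aggregation factor are illustrative, not from the paper):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def upscale(band, factor):
    """Aggregate a 2-D band by block-averaging factor x factor pixels."""
    h, w = band.shape
    return band.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.15, (64, 64))     # synthetic red reflectance
nir = rng.uniform(0.30, 0.60, (64, 64))     # synthetic NIR reflectance

avg_of_ndvi = upscale(ndvi(nir, red), 8)              # aggregate the retrieval
ndvi_of_avg = ndvi(upscale(nir, 8), upscale(red, 8))  # retrieve from aggregates
gap = np.abs(avg_of_ndvi - ndvi_of_avg).max()         # nonzero: the scale effect
```

The nonzero `gap` is the quantity whose behaviour across scales the fractal model is intended to describe.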
Scaling Analysis of Repository Heat Load for Reduced Dimensionality Models
International Nuclear Information System (INIS)
Michael T. Itamua and Clifford K. Ho
1998-01-01
The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment
Scaling, soil moisture and evapotranspiration in runoff models
Wood, Eric F.
1993-01-01
The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions under which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a 2nd-order linearization scheme. The performance of the algorithm is evaluated.
Variational Multi-Scale method with spectral approximation of the sub-scales.
Dia, Ben Mansour; Chacón-Rebollo, Tomas
2015-01-01
A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base
Properties of Brownian Image Models in Scale-Space
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup
2003-01-01
In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f² power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix…
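The 1/f² spectral law invoked in this abstract can be made concrete by spectral synthesis: random Fourier phases with amplitude proportional to 1/f yield a Brownian-type image. A sketch of that construction (illustrative, not the authors' derivation):

```python
import numpy as np

def brownian_image(n, seed=0):
    """Synthesize an n x n Brownian-type image by spectral synthesis:
    random phases with amplitude ~ 1/f, hence power spectrum ~ 1/f^2."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)               # radial spatial frequency
    f[0, 0] = np.inf                   # suppress the DC component
    spectrum = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n))) / f
    return np.fft.ifft2(spectrum).real
```

Averaging the resulting power spectrum over rings of constant |f| recovers the ~1/f² decay that the abstract ties to natural-image statistics.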
Nucleon electric dipole moments in high-scale supersymmetric models
International Nuclear Information System (INIS)
Hisano, Junji; Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi
2015-01-01
The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. It is therefore important to compare the nucleon EDMs with the electron EDM in order to discriminate among high-scale SUSY models.
Nucleon electric dipole moments in high-scale supersymmetric models
Energy Technology Data Exchange (ETDEWEB)
Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),Nagoya University,Nagoya 464-8602 (Japan); Department of Physics, Nagoya University,Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University,Nagoya 464-8602 (Japan)
2015-11-12
New phenomena in the standard no-scale supergravity model
Kelley, S; Nanopoulos, Dimitri V; Zichichi, Antonino; Kelley, S; Lopez, J L; Nanopoulos, D V; Zichichi, A
1994-01-01
We revisit the no-scale mechanism in the context of the simplest no-scale supergravity extension of the Standard Model. This model has the usual five-dimensional parameter space plus an additional parameter ξ_{3/2} ≡ m_{3/2}/m_{1/2}. We show how predictions of the model may be extracted over the whole parameter space. A necessary condition for the potential to be stable is Str M^4 > 0, which is satisfied if m_{3/2} ≲ 2 m_q̃. Order of magnitude calculations reveal a no-lose theorem guaranteeing interesting and potentially observable new phenomena in the neutral scalar sector of the theory which would constitute a ``smoking gun'' of the no-scale mechanism. This new phenomenology is model-independent and divides into three scenarios, depending on the ratio of the weak scale to the vev at the minimum of the no-scale direction. We also calculate the residual vacuum energy at the unification scale (C_0 m^4_{3/2}), and find that in typical models one must require C_0 > 10. Such constrai...
Toward micro-scale spatial modeling of gentrification
O'Sullivan, David
A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.
Qin, Yi; Lin, Yanluan; Xu, Shiming; Ma, Hsi-Yen; Xie, Shaocheng
2018-02-01
Low clouds strongly impact the radiation budget of the climate system, but their simulation in most GCMs has remained a challenge, especially over the subtropical stratocumulus region. Assuming a Gaussian distribution for the subgrid-scale total water and liquid water potential temperature, a new statistical cloud scheme is proposed and tested in NCAR Community Atmospheric Model version 5 (CAM5). The subgrid-scale variance is diagnosed from the turbulent and shallow convective processes in CAM5. The approach is able to maintain the consistency between cloud fraction and cloud condensate and thus alleviates the adjustment needed in the default relative humidity-based cloud fraction scheme. Short-term forecast simulations indicate that low cloud fraction and liquid water content, including their diurnal cycle, are improved due to a proper consideration of subgrid-scale variance over the southeastern Pacific Ocean region. Compared with the default cloud scheme, the new approach produced the mean climate reasonably well with improved shortwave cloud forcing (SWCF) due to more reasonable low cloud fraction and liquid water path over regions with predominant low clouds. Meanwhile, the SWCF bias over the tropical land regions is also alleviated. Furthermore, the simulated marine boundary layer clouds with the new approach extend further offshore and agree better with observations. The new approach is able to obtain the top of atmosphere (TOA) radiation balance with a slightly alleviated double ITCZ problem in preliminary coupled simulations. This study implies that a close coupling of cloud processes with other subgrid-scale physical processes is a promising approach to improve cloud simulations.
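The Gaussian assumption above yields closed-form expressions for cloud fraction and mean condensate that are mutually consistent, which is the consistency the abstract highlights. The following sketch uses the standard Gaussian saturation-deficit relations; it is illustrative and not the CAM5 implementation (variable names and magnitudes are assumptions):

```python
import math

def gaussian_cloud(qt_mean, qs, sigma):
    """Diagnose cloud fraction and mean condensate assuming subgrid total
    water qt ~ N(qt_mean, sigma^2), with saturation humidity qs.
    Returns (cloud_fraction, mean_liquid_water)."""
    q1 = (qt_mean - qs) / sigma                      # normalized saturation excess
    frac = 0.5 * (1.0 + math.erf(q1 / math.sqrt(2.0)))   # P(qt > qs)
    # mean liquid water: E[(qt - qs)^+] for a Gaussian qt
    ql = sigma * (q1 * frac + math.exp(-0.5 * q1 * q1) / math.sqrt(2.0 * math.pi))
    return frac, ql
```

By construction the diagnosed condensate is consistent with the fraction: as the grid-mean state saturates, the fraction tends to 1 and the condensate to qt_mean − qs, with no separate adjustment step.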
Lovejoy, S.; del Rio Amador, L.; Hébert, R.
2015-09-01
On scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; thus, in GCM (general circulation model) macroweather forecasts, the weather is a high-frequency noise. However, neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two-parameter model can perform as well as GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the large stochastic memories that we quantify. Since macroweather temporal (but not spatial) intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the ScaLIng Macroweather Model (SLIMM). SLIMM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the linear inverse modelling - LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes that there is no low-frequency memory, SLIMM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful stochastic forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had disappointing results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent
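The "huge memory" of fGn invoked above is visible directly in its autocorrelation function, which decays as a power law rather than exponentially. A sketch using the standard fGn formula (not the SLIMM code itself):

```python
def fgn_acf(k, H):
    """Autocorrelation of fractional Gaussian noise at integer lag k,
    for Hurst exponent 0 < H < 1.  For H > 1/2 it decays as a power
    law ~ k^(2H-2): the long memory exploited for forecasting."""
    return 0.5 * (abs(k + 1)**(2*H) - 2 * abs(k)**(2*H) + abs(k - 1)**(2*H))
```

For H = 0.5 the correlation vanishes at all nonzero lags (white noise, the implicit LIM limit), while for H near 1 it decays so slowly that values far in the past still carry forecast skill.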
Lovejoy, S.; del Rio Amador, L.; Hébert, R.
2015-03-01
At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high frequency weather to low frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts the weather is a high frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts and these show some skill even at decadal scales. We also compare
Description of Muzzle Blast by Modified Ideal Scaling Models
Directory of Open Access Journals (Sweden)
Kevin S. Fansler
1998-01-01
Gun blast data from a large variety of weapons are scaled and presented for both the instantaneous energy release and the constant energy deposition rate models. For both ideal explosion models, similar amounts of data scatter occur for the peak overpressure but the instantaneous energy release model correlated the impulse data significantly better, particularly for the region in front of the gun. Two parameters that characterize gun blast are used in conjunction with the ideal scaling models to improve the data correlation. The gun-emptying parameter works particularly well with the instantaneous energy release model to improve data correlation. In particular, the impulse, especially in the forward direction of the gun, is correlated significantly better using the instantaneous energy release model coupled with the use of the gun-emptying parameter. The use of the Mach disc location parameter improves the correlation only marginally. A predictive model is obtained from the modified instantaneous energy release correlation.
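The instantaneous energy release model rests on cube-root (Sachs/Hopkinson) scaling, which can be stated in one line; the numbers used here are illustrative, not drawn from the gun blast data set:

```python
def scaled_distance(r, energy):
    """Cube-root (Sachs/Hopkinson) scaled distance Z = r / E^(1/3).
    Under ideal instantaneous-energy-release scaling, observation
    points with equal Z see the same peak overpressure, regardless
    of the charge energy."""
    return r / energy ** (1.0 / 3.0)
```

Doubling the range while increasing the released energy eightfold leaves the scaled distance, and hence the ideally predicted peak overpressure, unchanged; the gun-emptying and Mach-disc parameters in the abstract are corrections applied on top of this collapse.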
Modelling of evapotranspiration at field and landscape scales. Abstract
DEFF Research Database (Denmark)
Overgaard, Jesper; Butts, M.B.; Rosbjerg, Dan
2002-01-01
observations from a nearby weather station. Detailed land-use and soil maps were used to set up the model. Leaf area index was derived from NDVI (Normalized Difference Vegetation Index) images. To validate the model at field scale the simulated evapotranspiration rates were compared to eddy...
Role of scaling in the statistical modelling of finance
Indian Academy of Sciences (India)
Modelling the evolution of a financial index as a stochastic process is a problem awaiting a full, satisfactory solution since it was first formulated by Bachelier in 1900. Here it is shown that the scaling with time of the return probability density function sampled from the historical series suggests a successful model.
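The scaling of the return probability density alluded to here takes the form p_t(r) = t^(−H) g(r t^(−H)); with a Gaussian shape function g and H = 1/2 this reduces to Bachelier's diffusion model. A sketch (the Gaussian choice of g is an illustrative assumption, not the paper's fitted form):

```python
import math

def scaled_return_pdf(r, t, H=0.5):
    """Scaling ansatz for the return PDF: p_t(r) = t^(-H) g(r * t^(-H)).
    Here g is a standard Gaussian, giving Bachelier-type diffusion
    for H = 1/2; other H values give anomalous scaling."""
    x = r * t ** (-H)
    return t ** (-H) * math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
```

The model's content is the collapse property: t^H p_t(r) depends on r and t only through the combination r/t^H, which is what one checks against historical return series.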
Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries
DEFF Research Database (Denmark)
Prunescu, Remus Mihail
with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors, capturing tank profiles and delays due to plug flows. This work publishes demonstration-scale real data for validation for the first time, showing that the model library is suitable
Appropriate spatial scales to achieve model output uncertainty goals
Booij, Martijn J.; Melching, Charles S.; Chen, Xiaohong; Chen, Yongqin; Xia, Jun; Zhang, Hailun
2008-01-01
Appropriate spatial scales of hydrological variables were determined using an existing methodology based on a balance in uncertainties from model inputs and parameters extended with a criterion based on a maximum model output uncertainty. The original methodology uses different relationships between
Development of the Artistic Supervision Model Scale (ASMS)
Kapusuzoglu, Saduman; Dilekci, Umit
2017-01-01
The purpose of the study is to develop the Artistic Supervision Model Scale in accordance with the perceptions of inspectors and of elementary and secondary school teachers on artistic supervision. The lack of a measuring instrument related to the model of artistic supervision in the literature reveals the necessity of such a study. 290…
Accounting for small scale heterogeneity in ecohydrologic watershed models
Burke, W.; Tague, C.
2017-12-01
Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach
Transdisciplinary application of the cross-scale resilience model
Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.
2014-01-01
The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.
Scale-free, axisymmetric galaxy models with little angular momentum
International Nuclear Information System (INIS)
Richstone, D.O.
1980-01-01
Two scale-free models of elliptical galaxies are constructed using a self-consistent field approach developed by Schwarzschild. Both models have concentric, oblate spheroidal, equipotential surfaces, with a logarithmic potential dependence on central distance. The axial ratio of the equipotential surfaces is 4:3, and the extent ratio of the density level surfaces is 2.5:1 (corresponding to an E6 galaxy). Each model satisfies the Poisson and steady-state Boltzmann equations for time scales of order 100 galactic years.
Drift-Scale Coupled Processes (DST and THC Seepage) Models
International Nuclear Information System (INIS)
Sonnenthale, E.
2001-01-01
The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [1534471]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: Performance Assessment (PA); Near-Field Environment (NFE) PMR; Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); and UZ Flow and Transport Process Model Report (PMR). The work scope for this activity is presented in the TWPs cited above, and summarized as follows: Continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are
Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales
Directory of Open Access Journals (Sweden)
Yonghe Zhang
2010-11-01
Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with the quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strength, charge density, and ionic potential. Based on the atomic electron configuration and various quantum-mechanically built-up dual parameters, the model forms a dual method of multiple-functional prediction, which has far more versatile applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.
Discontinuous Galerkin Subgrid Finite Element Method for Heterogeneous Brinkman’s Equations
Iliev, Oleg P.; Lazarov, Raytcho D.; Willems, Joerg
2010-01-01
We present a two-scale finite element method for solving Brinkman's equations with piecewise constant coefficients. This system of equations models fluid flow in highly porous, heterogeneous media with complex topology of the heterogeneities. We
Drift-Scale Coupled Processes (DST and THC Seepage) Models
International Nuclear Information System (INIS)
Dixon, P.
2004-01-01
The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC
Model Scaling of Hydrokinetic Ocean Renewable Energy Systems
von Ellenrieder, Karl; Valentine, William
2013-11-01
Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
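The paper applies its own non-dimensional dynamic scaling procedure, which is not reproduced here; as a simpler point of reference, classical Froude similitude for a free-surface model test gives the familiar ratios below (a textbook analogue, not the authors' method):

```python
import math

def froude_ratios(length_scale):
    """Model-to-prototype ratios under Froude similitude for a
    geometric scale lambda = L_model / L_prototype (same fluid)."""
    lam = length_scale
    return {
        "length": lam,
        "time": math.sqrt(lam),      # t_model / t_prototype
        "velocity": math.sqrt(lam),  # u_model / u_prototype
        "force": lam ** 3,           # F_model / F_prototype
    }
```

For the 10 m model depth versus the 400 m prototype quoted above, lambda = 1/40, so a 12 s prototype wave period corresponds to roughly 1.9 s at model scale.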
Scale modeling of reinforced concrete structures subjected to seismic loading
International Nuclear Information System (INIS)
Dove, R.C.
1983-01-01
Reinforced concrete, Category I structures are so large that the possibility of seismically testing the prototype structures under controlled conditions is essentially nonexistent. However, experimental data, from which important structural properties can be determined and against which existing and new methods of seismic analysis can be benchmarked, are badly needed. As a result, seismic experiments on scaled models are of considerable interest. In this paper, the scaling laws are developed in some detail so that assumptions and choices based on judgement can be clearly recognized and their effects discussed. The scaling laws developed are then used to design a reinforced concrete model of a Category I structure. Finally, how scaling is affected by various types of damping (viscous, structural, and Coulomb) is discussed.
Empirical spatial econometric modelling of small scale neighbourhood
Gerkman, Linda
2012-07-01
The aim of the paper is to model the small-scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing the small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated, and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application, on new house price data from Helsinki, Finland, we find the motivation for a spatial Durbin model, estimate the model, and interpret the estimates of the summary measures of impacts. The analysis shows that this model structure makes it possible to model and detect small-scale neighbourhood effects when we know that they exist but lack proper variables to measure them.
Quantum critical scaling of fidelity in BCS-like model
International Nuclear Information System (INIS)
Adamski, Mariusz; Jedrzejewski, Janusz; Krokhmalskii, Taras
2013-01-01
We study scaling of the ground-state fidelity in neighborhoods of quantum critical points in a model of interacting spinful fermions—a BCS-like model. Due to the exact diagonalizability of the model, in one and higher dimensions, scaling of the ground-state fidelity can be analyzed numerically with great accuracy, not only for small systems but also for macroscopic ones, together with the crossover region between them. Additionally, in the one-dimensional case we have been able to derive a number of analytical formulas for fidelity and show that they accurately fit our numerical results; these results are reported in the paper. Besides regular critical points and their neighborhoods, where well-known scaling laws are obeyed, there is the multicritical point and critical points in its proximity where anomalous scaling behavior is found. We also consider scaling of fidelity in neighborhoods of critical points where fidelity oscillates strongly as the system size or the chemical potential is varied. Our results for a one-dimensional version of a BCS-like model are compared with those obtained recently by Rams and Damski in similar studies of a quantum spin chain—an anisotropic XY model in a transverse magnetic field. (paper)
Energy Technology Data Exchange (ETDEWEB)
Kubo, Jisuke [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Yamada, Masatoshi [Department of Physics, Kyoto University,Kyoto 606-8502 (Japan); Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, 69120 Heidelberg (Germany)
2016-12-01
We assume that the origin of the electroweak (EW) scale is a gauge-invariant scalar-bilinear condensation in a strongly interacting non-abelian gauge sector, which is connected to the standard model via a Higgs portal coupling. The dynamical scale genesis appears as a phase transition at finite temperature, and it can produce a gravitational wave (GW) background in the early Universe. We find that the critical temperature of the scale phase transition lies above that of the EW phase transition and below a few hundred GeV, and that the transition is strongly first-order. We calculate the spectrum of the GW background and find that the scale phase transition is strong enough for the GW background to be observable by DECIGO.
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
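The abstract does not specify the hierarchical model itself; as a minimal illustration of why regularization is needed for large-scale covariance estimation, the sketch below shrinks the noisy sample covariance toward a diagonal target. This is a standard linear-shrinkage baseline, not the authors' Bayesian model:

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.3):
    """Linear shrinkage of the sample covariance toward a diagonal
    target: a standard regularizer for large-scale estimation
    (a stand-in for the paper's hierarchical Bayes model)."""
    S = np.cov(X, rowvar=False)       # p x p sample covariance
    target = np.diag(np.diag(S))      # keep variances, drop covariances
    return (1 - alpha) * S + alpha * target

# with p close to n the raw sample covariance is ill-conditioned;
# shrinkage stabilizes the eigenvalues and keeps the matrix positive definite
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 40))     # n=50 samples, p=40 variables
S_shrunk = shrinkage_covariance(X)
```

The blending weight `alpha` plays the same role that the prior plays in a hierarchical Bayes formulation: it trades estimator variance against bias toward a stable structure.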
Anomalous scaling in an age-dependent branching model.
Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin
2015-02-01
We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(-α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition (α = 1), tree depth grows as (log n)^2. This anomalous scaling is in good agreement with the trend observed in the evolution of biological species, thus providing theoretical support for age-dependent speciation and associating it with the occurrence of a critical point.
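The growth rule can be simulated directly. The sketch below assumes that at each step a leaf of age τ is selected for branching with weight τ^(-α); the published model may differ in detail, so treat this as an illustration of the depth transition rather than a reproduction of the paper's results:

```python
import random

def mean_leaf_depth(n, alpha, rng):
    """Grow a binary tree to n leaves; at each step a leaf of age tau
    is chosen to branch with weight tau**(-alpha) (an assumed reading
    of the age-dependent branching model)."""
    leaves = [(0, 0)]                       # (depth, birth step)
    for step in range(1, n):
        weights = [(step - birth) ** (-alpha) for _, birth in leaves]
        i = rng.choices(range(len(leaves)), weights=weights)[0]
        depth, _ = leaves.pop(i)
        leaves += [(depth + 1, step), (depth + 1, step)]
    return sum(d for d, _ in leaves) / len(leaves)

rng = random.Random(1)
d_random = mean_leaf_depth(1000, 0.0, rng)    # alpha=0: random-tree regime
d_critical = mean_leaf_depth(1000, 1.0, rng)  # alpha=1: anomalous regime
# mean depth grows much faster at the critical point than for random trees
```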
Macro-scale turbulence modelling for flows in porous media
International Nuclear Information System (INIS)
Pinson, F.
2006-03-01
This work deals with the macroscopic modeling of turbulence in porous media, which concerns heat exchangers and nuclear reactors as well as urban flows. The objective of this study is to describe, in a homogenized way, by means of a spatial average operator, turbulent flows in a solid matrix. In addition to this first operator, the use of a statistical average operator makes it possible to handle the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations of the kind of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and the modeling of turbulence at a macroscopic scale (Reynolds tensor and turbulent dispersion). To this aim, we build on the local modeling of turbulence, and more precisely on k-ε RANS models. The methodology for studying dispersion, derived from volume-averaging theory, is extended to turbulent flows. Its application includes the simulation, at a microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even in the turbulent regime, dispersion remains one of the dominant phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominant role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic turbulence modeling and leads us to define the sub-filter production and the wake dissipation. A macroscopic turbulence model is then derived, based on three balance equations: for the turbulent kinetic energy, the viscous dissipation, and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed. This model is then successfully applied to the study of
Cloud-In-Cell modeling of shocked particle-laden flows at a ``SPARSE'' cost
Taverniers, Soren; Jacobs, Gustaaf; Sen, Oishik; Udaykumar, H. S.
2017-11-01
A common tool for enabling process-scale simulations of shocked particle-laden flows is Eulerian-Lagrangian Particle-Source-In-Cell (PSIC) modeling where each particle is traced in its Lagrangian frame and treated as a mathematical point. Its dynamics are governed by Stokes drag corrected for high Reynolds and Mach numbers. The computational burden is often reduced further through a ``Cloud-In-Cell'' (CIC) approach which amalgamates groups of physical particles into computational ``macro-particles''. CIC does not account for subgrid particle fluctuations, leading to erroneous predictions of cloud dynamics. A Subgrid Particle-Averaged Reynolds-Stress Equivalent (SPARSE) model is proposed that incorporates subgrid interphase velocity and temperature perturbations. A bivariate Gaussian source distribution, whose covariance captures the cloud's deformation to first order, accounts for the particles' momentum and energy influence on the carrier gas. SPARSE is validated by conducting tests on the interaction of a particle cloud with the accelerated flow behind a shock. The cloud's average dynamics and its deformation over time predicted with SPARSE converge to their counterparts computed with reference PSIC models as the number of Gaussians is increased from 1 to 16. This work was supported by AFOSR Grant No. FA9550-16-1-0008.
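The amalgamation step behind the SPARSE source term can be sketched as follows: a group of physical particles is replaced by its mean position plus a bivariate Gaussian whose covariance tracks the cloud's deformation to first order, and that Gaussian footprint (rather than a point source) carries the particles' back-coupling onto the carrier-gas grid. Function names and numbers are illustrative, not from the SPARSE code:

```python
import numpy as np

def macro_particle(positions):
    """Amalgamate a cloud of physical particles into one macro-particle:
    mean position plus a covariance matrix that captures the cloud's
    deformation to first order."""
    return positions.mean(axis=0), np.cov(positions, rowvar=False)

def gaussian_source(X, Y, mu, cov, strength=1.0):
    """Deposit the macro-particle's coupling source on the carrier-gas
    grid with a bivariate Gaussian footprint instead of a single point."""
    inv = np.linalg.inv(cov)
    dx, dy = X - mu[0], Y - mu[1]
    q = inv[0, 0]*dx**2 + 2*inv[0, 1]*dx*dy + inv[1, 1]*dy**2
    return strength * np.exp(-0.5*q) / (2*np.pi*np.sqrt(np.linalg.det(cov)))

# a sheared particle cloud: deformation shows up as an anisotropic,
# correlated covariance rather than an isotropic blob
rng = np.random.default_rng(2)
cloud = rng.standard_normal((500, 2)) @ np.array([[1.0, 0.6], [0.0, 0.3]])
mu, cov = macro_particle(cloud)
X, Y = np.meshgrid(np.linspace(-4, 4, 64), np.linspace(-4, 4, 64))
S = gaussian_source(X, Y, mu, cov)   # integrates (approximately) to strength
```

Increasing the number of Gaussians per cloud, as in the convergence test described above, amounts to applying this amalgamation to smaller sub-groups of particles.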
Zhou, X.; Beljaars, A.; Wang, Y.; Huang, B.; Lin, C.; Chen, Y.; Wu, H.
2017-09-01
Weather Research and Forecasting (WRF) simulations with different selections of subgrid orographic drag over the Tibetan Plateau have been evaluated against observations and the ERA-Interim reanalysis. Results show that the subgrid orographic drag schemes, especially the turbulent orographic form drag (TOFD) scheme, efficiently reduce the 10 m wind speed bias and RMS error with respect to station measurements. With the combination of the gravity wave, flow blocking and TOFD schemes, wind speed is simulated more realistically than with any individual scheme alone. Improvements are also seen in the 2 m air temperature and surface pressure. The gravity wave drag, flow blocking drag, and TOFD schemes combined have the smallest station mean bias (-2.05°C in 2 m air temperature and 1.27 hPa in surface pressure) and RMS error (3.59°C in 2 m air temperature and 2.37 hPa in surface pressure). Meanwhile, the TOFD scheme contributes more to the improvements than the gravity wave drag and flow blocking schemes. The improvements are more pronounced at low levels of the atmosphere than at high levels, owing to the stronger drag enhancement on the low-level flow. The reduced near-surface cold bias and high-pressure bias over the Tibetan Plateau result from changes in the low-level wind components associated with geostrophic balance. The enhanced drag directly weakens the westerlies, but it also enhances the ageostrophic flow, in this case reducing the northerlies (so that less cold air arrives from the north) and enhancing the southerlies (which bring more warm air across the Himalayan ranges from South Asia) into the interior Tibetan Plateau.
Multi Scale Models for Flexure Deformation in Sheet Metal Forming
Directory of Open Access Journals (Sweden)
Di Pasquale Edmondo
2016-01-01
This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modeling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.
Scaling of Core Material in Rubble Mound Breakwater Model Tests
DEFF Research Database (Denmark)
Burcharth, H. F.; Liu, Z.; Troch, P.
1999-01-01
The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity vary in space and time, which makes it impossible to arrive at a fully correct scaling. The paper presents an empirical formula for the estimation of the wave-induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation, can be used for the estimation of pore velocities in cores. The paper proposes that the diameter of the core material in models be chosen in such a way that the Froude scale law holds for a characteristic pore velocity. The characteristic pore velocity is chosen as the average velocity of the most critical area in the core with respect to porous flow. Finally, the method is demonstrated.
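The proposed procedure can be sketched numerically: solve the Forchheimer equation for the prototype pore velocity, impose the Froude law u_m = u_p/√λ on that characteristic velocity, and search for the model stone diameter that reproduces it. The Engelund-type coefficient values below are assumed for illustration, not taken from the paper:

```python
import math

def pore_velocity(i, d, n=0.4, nu=1.0e-6, g=9.81, alpha=1000.0, beta=1.1):
    """Solve the Forchheimer equation i = a*u + b*u**2 for the pore
    velocity u, with Engelund-type coefficients; alpha, beta and the
    porosity n are illustrative values, not the paper's calibration."""
    a = alpha * (1 - n)**3 / n**2 * nu / (g * d**2)
    b = beta * (1 - n) / n**3 / (g * d)
    return (-a + math.sqrt(a*a + 4*b*i)) / (2*b)

def model_core_diameter(i, d_proto, length_scale):
    """Choose the model stone diameter so the characteristic pore
    velocity obeys the Froude law u_m = u_p/sqrt(length_scale); the
    (dimensionless) hydraulic gradient carries over unchanged."""
    u_target = pore_velocity(i, d_proto) / math.sqrt(length_scale)
    lo, hi = d_proto / (100 * length_scale), d_proto
    while hi - lo > 1e-9:                 # bisection: u grows with d
        mid = 0.5 * (lo + hi)
        if pore_velocity(i, mid) > u_target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# prototype: gradient 0.05 through 0.5 m core stones, model scale 1:30;
# the laminar term matters more in the model, so the resulting diameter
# exceeds 0.5/30, i.e. the core is scaled less than geometrically
d_model = model_core_diameter(0.05, 0.5, 30.0)
```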
Validity of the Neuromuscular Recovery Scale: a measurement model approach.
Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L
2015-08-01
Objective: To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Design: Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality. The Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Setting: Seven NeuroRecovery Network clinical sites. Participants: Outpatients (N=188) with spinal cord injury. Interventions: Not applicable. Main Outcome Measure: NRS. Results: While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Items not meeting the rating scale criteria appear to be related to the low frequency counts. Conclusions: The NRS met many of the Rasch model criteria for construct validity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
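For readers unfamiliar with the Rasch partial-credit model used here, the category probabilities for a single item follow a simple closed form; the step-difficulty values below are illustrative, not the NRS calibration:

```python
import math

def partial_credit_probs(theta, deltas):
    """Category probabilities for one item under the Rasch partial-credit
    model: P(X = k) is proportional to exp(sum_{j<=k}(theta - delta_j)),
    with an empty sum for k = 0. theta is person ability; deltas are the
    item's step difficulties (illustrative, not the NRS calibration)."""
    logits = [0.0]
    for delta in deltas:
        logits.append(logits[-1] + (theta - delta))
    z = [math.exp(l) for l in logits]
    return [v / sum(z) for v in z]

# an easy item (low step difficulties) vs a hard item such as Walking:
# the same person is far more likely to reach the top category of the easy one
easy = partial_credit_probs(theta=0.0, deltas=[-1.5, -0.5])
hard = partial_credit_probs(theta=0.0, deltas=[0.5, 1.5])
```

Item-difficulty ordering, person/item fit, and strata separation in the study are all diagnostics computed against this kind of model.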
A Network Contention Model for the Extreme-scale Simulator
Energy Technology Data Exchange (ETDEWEB)
Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL
2015-01-01
The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.
Use of genome-scale microbial models for metabolic engineering
DEFF Research Database (Denmark)
Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens
2004-01-01
Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models play an important role in phenotypic analysis, but can also be used for the design of optimal metabolic network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction.
Wind Farm Wake Models From Full Scale Data
DEFF Research Database (Denmark)
Knudsen, Torben; Bak, Thomas
2012-01-01
This investigation is part of the EU FP7 project “Distributed Control of Large-Scale Offshore Wind Farms”. The overall goal of this project is to develop wind farm controllers giving power set points to individual turbines in the farm in order to minimise mechanical loads and optimise power. One … on real full scale data. The modelling is based on the so-called effective wind speed. It is shown that there is a wake for a wind direction range of up to 20 degrees. Further, when accounting for the wind direction, it is shown that the two model structures considered can both fit the experimental data …
Ground-water solute transport modeling using a three-dimensional scaled model
International Nuclear Information System (INIS)
Crider, S.S.
1987-01-01
Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport
Grell, G. A.; Freitas, S. R.; Olson, J.; Bela, M.
2017-12-01
We will present a summary of the latest cumulus parameterization modeling efforts at NOAA's Earth System Research Laboratory (ESRL), on both regional and global scales. The physics package includes a scale-aware parameterization of subgrid cloudiness feedback to radiation (coupled PBL, microphysics, radiation, shallow and congestus-type convection), the stochastic Grell-Freitas (GF) scale- and aerosol-aware convective parameterization, and an aerosol-aware microphysics package. GF is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). It was expanded to include PDFs for vertical mass flux, as well as modifications to improve the diurnal cycle. This physics package will be used on different scales, spanning global to cloud-resolving, to look at the impact on scalar transport and numerical weather prediction.
Atomic scale modelling of materials of the nuclear fuel cycle
International Nuclear Information System (INIS)
Bertolus, M.
2011-10-01
This document written to obtain the French accreditation to supervise research presents the research I conducted at CEA Cadarache since 1999 on the atomic scale modelling of non-metallic materials involved in the nuclear fuel cycle: host materials for radionuclides from nuclear waste (apatites), fuel (in particular uranium dioxide) and ceramic cladding materials (silicon carbide). These are complex materials at the frontier of modelling capabilities since they contain heavy elements (rare earths or actinides), exhibit complex structures or chemical compositions and/or are subjected to irradiation effects: creation of point defects and fission products, amorphization. The objective of my studies is to bring further insight into the physics and chemistry of the elementary processes involved using atomic scale modelling and its coupling with higher scale models and experimental studies. This work is organised in two parts: on the one hand the development, adaptation and implementation of atomic scale modelling methods and validation of the approximations used; on the other hand the application of these methods to the investigation of nuclear materials under irradiation. This document contains a synthesis of the studies performed, orientations for future research, a detailed resume and a list of publications and communications. (author)
Spatiotemporal exploratory models for broad-scale survey data.
Fink, Daniel; Hochachka, Wesley M; Zuckerberg, Benjamin; Winkler, David W; Shaby, Ben; Munson, M Arthur; Hooker, Giles; Riedewald, Mirek; Sheldon, Daniel; Kelling, Steve
2010-12-01
The distributions of animal populations change and evolve through time. Migratory species exploit different habitats at different times of the year. Biotic and abiotic features that determine where a species lives vary due to natural and anthropogenic factors. This spatiotemporal variation needs to be accounted for in any modeling of species' distributions. In this paper we introduce a semiparametric model that provides a flexible framework for analyzing dynamic patterns of species occurrence and abundance from broad-scale survey data. The spatiotemporal exploratory model (STEM) adds essential spatiotemporal structure to existing techniques for developing species distribution models through a simple parametric structure without requiring a detailed understanding of the underlying dynamic processes. STEMs use a multi-scale strategy to differentiate between local and global-scale spatiotemporal structure. A user-specified species distribution model accounts for spatial and temporal patterning at the local level. These local patterns are then allowed to "scale up" via ensemble averaging to larger scales. This makes STEMs especially well suited for exploring distributional dynamics arising from a variety of processes. Using data from eBird, an online citizen science bird-monitoring project, we demonstrate that monthly changes in distribution of a migratory species, the Tree Swallow (Tachycineta bicolor), can be more accurately described with a STEM than with a conventional bagged decision tree model in which spatiotemporal structure has not been imposed. We also demonstrate that there is no loss of model predictive power when a STEM is used to describe a spatiotemporal distribution with very little spatiotemporal variation: the distribution of a nonmigratory species, the Northern Cardinal (Cardinalis cardinalis).
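The "scale up via ensemble averaging" idea can be sketched as follows: fit a local model inside the cells of several randomly offset grids, then average the cell-level predictions at each location. The local learner here is just the cell mean, a placeholder for the user-specified species distribution model of the paper:

```python
import numpy as np

def stem_predict(train_xy, train_z, query_xy, n_grids=20, block=0.3, seed=0):
    """STEM-style prediction sketch: fit a trivial local model (the cell
    mean) on randomly offset grids and ensemble-average the cell
    predictions at each query point."""
    rng = np.random.default_rng(seed)
    preds = np.zeros(len(query_xy))
    counts = np.zeros(len(query_xy))
    for _ in range(n_grids):
        offset = rng.uniform(0, block, size=2)        # randomly shifted grid
        tr = [tuple(c) for c in np.floor((train_xy + offset) / block).astype(int)]
        qu = [tuple(c) for c in np.floor((query_xy + offset) / block).astype(int)]
        sums, nums = {}, {}
        for cell, z in zip(tr, train_z):
            sums[cell] = sums.get(cell, 0.0) + z
            nums[cell] = nums.get(cell, 0) + 1
        for i, cell in enumerate(qu):
            if cell in sums:                          # local prediction
                preds[i] += sums[cell] / nums[cell]
                counts[i] += 1
    return preds / np.maximum(counts, 1)

rng = np.random.default_rng(3)
xy = rng.uniform(0, 1, size=(400, 2))
z = np.sin(3 * xy[:, 0]) + 0.1 * rng.standard_normal(400)  # smooth field + noise
zhat = stem_predict(xy, z, np.array([[0.5, 0.5]]))
```

Because each query point falls into differently placed cells on each grid, the ensemble average smooths over cell boundaries, which is the mechanism that lets local structure "scale up".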
Scaling and percolation in the small-world network model
Energy Technology Data Exchange (ETDEWEB)
Newman, M. E. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States); Watts, D. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States)
1999-12-01
In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society.
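The crossover the authors analyze is easy to reproduce in a toy computation: adding a small density of random shortcuts to a ring lattice collapses the O(n) lattice distance to small-world values. The sketch below uses the shortcut (Newman-Watts) variant of the model:

```python
import random
from collections import deque

def newman_watts(n, k=2, p=0.1, seed=0):
    """Ring lattice of n vertices, each linked to its k nearest neighbours
    on either side, plus random shortcuts added with probability p per
    lattice edge (the shortcut variant of the small-world model)."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n); adj[(i + j) % n].add(i)
            if rng.random() < p:                      # random shortcut
                a, b = rng.randrange(n), rng.randrange(n)
                if a != b:
                    adj[a].add(b); adj[b].add(a)
    return adj

def mean_distance(adj):
    """Average vertex-vertex distance via BFS from every vertex."""
    total, pairs = 0, 0
    for s in range(len(adj)):
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values()); pairs += len(dist) - 1
    return total / pairs

# a small density of shortcuts collapses the O(n) lattice distance
ell_lattice = mean_distance(newman_watts(200, p=0.0))
ell_small = mean_distance(newman_watts(200, p=0.1))
```

Sweeping `n` at fixed `p` (or vice versa) traces out the finite-size scaling of the average distance discussed in the abstract.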
Multilevel method for modeling large-scale networks.
Energy Technology Data Exchange (ETDEWEB)
Safro, I. M. (Mathematics and Computer Science)
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as the power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that apply randomization and replication to the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
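For one-dimensional ecological diffusion, u_t = (μ(x)u)_xx, the homogenized large-scale motility is the harmonic average of the small-scale motility, and the steady-state density varies locally as 1/μ(x). A minimal numerical illustration, with an assumed habitat pattern:

```python
import numpy as np

# Ecological diffusion u_t = (mu(x) u)_xx: homogenization replaces the
# rapidly varying motility mu(x) by its harmonic average, and the
# steady-state density varies locally as 1/mu(x) (animals accumulate
# where movement is slow). The habitat pattern is assumed for illustration.
mu = np.where(np.arange(100) % 10 < 3, 0.2, 1.0)   # 30% slow refuge, 70% fast matrix

mu_bar = 1.0 / np.mean(1.0 / mu)       # homogenized (harmonic-mean) motility
u_steady = mu_bar / mu                 # small-scale steady state, unit mean
```

The harmonic mean sits well below the arithmetic mean whenever slow habitat is present, which is exactly the small-scale effect the large-scale asymptotic equations must carry.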
2015-07-06
Conference proceedings: Anderson W, Li Q, Bou-Zeid E, 2014: Proc. of the American Geophysical Union, Fall Meeting, San Francisco, CA; Li Q, Bou-Zeid E, Anderson W, Grimmond S, 2014: Proc. of the American Physical Society, Division of Fluid Dynamics. The report also discusses the existence of hairpin packets (and "cane" structures: an inclined coherent parcel with only one leg of the hairpin [41]) around the low momentum …
ISS modeling strategy for the numerical simulation of turbulent sub-channel liquid-vapor flows
International Nuclear Information System (INIS)
Olivier Lebaigue; Benoit Mathieu; Didier Jamet
2005-01-01
Full text of publication follows: The general objective is to perform numerical simulation of the liquid-vapor turbulent two-phase flows that occur in sub-channels of a nuclear plant assembly under nominal or incident situations. Additional features concern nucleate boiling at the surface of fuel rods and the sliding of vapor bubbles on this surface, with possible dynamic contact lines. The Interfaces and Sub-grid Scales (ISS) modeling strategy for numerical simulations is one of the possible two-phase equivalents of the single-phase LES concept. It consists of solving the two-phase flow features at the scales that are resolved by the grid of the numerical method, and taking the unresolved scales into account with sub-grid models. Interfaces are tracked in a DNS-like approach, while specific features of the behavior of interfaces, such as contact-line physics, coalescence and fragmentation, and the smallest scales of turbulence within each phase, have an unresolved-scale part that is modeled. The problem of modeling the smallest scales of turbulence is rather simple, even if the classical situation is altered by the presence of the interfaces. In a typical sub-channel situation (e.g., a 15 MPa and 3.5 m/s water flow in a PWR sub-channel), the Kolmogorov scale is ca. 1 μm, whereas typical bubble sizes are expected to be close to 150 μm. Therefore, the use of a simple sub-grid model between, e.g., 1 and 20 μm allows a drastic reduction of the number of nodes in the space discretization, while it remains possible to validate by comparison with true DNS results. Other sub-grid models have been considered to recover physical phenomena that cannot be captured with a realistic discretization: they rely on physical scales from molecular size up to 1 μm. In these cases, the use of sub-grid models is no longer only a matter of saving CPU time and memory, but also a cornerstone for recovering physical behavior. From this point of view at least we are no longer performing true
Two-dimensional divertor modeling and scaling laws
International Nuclear Information System (INIS)
Catto, P.J.; Connor, J.W.; Knoll, D.A.
1996-01-01
Two-dimensional numerical models of divertors contain large numbers of dimensionless parameters that must be varied to investigate all operating regimes of interest. To simplify the task and gain insight into divertor operation, we employ similarity techniques to investigate whether model systems of equations plus boundary conditions in the steady state admit scaling transformations that lead to useful divertor similarity scaling laws. A short mean free path neutral-plasma model of the divertor region below the x-point is adopted in which all perpendicular transport is due to the neutrals. We illustrate how the results can be used to benchmark large computer simulations by employing a modified version of UEDGE which contains a neutral fluid model. (orig.)
Active Learning of Classification Models with Likert-Scale Feedback.
Xue, Yanbing; Hauskrecht, Milos
2017-01-01
Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected model change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solution. We show that the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
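The query-selection idea can be sketched as follows. This is a simplified illustration, not the authors' formulation: it replaces their Bayesian expectation and incremental SVM with a plain logistic model and a gradient-norm proxy for expected model change, and the Likert-to-probability mapping is invented:

```python
# Uncertainty-weighted active learning sketch: query the unlabeled example
# with the greatest expected model change, and turn Likert confidence into
# a soft label. All modeling choices here are illustrative simplifications.
import numpy as np

# Map a 5-point Likert confidence (1 = very unsure .. 5 = certain) to a
# probability that the annotated label is correct (hypothetical mapping).
LIKERT_TO_PROB = {1: 0.5, 2: 0.6, 3: 0.7, 4: 0.85, 5: 1.0}

def soft_label(label: int, likert: int) -> float:
    """Soft probability of the positive class given a label and Likert score."""
    p = LIKERT_TO_PROB[likert]
    return p if label == 1 else 1.0 - p

def expected_model_change(w: np.ndarray, x: np.ndarray) -> float:
    """Expected logistic-loss gradient norm at x, averaged over labels.

    The gradient for (x, y) is (p - y) * x, so averaging |p - y| over
    y ~ Bernoulli(p) gives 2 * p * (1 - p) * ||x||.
    """
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return 2.0 * p * (1.0 - p) * float(np.linalg.norm(x))

def select_query(w, pool):
    """Index of the pooled example with the largest expected model change."""
    return int(np.argmax([expected_model_change(w, x) for x in pool]))

w = np.zeros(2)
pool = [np.array([0.1, 0.0]), np.array([2.0, 1.0]), np.array([0.5, 0.5])]
print("query:", select_query(w, pool), "soft label:", soft_label(1, 3))
```

The soft label would then weight the update for the queried example, so confidently labeled points move the model more than unsure ones.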
Multi-scale Modeling of Plasticity in Tantalum.
Energy Technology Data Exchange (ETDEWEB)
Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)
2015-12-01
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single-crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behavior are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model against experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct
International Nuclear Information System (INIS)
Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B
2013-01-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high-quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large-scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large-scale modelling of LWFA, demonstrating speedups of over one order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large-scale modelling of LWFA scenarios. (paper)
Optogenetic stimulation of a meso-scale human cortical model
Selvaraj, Prashanth; Szeri, Andrew; Sleigh, Jamie; Kirsch, Heidi
2015-03-01
Neurological phenomena like sleep and seizures depend not only on the activity of individual neurons, but on the dynamics of neuron populations as well. Meso-scale models of cortical activity provide a means to study neural dynamics at the level of neuron populations. Additionally, they offer a safe and economical way to test the effects and efficacy of stimulation techniques on the dynamics of the cortex. Here, we use a physiologically relevant meso-scale model of the cortex to study the hypersynchronous activity of neuron populations during epileptic seizures. The model consists of a set of stochastic, highly non-linear partial differential equations. Next, we use optogenetic stimulation to control seizures in a hyperexcited cortex, and to induce seizures in a normally functioning cortex. The high spatial and temporal resolution this method offers makes a strong case for the use of optogenetics in treating meso-scale cortical disorders such as epileptic seizures. We use bifurcation analysis to investigate the effect of optogenetic stimulation in the meso-scale model, and its efficacy in suppressing the non-linear dynamics of seizures.
Small-Scale Helicopter Automatic Autorotation : Modeling, Guidance, and Control
Taamallah, S.
2015-01-01
Our research objective consists of developing a model-based automatic safety recovery system for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e. an engine-OFF flight condition, that safely flies and lands the helicopter at a pre-specified ground location. In pursuit
Phenomenological aspects of no-scale inflation models
Energy Technology Data Exchange (ETDEWEB)
Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics,King’s College London,WC2R 2LS London (United Kingdom); Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Garcia, Marcos A.G. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics andAstronomy, Texas A& M University,College Station, 77843 Texas (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), Mitchell Campus, Woodlands, 77381 Texas (United States); Academy of Athens, Division of Natural Sciences, 28 Panepistimiou Avenue, 10679 Athens (Greece); Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States)
2015-10-01
We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m_0=B_0=A_0=0, of the CMSSM type with universal A_0 and m_0≠0 at a high scale, and of the mSUGRA type with A_0=B_0+m_0 boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m_1/2≠0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.
Modeling and simulation in tribology across scales: An overview
DEFF Research Database (Denmark)
Vakis, A.I.; Yastrebov, V.A.; Scheibert, J.
2018-01-01
theories at the nano- and micro-scales, as well as multiscale and multiphysics aspects for analytical and computational models relevant to applications spanning a variety of sectors, from automotive to biotribology and nanotechnology. Significant effort is still required to account for complementary...
Large scale solar district heating. Evaluation, modelling and designing - Appendices
Energy Technology Data Exchange (ETDEWEB)
Heller, A.
2000-07-01
The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)
Vegetable parenting practices scale: Item response modeling analyses
Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...
Scale-invariant inclusive spectra in a dual model
International Nuclear Information System (INIS)
Chikovani, Z.E.; Jenkovsky, L.L.; Martynov, E.S.
1979-01-01
One-particle inclusive distributions at large transverse momentum p_T are shown to scale as E dσ/d³p ≈ p_T^(-N) (1 - x_T)^(1+N/2) ln p_T in a dual model with Mandelstam analyticity if the Regge trajectories are asymptotically logarithmic
Learning in an estimated medium-scale DSGE model
Czech Academy of Sciences Publication Activity Database
Slobodyan, Sergey; Wouters, R.
2012-01-01
Vol. 36, No. 1 (2012), pp. 26-46, ISSN 0165-1889. R&D Projects: GA ČR(CZ) GCP402/11/J018. Institutional support: PRVOUK-P23. Keywords: constant-gain adaptive learning * medium-scale DSGE model * DSGE-VAR. Subject RIV: AH - Economics. Impact factor: 0.807, year: 2012
Directory of Open Access Journals (Sweden)
Laura Casas
Full Text Available The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation.
Casas, Laura; Szűcs, Réka; Vij, Shubha; Goh, Chin Heng; Kathiresan, Purushothaman; Németh, Sándor; Jeney, Zsigmond; Bercsényi, Miklós; Orbán, László
2013-01-01
The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation. 2013 Casas et al.
Casas, Laura
2013-12-30
The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation. 2013 Casas et al.
Including investment risk in large-scale power market models
DEFF Research Database (Denmark)
Lemming, Jørgen Kjærgaard; Meibom, P.
2003-01-01
Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...
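The iterative interaction between an equilibrium model and a risk-adjustment module can be sketched as a fixed-point loop. Everything numeric here (the linear demand curve, the marginal cost, and the risk-premium rule) is an invented toy, not the paper's model:

```python
# Toy fixed-point sketch of the iterative scheme described in the abstract:
# alternate a partial-equilibrium step and a separate risk-adjustment step
# until the capacity choice stops changing. All numbers are invented.

def equilibrium_price(capacity: float) -> float:
    """Market-clearing price from a linear inverse demand curve (toy)."""
    return max(100.0 - 0.1 * capacity, 0.0)

def risk_adjusted_capacity(price: float, risk_premium: float) -> float:
    """Capacity built when investors require cost plus a risk premium (toy)."""
    marginal_cost = 20.0
    return max((price - marginal_cost - risk_premium) * 4.0, 0.0)

def iterate(risk_premium: float, tol: float = 1e-9, max_iter: int = 1000) -> float:
    """Alternate the two modules until the capacity converges."""
    capacity = 0.0
    for _ in range(max_iter):
        new_capacity = risk_adjusted_capacity(
            equilibrium_price(capacity), risk_premium)
        if abs(new_capacity - capacity) < tol:
            return new_capacity
        capacity = new_capacity
    return capacity

print(round(iterate(0.0), 2), round(iterate(10.0), 2))
```

With these toy numbers a higher risk premium lowers the converged equilibrium capacity, illustrating why ignoring investment risk overstates investment in a liberalised market.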
Application of physical scaling towards downscaling climate model precipitation data
Gaur, Abhishek; Simonovic, Slobodan P.
2018-04-01
The physical scaling (SP) method downscales climate model data to local or regional scales taking into consideration physical characteristics of the area under analysis. In this study, multiple SP-based models are tested for their effectiveness in downscaling North American Regional Reanalysis (NARR) daily precipitation data. Model performance is compared with two state-of-the-art downscaling methods: the statistical downscaling model (SDSM) and generalized linear modeling (GLM). The downscaled precipitation is evaluated with reference to recorded precipitation at 57 gauging stations located within the study region. The spatial and temporal robustness of the downscaling methods is evaluated using seven precipitation-based indices. Results indicate that SP-based models perform best in downscaling precipitation, followed by GLM and then the SDSM model. The best-performing models are thereafter used to downscale future precipitation projections made by three global circulation models (GCMs) following two emission scenarios, representative concentration pathway (RCP) 2.6 and RCP 8.5, over the twenty-first century. The downscaled future precipitation projections indicate an increase in mean and maximum precipitation intensity as well as a decrease in the total number of dry days. Further, an increase in the frequency of short (1-day), moderately long (2-4 day), and long (more than 5-day) precipitation events is projected.
Modelling Planck-scale Lorentz violation via analogue models
International Nuclear Information System (INIS)
Weinfurtner, Silke; Liberati, Stefano; Visser, Matt
2006-01-01
Astrophysical tests of Planck-suppressed Lorentz violations have been extensively studied in recent years and very stringent constraints have been obtained within the framework of effective field theory. There are, however, still some unresolved theoretical issues, in particular regarding the so-called 'naturalness problem', which arises when postulating that Planck-suppressed Lorentz violations arise only from operators with mass dimension greater than four in the Lagrangian. In the work presented here we shall try to address this problem by looking at a condensed-matter analogue of the Lorentz violations considered in quantum gravity phenomenology. Specifically, we investigate the class of two-component BECs subject to laser-induced transitions between the two components, and we show that this model is an example of Lorentz invariance violation due to ultraviolet physics. We shall show that such a model can be considered to be an explicit example of high-energy Lorentz violation where the 'naturalness problem' does not arise
Towards scale-independent land-surface flux estimates in Noah-MP
Thober, Stephan; Mizukami, Naoki; Samaniego, Luis; Attinger, Sabine; Clark, Martyn; Cuntz, Matthias
2017-04-01
Land-surface models use a variety of process representations to calculate terrestrial energy, water and biogeochemical fluxes. These process descriptions are usually derived from point measurements which are, in turn, scaled to much larger resolutions ranging from 1 km in catchment hydrology to 100 km in climate modelling. Both hydrologic and climate models are nowadays run at different spatial resolutions, using exactly the same land-surface representations. A fundamental criterion for the physical consistency of land-surface simulations across scales is that a flux estimated over a given area is independent of the spatial model resolution (i.e., the flux-matching criterion). The Noah-MP land-surface model considers only one soil and land cover type per model grid cell without any representation of their subgrid variability, implying weak flux-matching. A fractional approach simulates the subgrid variability, but it requires a higher computational demand than using effective parameters and is used only for land cover in current land-surface schemes. A promising approach to derive scale-independent parameters is the Multiscale Parameter Regionalization (MPR) technique, which consists of two steps: first, it applies transfer functions directly to high-resolution data (such as 100 m soil maps) to derive high-resolution model parameter fields, acknowledging the full subgrid variability. Second, it upscales these high-resolution parameter fields to the model resolution by using appropriate upscaling operators. MPR has been shown to substantially improve the scalability of the mesoscale Hydrologic Model mHM (Samaniego et al., 2010 WRR). Here, we apply the MPR technique to the Noah-MP land-surface model for a large sample of basins distributed across the contiguous USA. Specifically, we evaluate the flux-matching criterion for several hydrologic fluxes such as evapotranspiration and drainage at scales ranging from 3 km to 48 km. We investigate the impact of different
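The two MPR steps described above can be sketched in a few lines. The pedo-transfer function and the grid sizes are invented placeholders; mHM and the Noah-MP application use calibrated transfer functions and parameter-specific upscaling operators:

```python
# Sketch of the two MPR steps: (1) a transfer function applied to a
# high-resolution predictor field, (2) upscaling of the resulting
# parameter field to the model grid. The transfer function and the
# grid sizes are invented placeholders.
import numpy as np

def transfer_function(sand_fraction: np.ndarray) -> np.ndarray:
    """Hypothetical pedo-transfer rule: porosity from sand fraction."""
    return 0.5 - 0.2 * sand_fraction

def upscale_block_mean(field: np.ndarray, factor: int) -> np.ndarray:
    """Arithmetic-mean upscaling operator over square blocks."""
    n, m = field.shape
    return field.reshape(n // factor, factor, m // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
sand_hi = rng.uniform(0.0, 1.0, size=(8, 8))     # stand-in for a 100 m soil map
porosity_hi = transfer_function(sand_hi)          # step 1: high-res parameters
porosity_model = upscale_block_mean(porosity_hi, factor=4)  # step 2: model grid
print(porosity_model.shape)  # (2, 2)
```

Because the toy transfer function is linear, applying it before or after upscaling gives the same result here; for nonlinear transfer functions the order matters, which is precisely why MPR applies the transfer function at high resolution first.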
Chacón Rebollo, Tomás; Dia, Ben Mansour
2015-01-01
This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales, due to the addition of the sub-grid spectral scales.
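The truncation idea can be illustrated with the simplest possible case: the operator -d²/dx² on (0, 1) with homogeneous Dirichlet conditions, whose eigenfunctions √2·sin(kπx) are known in closed form. This is an illustrative stand-in for the convection-diffusion eigenfunctions computed in the paper:

```python
# Truncated spectral expansion on one element: project a function onto the
# first M eigenmodes of -d2/dx2 on (0, 1) with Dirichlet conditions,
# phi_k(x) = sqrt(2) * sin(k*pi*x), a closed-form stand-in for the
# operator eigenfunctions used in the paper.
import numpy as np

def spectral_truncation(f_values: np.ndarray, x: np.ndarray, modes: int) -> np.ndarray:
    """Truncated expansion sum_k <f, phi_k> phi_k on a uniform grid."""
    dx = x[1] - x[0]
    approx = np.zeros_like(f_values)
    for k in range(1, modes + 1):
        phi = np.sqrt(2.0) * np.sin(k * np.pi * x)   # orthonormal in L2(0, 1)
        coeff = np.sum(f_values * phi) * dx           # quadrature of <f, phi_k>
        approx += coeff * phi
    return approx

x = np.linspace(0.0, 1.0, 2001)
f = x * (1.0 - x)          # smooth test function vanishing at the endpoints
errors = [float(np.max(np.abs(f - spectral_truncation(f, x, m)))) for m in (1, 3, 5)]
print(errors)  # max error shrinks as more sub-grid modes are kept
```

Keeping more modes systematically shrinks the unresolved remainder, which is the sub-grid part the VMS-spectral method models after truncation.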
Chacón Rebollo, Tomás
2015-03-01
This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales, due to the addition of the sub-grid spectral scales.
Modeling and Simulation Techniques for Large-Scale Communications Modeling
National Research Council Canada - National Science Library
Webb, Steve
1997-01-01
... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
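The synchronization idea, usually called common random numbers, can be sketched with a toy single-server queue. The queueing model and its parameters are invented for illustration; the point is only that both configurations consume the same seeded stream, so their difference is not masked by sampling noise:

```python
# Common random numbers: drive two configurations of a toy single-server
# queue with the same seeded stream to make the comparison a paired one.
# The queue model and its parameters are invented for illustration.
import random

def simulate_mean_wait(service_rate: float, n_customers: int, seed: int) -> float:
    """Mean waiting time with exponential inter-arrivals (rate 1) and services."""
    rng = random.Random(seed)           # dedicated, reproducible stream
    clock = next_free = wait_total = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(1.0)                       # arrival
        start = max(clock, next_free)
        wait_total += start - clock
        next_free = start + rng.expovariate(service_rate)   # service
    return wait_total / n_customers

# Same seed => both runs see identical arrivals and proportional services.
slow = simulate_mean_wait(service_rate=1.2, n_customers=5000, seed=42)
fast = simulate_mean_wait(service_rate=2.0, n_customers=5000, seed=42)
print(fast < slow)
```

Because draws are consumed in the same order in both runs, the faster configuration is compared against the slower one on identical arrival streams, which is what makes the comparative study cheap and the subsequent experiment design informative.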
Drift-Scale Coupled Processes (DST and THC Seepage) Models
Energy Technology Data Exchange (ETDEWEB)
E. Sonnenthal; N. Spycher
2001-02-05
The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: (1) Performance Assessment (PA); (2) Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); (3) UZ Flow and Transport Process Model Report (PMR); and (4) Near-Field Environment (NFE) PMR. The work scope for this activity is presented in the TWPs cited above, and summarized as follows: continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data
New Models and Methods for the Electroweak Scale
Energy Technology Data Exchange (ETDEWEB)
Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac
Chemical theory and modelling through density across length scales
International Nuclear Information System (INIS)
Ghosh, Swapan K.
2016-01-01
One of the concepts that has played a major role in the conceptual as well as computational developments covering all the length scales of interest in a number of areas of chemistry, physics, chemical engineering and materials science is the concept of single-particle density. Density functional theory has been a versatile tool for the description of many-particle systems across length scales. Thus, at the microscopic length scale, an electron-density-based description has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids. The density concept has been used, in the form of the single-particle number density, at the intermediate mesoscopic length scale to obtain an appropriate picture of the equilibrium and dynamical processes, dealing with a wide class of problems involving interfacial science and soft condensed matter. At the macroscopic length scale, however, matter is usually treated as a continuous medium and a description using local mass density, energy density and other related property density functions has been found to be quite appropriate. The basic ideas underlying the versatile uses of the concept of density in the theory and modelling of materials and phenomena, as visualized across length scales, along with selected illustrative applications to some recent areas of research on hydrogen energy, soft matter, nucleation phenomena, isotope separation, and separation of mixtures in the condensed phase, will form the subject matter of the talk. (author)
Extending SME to Handle Large-Scale Cognitive Modeling.
Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre
2017-07-01
Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these extensions enable SME to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
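The greedy-merging idea named in item (a) can be sketched in a few lines (a minimal illustration of the general technique, not the actual SME implementation; the match hypotheses, scores, and conflict set below are hypothetical):

```python
# Sketch of greedy merging of match hypotheses into one interpretation:
# take hypotheses in order of decreasing structural score and skip any
# that conflict with what has already been accepted. Sorting dominates
# the cost, keeping the whole pass polynomial.

def greedy_merge(matches, conflicts):
    """matches: dict name -> score; conflicts: set of frozenset pairs."""
    interpretation, total = [], 0.0
    for name, score in sorted(matches.items(), key=lambda kv: -kv[1]):
        # Accept only hypotheses consistent with everything kept so far.
        if all(frozenset((name, kept)) not in conflicts for kept in interpretation):
            interpretation.append(name)
            total += score
    return interpretation, total

# Hypothetical hypotheses for a solar-system/atom analogy; "sun" cannot
# map to both "nucleus" and "electron", so those two conflict.
matches = {"sun->nucleus": 0.9, "planet->electron": 0.8, "sun->electron": 0.4}
conflicts = {frozenset(("sun->nucleus", "sun->electron"))}
best, score = greedy_merge(matches, conflicts)
```

The real SME additionally merges structurally consistent kernels rather than single correspondences, but the greedy, score-ordered selection is the point being illustrated.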
The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...
Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales
International Nuclear Information System (INIS)
Krstic, Predrag S.
2014-01-01
Discussion of the accuracy of AMO data to be used in plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude of energy, spatial and temporal scales. Energies run from tens of K to hundreds of millions of K, while temporal and spatial scales go from fs to years and from nm to m and beyond, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an “integrated AMO science” (IAMO). The principal goal of IAMO science is to enable accurate studies of the interactions of electrons, atoms, molecules and photons in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data depends strongly on the sensitivity of the respective plasma modeling applications to these data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, data provision to the plasma modeling community is a “two-way road” as far as the accuracy of the data is concerned, requiring close interactions between the AMO and plasma modeling communities.
Modeling fast and slow earthquakes at various scales.
Ide, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.
Testing of materials and scale models for impact limiters
International Nuclear Information System (INIS)
Maji, A.K.; Satpathi, D.; Schryer, H.L.
1991-01-01
Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and on the design and testing of scale models made out of these "impact limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation, and compared with static test results. A scale model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program; additional information will be available and reported at the time of presentation of the paper.
Coalescing colony model: Mean-field, scaling, and geometry
Carra, Giulia; Mallick, Kirone; Barthelemy, Marc
2017-12-01
We analyze the coalescing model where a 'primary' colony grows and randomly emits secondary colonies that spread and eventually coalesce with it. This model describes population proliferation in theoretical ecology and tumor growth, and is also of great interest for modeling urban sprawl. Assuming the primary colony to be always circular of radius r(t) and the emission rate proportional to r(t)^θ, where θ ≥ 0, we derive the mean-field equations governing the dynamics of the primary colony, calculate the scaling exponents versus θ, and compare our results with numerical simulations. We then critically test the validity of the circular approximation for the colony shape and show that it is sound for a constant emission rate (θ = 0). However, when the emission rate is proportional to the perimeter, the circular approximation breaks down and the roughness of the primary colony cannot be discarded, thus modifying the scaling exponents.
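The emission-and-coalescence bookkeeping described in the abstract can be sketched as a toy simulation (our illustration under simplifying assumptions: unit interface growth speed, area-conserving mergers, and made-up emission constants; not the authors' code):

```python
import math
import random

# Toy coalescing-colony sketch: a circular primary colony of radius r
# grows at unit speed and emits secondary colonies at a rate proportional
# to r**theta. Secondaries grow too and are absorbed (area-conserving
# coalescence) once they touch the primary.

def simulate(theta=0.0, rate=0.05, steps=2000, dt=0.1, seed=1):
    rng = random.Random(seed)
    r = 1.0                       # primary colony radius
    secondaries = []              # list of [distance_from_centre, radius]
    for _ in range(steps):
        r += dt                   # interface growth at unit speed
        # Emission with per-step probability ~ rate * r**theta * dt
        if rng.random() < min(1.0, rate * r**theta * dt):
            secondaries.append([r + rng.uniform(5.0, 20.0), 0.0])
        survivors = []
        for s in secondaries:
            s[1] += dt            # secondary colonies also grow
            if s[1] + r >= s[0]:  # touching: coalesce, conserving area
                r = math.sqrt(r * r + s[1] * s[1])
            else:
                survivors.append(s)
        secondaries = survivors
    return r

r_const = simulate(theta=0.0)   # constant emission rate
r_perim = simulate(theta=1.0)   # perimeter-proportional emission
```

With perimeter-proportional emission (θ = 1) many more secondaries are emitted and absorbed, so the primary ends up larger than in the constant-rate case, which is the regime where the paper finds the circular approximation breaking down.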
Lepton Dipole Moments in Supersymmetric Low-Scale Seesaw Models
Ilakovac, Amon; Popov, Luka
2014-01-01
We study the anomalous magnetic and electric dipole moments of charged leptons in supersymmetric low-scale seesaw models with right-handed neutrino superfields. We consider a minimally extended framework of minimal supergravity, by assuming that CP violation originates from complex soft SUSY-breaking bilinear and trilinear couplings associated with the right-handed sneutrino sector. We present numerical estimates of the muon anomalous magnetic moment and the electron electric dipole moment (EDM), as functions of key model parameters, such as the Majorana mass scale $m_N$ and $\tan\beta$. In particular, we find that the contributions of the singlet heavy neutrinos and sneutrinos to the electron EDM are naturally small in this model, of order $10^{-27} - 10^{-28}$ e cm, and can be probed in present and future experiments.
Multiresolution comparison of precipitation datasets for large-scale models
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
Multi-scale climate modelling over Southern Africa using a variable-resolution global model
CSIR Research Space (South Africa)
Engelbrecht, FA
2011-12-01
Full Text Available Multi-scale climate modelling over Southern Africa using a variable-resolution global model. FA Engelbrecht, WA Landman, CJ Engelbrecht, S Landman, MM Bopape, B Roux, JL McGregor and M Thatcher. Keywords: multi-scale climate modelling, variable-resolution atmospheric model. Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales.
Performance prediction of industrial centrifuges using scale-down models.
Boychyn, M; Yim, S S S; Bulmer, M; More, J; Bracewell, D G; Hoare, M
2004-12-01
Computational fluid dynamics was used to model the high flow forces found in the feed zone of a multichamber-bowl centrifuge and to reproduce these in a small, high-speed rotating disc device. Linking the device to scale-down centrifugation permitted good estimation of the performance of various continuous-flow centrifuges (disc stack, multichamber bowl, CARR Powerfuge) for shear-sensitive protein precipitates. Critically, the ultra scale-down centrifugation process proved to be a much more accurate predictor of production multichamber-bowl performance than was the pilot centrifuge.
Design and Modelling of Small Scale Low Temperature Power Cycles
DEFF Research Database (Denmark)
Wronski, Jorrit
The work presented in this report contributes to the state of the art within design and modelling of small scale low temperature power cycles. The study is divided into three main parts: (i) fluid property evaluation, (ii) expansion device investigations and (iii) heat exchanger performance......-oriented Modelica code and was included in the ThermoCycle framework for small scale ORC systems. Special attention was paid to the valve system and a control method for variable expansion ratios was introduced based on a cogeneration scenario. Admission control based on evaporator and condenser conditions...
Matrix models, Argyres-Douglas singularities and double scaling limits
International Nuclear Information System (INIS)
Bertoldi, Gaetano
2003-01-01
We construct an N = 1 theory with gauge group U(nN) and degree n+1 tree level superpotential whose matrix model spectral curve develops an Argyres-Douglas singularity. The calculation of the tension of domain walls in the U(nN) theory shows that the standard large-N expansion breaks down at the Argyres-Douglas points, with tension that scales as a fractional power of N. Nevertheless, it is possible to define appropriate double scaling limits which are conjectured to yield the tension of 2-branes in the resulting N = 1 four dimensional non-critical string theories as proposed by Ferrari. (author)
Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale
Sobolev, S. V.; Muldashev, I. A.
2015-12-01
Subduction is an essentially multi-scale process with time-scales spanning from geological to earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with a total time of millions of years. This technique allows one to follow in detail the deformation process during the entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations, and demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge, with viscosity strongly varying in time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. We will also present results of the modeling of deformation of the
Y-Scaling in a simple quark model
International Nuclear Information System (INIS)
Kumano, S.; Moniz, E.J.
1988-01-01
A simple quark model is used to define a nuclear pair model, that is, two composite hadrons interacting only through quark interchange and bound in an overall potential. An "equivalent" hadron model is developed, displaying an effective hadron-hadron interaction which is strongly repulsive. We compare the effective hadron model results with the exact quark model observables in the kinematic region of large momentum transfer and small energy transfer. The nucleon response function in this y-scaling region is, within the traditional framework, sensitive to the nucleon momentum distribution at large momentum. We find a surprisingly small effect of hadron substructure. Furthermore, we find in our model that a simple parametrization of modified hadron size in the bound state, motivated by the bound quark momentum distribution, is not a useful way to correlate different observables.
Site-scale groundwater flow modelling of Beberg
Energy Technology Data Exchange (ETDEWEB)
Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden); Walker, D. [Duke Engineering and Services (United States); Hartley, L. [AEA Technology, Harwell (United Kingdom)
1999-08-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant head boundary conditions from a modified version of the deterministic regional scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10⁻⁴, and a flow-wetted surface of a_r = 1.0 m²/(m³ rock) suggest the following statistics for the Base Case: the median travel time is 56 years, the median canister flux is 1.2 x 10⁻³ m/year, and the median F-ratio is 5.6 x 10⁵ year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. Variability within realisations indicates
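The Monte Carlo propagation from conductivity variability to travel-time statistics can be illustrated with a minimal sketch (our illustration only: the log-normal parameters, path length and gradient below are stand-in values, not the SR 97 site data, and this is not the HYDRASTAR code):

```python
import numpy as np

# Propagate a log-normal hydraulic conductivity K through Darcy's law to
# advective travel times and F-ratios at a representative canister
# position, then summarize the realisations by their medians.

rng = np.random.default_rng(0)
n = 101                 # realisations (odd, so the median is an actual sample)
L = 500.0               # advective path length [m]           (stand-in)
gradient = 1e-3         # hydraulic gradient [-]              (stand-in)
porosity = 1e-4         # flow porosity eps_f [-]             (as in Base Case)
a_r = 1.0               # flow-wetted surface [m^2/(m^3 rock)] (as in Base Case)
SEC_PER_YEAR = 3.15e7

K = rng.lognormal(mean=np.log(1e-8), sigma=1.5, size=n)  # conductivity [m/s]
q = K * gradient                              # Darcy flux [m/s]
t = porosity * L / q / SEC_PER_YEAR           # travel time [years]
F = a_r * L / q / SEC_PER_YEAR                # F-ratio [years/m]

median_t, median_q, median_F = np.median(t), np.median(q), np.median(F)
```

Because travel time is a monotone function of the flux, the median travel time corresponds exactly to the median-flux realisation, which is the kind of summary statistic the study reports.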
Hydrological Modelling of Small Scale Processes in a Wetland Habitat
DEFF Research Database (Denmark)
Johansen, Ole; Jensen, Jacob Birk; Pedersen, Morten Lauge
2009-01-01
Numerical modelling of the hydrology in a Danish rich fen area has been conducted. By collecting various data in the field the model has been successfully calibrated and the flow paths as well as the groundwater discharge distribution have been simulated in detail. The results of this work have...... shown that distributed numerical models can be applied to local scale problems and that natural springs, ditches, the geological conditions as well as the local topographic variations have a significant influence on the flow paths in the examined rich fen area....
Site-scale groundwater flow modelling of Beberg
International Nuclear Information System (INIS)
Gylling, B.; Walker, D.; Hartley, L.
1999-08-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant head boundary conditions from a modified version of the deterministic regional scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10⁻⁴, and a flow-wetted surface of a_r = 1.0 m²/(m³ rock) suggest the following statistics for the Base Case: the median travel time is 56 years, the median canister flux is 1.2 x 10⁻³ m/year, and the median F-ratio is 5.6 x 10⁵ year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. Variability within realisations indicates that the change in hydraulic gradient
A No-Scale Inflationary Model to Fit Them All
Ellis, John; Nanopoulos, Dimitri; Olive, Keith
2014-01-01
The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic $m^2 \phi^2/2$ potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio $r$ that is highly consistent with the Starobinsky $R + R^2$ model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction $n_s \simeq 0.96$.
Directory of Open Access Journals (Sweden)
I. Gouttevin
2012-04-01
Full Text Available Soil freezing is a major feature of boreal regions with substantial impact on climate. The present paper describes the implementation of the thermal and hydrological effects of soil freezing in the land surface model ORCHIDEE, which includes a physical description of continental hydrology. The new soil freezing scheme is evaluated against analytical solutions and in-situ observations at a variety of scales in order to test its numerical robustness, explore its sensitivity to parameterization choices and confront its performance to field measurements at typical application scales.
Our soil freezing model exhibits a low sensitivity to the vertical discretization for spatial steps in the range of a few millimetres to a few centimetres. It is however sensitive to the temperature interval around the freezing point where phase change occurs, which should be 1 °C to 2 °C wide. Furthermore, linear and thermodynamical parameterizations of the liquid water content lead to similar results in terms of water redistribution within the soil and thermal evolution under freezing. Our approach does not allow firm discrimination of the performance of one approach over the other.
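The two liquid-water-content parameterizations being compared can be sketched side by side (generic textbook forms with made-up constants, chosen only to illustrate the comparison; these are not the ORCHIDEE formulations):

```python
# Two common ways to parameterize the liquid (unfrozen) water fraction of
# freezing soil: a linear ramp over the phase-change temperature interval,
# versus a thermodynamic freezing-point-depression power law.

def linear_liquid_fraction(T, T_f=0.0, dT=2.0):
    """Liquid fraction ramping linearly from 1 at T_f to 0 at T_f - dT (degC)."""
    return min(1.0, max(0.0, 1.0 + (T - T_f) / dT))

def thermodynamic_liquid_fraction(T, T_f=0.0, a=0.5, b=0.6):
    """Power-law unfrozen water fraction, w = a * |T - T_f|**(-b) below T_f."""
    if T >= T_f:
        return 1.0
    return min(1.0, a * (T_f - T) ** (-b))

temps = [0.5, 0.0, -0.5, -1.0, -2.0, -5.0]
lin = [linear_liquid_fraction(T) for T in temps]
thermo = [thermodynamic_liquid_fraction(T) for T in temps]
```

Both spread the phase change over a finite temperature interval below the freezing point; the thermodynamic form retains a long unfrozen-water tail at low temperatures, which is the property the paper finds preferable for the hydrology of the large Siberian basins.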
The new soil freezing scheme considerably improves the representation of runoff and river discharge in regions underlain by permafrost or subject to seasonal freezing. A thermodynamical parameterization of the liquid water content appears more appropriate for an integrated description of the hydrological processes at the scale of the vast Siberian basins. The use of a subgrid variability approach and the representation of wetlands could help capture the features of the Arctic hydrological regime with more accuracy.
The modeling of the soil thermal regime is generally improved by the representation of soil freezing processes. In particular, the dynamics of the active layer is captured with more accuracy, which is of crucial importance in the prospect of
Hessel, R.; Tenge, A.J.M.
2008-01-01
To reduce soil erosion, soil and water conservation (SWC) methods are often used. However, no method exists to model beforehand how implementing such measures will affect erosion at catchment scale. A method was developed to simulate the effects of SWC measures with catchment scale erosion models.
Scaling analysis and model estimation of solar corona index
Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik
2018-04-01
A monthly averaged solar green coronal index time series for the period January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), is analysed in this paper from the perspective of scaling analysis and modelling. Smoothing and de-noising have been done using a suitable mother wavelet as a prerequisite. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis and a generalized method have been applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) is used to identify the autoregressive (AR) process, and the partial autocorrelation function (PACF) has been used to obtain the order of the AR model. Finally, a best-fit model has been proposed using the Yule-Walker method, with supporting results from goodness-of-fit tests and the wavelet spectrum. The results reveal anti-persistent, short-range-dependent (SRD), self-similar properties with signatures of non-causality, non-stationarity and nonlinearity in the data series. The model shows the best fit to the data under observation.
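The rescaled range (R/S) method named above can be sketched as follows (a minimal textbook implementation of the general technique, not the authors' pipeline; the white-noise input is synthetic):

```python
import numpy as np

# Estimate the Hurst exponent H as the slope of log(R/S) versus
# log(window size): for each window, R is the range of the mean-adjusted
# cumulative sum and S is the window's standard deviation.

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())   # mean-adjusted cumulative sum
            R = dev.max() - dev.min()           # range of the cumulative sum
            S = seg.std()                       # segment standard deviation
            if S > 0:
                rs_vals.append(R / S)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)     # the slope estimates H
    return slope

rng = np.random.default_rng(42)
H_white = hurst_rs(rng.standard_normal(4096))   # white noise: H near 0.5
```

H < 0.5 indicates the anti-persistence the abstract reports, H > 0.5 persistence; for uncorrelated noise the estimate sits near 0.5 (with a known small-sample upward bias).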
A high-resolution global-scale groundwater model
de Graaf, I. E. M.; Sutanudjaja, E. H.; van Beek, L. P. H.; Bierkens, M. F. P.
2015-02-01
Groundwater is the world's largest accessible source of fresh water. It plays a vital role in satisfying basic needs for drinking water, agriculture and industrial activities. During times of drought groundwater sustains baseflow to rivers and wetlands, thereby supporting ecosystems. Most global-scale hydrological models (GHMs) do not include a groundwater flow component, mainly due to lack of geohydrological data at the global scale. For the simulation of lateral flow and groundwater head dynamics, a realistic physical representation of the groundwater system is needed, especially for GHMs that run at finer resolutions. In this study we present a global-scale groundwater model (run at 6' resolution) using MODFLOW to construct an equilibrium water table at its natural state as the result of long-term climatic forcing. The aquifer schematization and properties used are based on available global data sets of lithology and transmissivities combined with the estimated thickness of an upper, unconfined aquifer. This model is forced with outputs from the land-surface PCRaster Global Water Balance (PCR-GLOBWB) model, specifically net recharge and surface water levels. A sensitivity analysis, in which the model was run with various parameter settings, showed that variation in saturated conductivity has the largest impact on the groundwater levels simulated. Validation with observed groundwater heads showed that groundwater heads are reasonably well simulated for many regions of the world, especially for sediment basins (R² = 0.95). The simulated regional-scale groundwater patterns and flow paths demonstrate the relevance of lateral groundwater flow in GHMs. Inter-basin groundwater flows can be a significant part of a basin's water budget and help to sustain river baseflows, especially during droughts. Also, water availability of larger aquifer systems can be positively affected by additional recharge from inter-basin groundwater flows.
Using Scaling to Understand, Model and Predict Global Scale Anthropogenic and Natural Climate Change
Lovejoy, S.; del Rio Amador, L.
2014-12-01
The atmosphere is variable over twenty orders of magnitude in time (≈10⁻³ to 10¹⁷ s) and almost all of the variance is in the spectral "background", which we show can be divided into five scaling regimes: weather, macroweather, climate, macroclimate and megaclimate. We illustrate this with instrumental and paleo data. Based on the sign of the fluctuation exponent H, we argue that while the weather is "what you get" (H>0: fluctuations increasing with scale), it is macroweather (H<0: fluctuations decreasing with scale), not climate, "that you expect". The conventional framework that treats the background as close to white noise and focuses on quasi-periodic variability assumes a spectrum that is in error by a factor of a quadrillion (≈10¹⁵). Using this scaling framework, we can quantify the natural variability, distinguish it from anthropogenic variability, test various statistical hypotheses and make stochastic climate forecasts. For example, we estimate the probability that the warming is simply a giant century-long natural fluctuation to be less than 1%, and most likely less than 0.1%, and estimate return periods for natural warming events of different strengths and durations, including the slowdown ("pause") in the warming since 1998. The return period for the pause was found to be 20-50 years, i.e. not very unusual; however, it immediately follows a 6-year "pre-pause" warming event of almost the same magnitude with a similar return period (30-40 years). To improve on these unconditional estimates, we can use scaling models to exploit the long-range memory of the climate process to make accurate stochastic forecasts of the climate, including the pause. We illustrate stochastic forecasts on monthly and annual scale series of global and northern hemisphere surface temperatures. We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow: for example, using hindcasts we find that at 10 year forecast horizons we can still explain ≈ 15% of the
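The sign of the fluctuation exponent H can be estimated with Haar fluctuation analysis, a standard tool in this scaling framework (our sketch on synthetic data, not the authors' code): the Haar fluctuation at lag Δt is the difference between the means of the two halves of each interval, and H is the slope of log(mean |fluctuation|) against log(Δt).

```python
import numpy as np

# Haar fluctuation analysis: H > 0 means fluctuations grow with scale
# ("weather-like"), H < 0 means they decrease with scale
# ("macroweather-like"), matching the sign argument in the abstract.

def haar_H(x, lags=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    log_lag, log_fluc = [], []
    for lag in lags:
        half = lag // 2
        flucs = []
        for start in range(0, len(x) - lag + 1, lag):
            seg = x[start:start + lag]
            # Haar fluctuation: difference of the means of the two halves
            flucs.append(abs(seg[half:].mean() - seg[:half].mean()))
        log_lag.append(np.log(lag))
        log_fluc.append(np.log(np.mean(flucs)))
    slope, _ = np.polyfit(log_lag, log_fluc, 1)
    return slope

rng = np.random.default_rng(7)
noise = rng.standard_normal(8192)
H_macroweather = haar_H(noise)           # white noise: H close to -0.5
H_weather = haar_H(np.cumsum(noise))     # random walk: H close to +0.5
```

White noise averages out with scale (negative H, macroweather-like), while a random walk wanders increasingly far (positive H, weather-like), reproducing the qualitative distinction drawn in the abstract.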
Modeling field scale unsaturated flow and transport processes
International Nuclear Information System (INIS)
Gelhar, L.W.; Celia, M.A.; McLaughlin, D.
1994-08-01
The scales of concern in subsurface transport of contaminants from low-level radioactive waste disposal facilities are in the range of 1 to 1,000 m. Natural geologic materials generally show very substantial spatial variability in hydraulic properties over this range of scales. Such heterogeneity can significantly influence the migration of contaminants. It is also envisioned that complex earth structures will be constructed to isolate the waste and minimize infiltration of water into the facility. The flow of water and gases through such facilities must also be a concern. A stochastic theory describing unsaturated flow and contaminant transport in naturally heterogeneous soils has been enhanced by adopting a more realistic characterization of soil variability. The enhanced theory is used to predict field-scale effective properties and variances of tension and moisture content. Applications illustrate the important effects of small-scale heterogeneity on large-scale anisotropy and hysteresis and demonstrate the feasibility of simulating two-dimensional flow systems at time and space scales of interest in radioactive waste disposal investigations. Numerical algorithms for predicting field scale unsaturated flow and contaminant transport have been improved by requiring them to respect fundamental physical principles such as mass conservation. These algorithms are able to provide realistic simulations of systems with very dry initial conditions and high degrees of heterogeneity. Numerical simulation of the movement of water and air in unsaturated soils has demonstrated the importance of air pathways for contaminant transport. The stochastic flow and transport theory has been used to develop a systematic approach to performance assessment and site characterization. Hypothesis-testing techniques have been used to determine whether model predictions are consistent with observed data.
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
Towards modeling intergranular stress corrosion cracks on grain size scales
International Nuclear Information System (INIS)
Simonovski, Igor; Cizelj, Leon
2012-01-01
Highlights: ► Simulating the onset and propagation of intergranular cracking. ► Model based on the as-measured geometry and crystallographic orientations. ► Feasibility and performance of the proposed computational approach demonstrated. - Abstract: Development of advanced models at the grain-size scale has so far been mostly limited to simulated geometry structures such as, for example, 3D Voronoi tessellations. The difficulty came from a lack of non-destructive techniques for measuring the microstructures. In this work a novel grain-size-scale approach for modelling intergranular stress corrosion cracking, based on the as-measured 3D grain structure of a 400 μm stainless steel wire, is presented. Grain topologies and crystallographic orientations are obtained using diffraction contrast tomography, reconstructed within a detailed finite element model and coupled with advanced constitutive models for the grains and grain boundaries. The wire is composed of 362 grains and over 1600 grain boundaries. Grain boundary damage initialization and early development are then explored for a number of cases, ranging from isotropic elasticity up to crystal plasticity constitutive laws for the bulk grain material. In all cases the grain boundaries are modeled using the cohesive zone approach. The feasibility of the approach is explored.
Multi-scale modeling of the CD8 immune response
Energy Technology Data Exchange (ETDEWEB)
Barbarroux, Loic, E-mail: loic.barbarroux@doctorant.ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Michel, Philippe, E-mail: philippe.michel@ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Adimy, Mostafa, E-mail: mostafa.adimy@inria.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France); Crauste, Fabien, E-mail: crauste@math.univ-lyon1.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France)
2016-06-08
During the primary CD8 T-Cell immune response to an intracellular pathogen, CD8 T-Cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to address the infection and memorize the corresponding antigen. Once the pathogen has been cleared, the only CD8 T-Cells left are antigen-specific memory cells whose role is to respond stronger and faster if they are presented this same antigen again. That is how vaccines work: a small quantity of a weakened pathogen is introduced into the organism to trigger the primary response, generating corresponding memory cells in the process and giving the organism a way to defend itself in case it encounters the same pathogen again. To investigate this process, we propose a nonlinear, multi-scale mathematical model of the CD8 T-Cell immune response to vaccination using a maturity-structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a delay differential equation system, which gives the maturation speed for each cell. The population of cells is modeled by a maturity-structured equation whose speeds are given by the intracellular model. We focus here on building the model, as well as on its asymptotic study. Finally, we display numerical simulations showing that the model can reproduce the biological dynamics of the cell population for both the primary and the secondary responses.
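For readers unfamiliar with maturity-structured formulations, the coupling this abstract describes can be sketched schematically as follows; the symbols below are illustrative and not necessarily the authors' notation:

```latex
% Maturity-structured population equation whose transport speed is set by
% an intracellular delay system (schematic sketch, notation ours):
\partial_t n(t,m) + \partial_m\big[\, v(P)\, n(t,m) \,\big]
  = \big[\rho(m) - \mu(m)\big]\, n(t,m),
\qquad
\frac{\mathrm{d}P}{\mathrm{d}t}(t) = F\big(P(t),\, P(t-\tau)\big),
```

where n(t,m) is the cell density over maturity m, P collects intracellular protein levels whose delayed dynamics F set the maturation speed v, ρ and μ are proliferation and death rates, and τ is the intracellular delay.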
Site-scale groundwater flow modelling of Aberg
Energy Technology Data Exchange (ETDEWEB)
Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)
1998-12-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the
Large scale hydro-economic modelling for policy support
de Roo, Ad; Burek, Peter; Bouraoui, Faycal; Reynaud, Arnaud; Udias, Angel; Pistocchi, Alberto; Lanzanova, Denis; Trichakis, Ioannis; Beck, Hylke; Bernhard, Jeroen
2014-05-01
To support European Union water policy making and policy monitoring, a hydro-economic modelling environment has been developed to assess optimum combinations of water retention measures, water savings measures, and nutrient reduction measures for continental Europe. This modelling environment links the agricultural CAPRI model, the LUMP land use model, the LISFLOOD water quantity model, the EPIC water quality model, the LISQUAL combined water quantity, quality and hydro-economic model, and a multi-criteria optimisation routine. With this modelling environment, river-basin-scale simulations are carried out to assess the effects of water-retention measures, water-saving measures, and nutrient-reduction measures on several hydro-chemical indicators, such as the Water Exploitation Index (WEI), nitrate and phosphate concentrations in rivers, the 50-year return period river discharge as an indicator for flooding, and economic losses due to water scarcity for the agricultural, manufacturing-industry, energy-production and domestic sectors, as well as the economic loss due to flood damage. Recently, this modelling environment has been extended with a groundwater model to evaluate the effects of measures on the average groundwater table and available resources. Water allocation rules are also addressed, with environmental flow included as a minimum requirement for the environment. The economic functions are currently being updated as well. Recent developments and examples will be shown and discussed, as well as open challenges.
Modeling and simulation in tribology across scales: An overview
DEFF Research Database (Denmark)
Vakis, A.I.; Yastrebov, V.A.; Scheibert, J.
2018-01-01
This review summarizes recent advances in the area of tribology based on the outcome of a Lorentz Center workshop surveying various physical, chemical and mechanical phenomena across scales. Among the main themes discussed were those of rough surface representations, the breakdown of continuum...... nonlinear effects of plasticity, adhesion, friction, wear, lubrication and surface chemistry in tribological models. For each topic, we propose some research directions....
Phenomenological aspects of no-scale inflation models
Energy Technology Data Exchange (ETDEWEB)
Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King's College London, WC2R 2LS London (United Kingdom); Garcia, Marcos A.G.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V., E-mail: john.ellis@cern.ch, E-mail: garciagarcia@physics.umn.edu, E-mail: dimitri@physics.tamu.edu, E-mail: olive@physics.umn.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, 77843 Texas (United States)
2015-10-01
We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n{sub s} and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m{sub 0} = B{sub 0} = A{sub 0} = 0, of the CMSSM type with universal A{sub 0} and m{sub 0} ≠ 0 at a high scale, and of the mSUGRA type with A{sub 0} = B{sub 0} + m{sub 0} boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m{sub 1/2} ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.
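For orientation, the Starobinsky-like benchmark that such models approach predicts the following standard textbook values (N{sub *} is the number of e-folds before the end of inflation; the numbers are not taken from this paper):

```latex
% Starobinsky-model benchmark predictions for the observables in the abstract:
n_s \simeq 1 - \frac{2}{N_*}, \qquad r \simeq \frac{12}{N_*^{2}},
\qquad N_* \approx 55 \;\Rightarrow\; n_s \approx 0.96, \quad r \approx 4\times 10^{-3}.
```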
Perturbation theory instead of large scale shell model calculations
International Nuclear Information System (INIS)
Feldmeier, H.; Mankos, P.
1977-01-01
Results of large-scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3) basis is used as a starting point. The results indicate that a perturbation-theory treatment in an SU(3) basis including 2ℏω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.) [de]
Next-generation genome-scale models for metabolic engineering
DEFF Research Database (Denmark)
King, Zachary A.; Lloyd, Colton J.; Feist, Adam M.
2015-01-01
Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict...... examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering....
Scaling theory of depinning in the Sneppen model
International Nuclear Information System (INIS)
Maslov, S.; Paczuski, M.
1994-01-01
We develop a scaling theory for the critical depinning behavior of the Sneppen interface model [Phys. Rev. Lett. 69, 3539 (1992)]. This theory is based on a "gap" equation that describes the self-organization process to a critical state of the depinning transition. All of the critical exponents can be expressed in terms of two independent exponents, ν∥(d) and ν⊥(d), characterizing the divergence of the parallel and perpendicular correlation lengths as the interface approaches its dynamical attractor.
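The extremal ("weakest-site-advances") dynamics underlying such models can be illustrated with a toy simulation. This is our own minimal sketch in the spirit of the Sneppen model, not the authors' code: the slope-constraint handling is simplified, and the numerical values (system size, step count) are arbitrary.

```python
import random

# Toy extremal-dynamics sketch: the site with the smallest random pinning
# number advances and draws a fresh number; lagging neighbours are advanced
# to keep the interface slope constraint |h[i+1]-h[i]| <= 1. The running
# maximum of the selected minima (the "gap") self-organizes toward a
# critical pinning threshold.
random.seed(1)
L = 200                                   # periodic interface of L sites
h = [0] * L                               # interface heights
f = [random.random() for _ in range(L)]   # random pinning numbers
gap = 0.0

def advance(i):
    h[i] += 1
    f[i] = random.random()
    for j in ((i - 1) % L, (i + 1) % L):
        while h[j] < h[i] - 1:            # restore the slope constraint
            advance(j)

for _ in range(20000):
    i = min(range(L), key=f.__getitem__)  # extremal (weakest) site
    gap = max(gap, f[i])
    advance(i)

print(f"gap after 20000 extremal updates: {gap:.3f}")
```

The monotonically growing gap is the quantity the "gap equation" of the scaling theory tracks as the system approaches the depinning threshold.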
Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing
Nance, Donald; Liever, Peter; Nielsen, Tanner
2015-01-01
The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantify the effectiveness of the SLS IOP suppression system and improve the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series require identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.
Atmospheric dispersion modelling over complex terrain at small scale
Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.
2014-03-01
A previous study concerned qualitative modelling of neutrally stratified flow over an open-cut coal mine and important surrounding topography at meso-scale (1:9000); it revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion, as well as the surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of the air quality at the populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be observed.
Two-scale modelling for hydro-mechanical damage
International Nuclear Information System (INIS)
Frey, J.; Chambon, R.; Dascalu, C.
2010-01-01
Document available in extended abstract form only. Excavation works for underground storage create a damage zone in the nearby rock and affect its hydraulic properties. This degradation, already observed in laboratory tests, can create a leading path for fluids. The micro-fracture phenomena, which occur at a smaller scale and affect the rock permeability, must be fully understood in order to minimize the transfer process. Many methods can be used to take into account the microstructure of heterogeneous materials. Among them, a method has been developed recently in which, instead of using a constitutive equation obtained by phenomenological considerations or by some homogenization technique, the representative elementary volume (R.E.V.) is modelled as a structure and the links between a prescribed kinematics and the corresponding dual forces are deduced numerically. This yields the so-called Finite Element squared method (FE2). From a numerical point of view, a finite element model is used at the macroscopic level and, for each Gauss point, computations on the microstructure give the usual results of a constitutive law. This numerical approach is now classical for properly modelling materials such as composites; the efficiency of such a numerical homogenization process has been shown, and it allows numerical modelling of deformation processes associated with various micro-structural changes. The aim of this work is to describe, through such a method, damage of the rock with a two-scale hydro-mechanical model. The rock damage at the macroscopic scale is directly linked with an analysis of the microstructure. At the macroscopic scale a two-phase problem is studied: a solid skeleton is filled by a filtrating fluid. It is necessary to enforce two balance equations and two mass conservation equations. A classical way to deal with such a problem is to work with the balance equation of the whole mixture, and the mass fluid conservation written in a weak form, the mass
Kuhn, Alexander
2013-12-05
Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCS is to compute height ridges in the finite-time Lyapunov exponent field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieve subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCS on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamic examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations. © 2013 The Authors Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
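The standard FTLE recipe that this abstract contrasts against (advect particles, finite-difference the flow map, take the largest stretching rate) can be sketched in a few lines. We use a linear saddle flow u = (x, -y), for which the exact FTLE is 1; the flow, step sizes and tolerances are our own illustrative choices, not from the paper.

```python
import math

# Finite-time Lyapunov exponent (FTLE) sketch: advect nearby particles,
# finite-difference the flow map, and take the largest singular value of
# its gradient via the Cauchy-Green tensor C = F^T F.
def flow_map(x, y, T=1.0, n=1000):
    dt = T / n
    for _ in range(n):                    # explicit Euler advection of u=(x,-y)
        x, y = x + dt * x, y - dt * y
    return x, y

T, eps = 1.0, 1e-4
# flow-map gradient F at the origin by central differences
xpx, ypx = flow_map(eps, 0.0, T); xmx, ymx = flow_map(-eps, 0.0, T)
xpy, ypy = flow_map(0.0, eps, T); xmy, ymy = flow_map(0.0, -eps, T)
F = [[(xpx - xmx) / (2 * eps), (xpy - xmy) / (2 * eps)],
     [(ypx - ymx) / (2 * eps), (ypy - ymy) / (2 * eps)]]

# largest eigenvalue of C = F^T F for a 2x2 matrix, then the FTLE
a = F[0][0] ** 2 + F[1][0] ** 2
b = F[0][0] * F[0][1] + F[1][0] * F[1][1]
c = F[0][1] ** 2 + F[1][1] ** 2
lam_max = 0.5 * (a + c + math.sqrt((a - c) ** 2 + 4 * b ** 2))
ftle = math.log(math.sqrt(lam_max)) / T   # exact value for this flow is 1
print(f"FTLE of the linear saddle: {ftle:.4f}")
```

The paper's contribution replaces the dense particle sampling implicit in this recipe with a geometric reconstruction of the flow map from a sparse set of given trajectories.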
Model Predictive Control for a Small Scale Unmanned Helicopter
Directory of Open Access Journals (Sweden)
Jianfu Du
2008-11-01
Full Text Available Kinematical and dynamical equations of a small-scale unmanned helicopter are presented in the paper. Based on these equations, a model predictive control (MPC) method is proposed for controlling the helicopter. This novel method allows direct accounting for the existing time delays, which are used to model the dynamics of the actuators and the aerodynamics of the main rotor. The limits of the actuators are also taken into consideration during the controller design. The proposed control algorithm was verified in real flight experiments, where good performance was shown in position control mode.
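The delay-aware receding-horizon idea can be sketched on a toy plant. Everything below (the scalar plant, its two-step input delay, the weights and the horizon) is invented for illustration and is not the helicopter model from the paper; the delay is folded into the state by augmentation, which is one standard way MPC accounts for actuator lag.

```python
import numpy as np

# Unconstrained receding-horizon control of x[k+1] = a*x[k] + b*u[k-2]:
# the two-step input delay is handled by augmenting the state with the
# last two inputs, z = [x, u[k-1], u[k-2]].
a_p, b_p = 0.9, 0.5                # plant pole and gain (illustrative)
H, r = 10, 0.01                    # horizon length, input weight

A = np.array([[a_p, 0.0, b_p],
              [0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0]])    # only x is penalized

# prediction over the horizon: x_pred = Phi @ z0 + Gam @ U
Phi = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(1, H + 1)])
Gam = np.zeros((H, H))
for i in range(H):
    for j in range(i + 1):
        Gam[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

# least-squares gain for min ||x_pred||^2 + r*||U||^2
K = np.linalg.solve(Gam.T @ Gam + r * np.eye(H), Gam.T)

z = np.array([[1.0], [0.0], [0.0]])   # start at x=1 with idle actuator history
xs = [1.0]
for _ in range(40):
    U = -K @ (Phi @ z)               # plan over the horizon...
    z = A @ z + B * U[0, 0]          # ...apply only the first input, re-plan
    xs.append(z[0, 0])
print(f"|x| after 40 steps: {abs(xs[-1]):.2e}")
```

Because the prediction matrices are built from the augmented model, the controller "knows" that an input only reaches x two steps later, which is the essence of accounting for delays directly in MPC.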
Multi-scale modeling of ductile failure in metallic alloys
International Nuclear Information System (INIS)
Pardoen, Th.; Scheyvaerts, F.; Simar, A.; Tekoglu, C.; Onck, P.R.
2010-01-01
Micro-mechanical models for ductile failure were developed in the seventies and eighties essentially to address cracking in structural applications and to complement the fracture mechanics approach. Later, this approach became attractive for physical metallurgists interested in the prediction of failure during forming operations and as a guide for the design of more ductile and/or higher-toughness microstructures. Nowadays, a realistic treatment of damage evolution in complex metallic microstructures is becoming feasible when sufficiently sophisticated constitutive laws are used within the context of a multilevel modelling strategy. The current understanding and the state-of-the-art models for the nucleation, growth and coalescence of voids are reviewed with a focus on the underlying physics. Consideration is given to the introduction of the different length scales associated with the microstructure and the damage process. Two applications of the methodology are then described to illustrate the potential of the current models. The first application concerns the competition between intergranular and transgranular ductile fracture in aluminum alloys involving soft precipitate-free zones along the grain boundaries. The second application concerns the modeling of ductile failure in friction stir welded joints, a problem which also involves soft and hard zones, albeit at a larger scale. (authors)
Finite element modeling of multilayered structures of fish scales.
Chandler, Mei Qiang; Allison, Paul G; Rodriguez, Rogie I; Moser, Robert D; Kennedy, Alan J
2014-12-01
The interlinked fish scales of Atractosteus spatula (alligator gar) and Polypterus senegalus (gray and albino bichir) are effective multilayered armor systems for protecting fish from threats such as aggressive conspecific interactions or predation. Both types of fish scales have multi-layered structures with a harder and stiffer outer layer, and softer and more compliant inner layers. However, there are differences in relative layer thickness, property mismatch between layers, and the property gradations and nanostructures in each layer. The fracture paths and patterns of the two scales under microindentation loads were different. In this work, finite element models of the fish scales of A. spatula and P. senegalus were built to investigate the mechanics of their multi-layered structures under penetration loads. The models simulate a rigid microindenter penetrating the fish scales quasi-statically to understand the observed experimental results. Study results indicate that the different fracture patterns and crack paths observed in the experiments were related to the different stress fields caused by the differences in layer thickness, in the spatial distribution of the elastic and plastic properties in the layers, and in the interface properties. The parametric studies and experimental results suggest that smaller fish such as P. senegalus may have adopted a thinner outer layer for light-weighting and improved mobility, while compensating with higher strength and modulus in the outer layer and stronger interface properties to prevent ring cracking and interface cracking. Larger fish such as A. spatula and Arapaima gigas have lower strength and modulus in the outer layers and weaker interface properties, but have adopted thicker outer layers that provide adequate protection against ring cracking and interface cracking, possibly because weight is less of a concern than for smaller fish such as P. senegalus. Published by Elsevier Ltd.
Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones
Painter, S. L.; Coon, E. T.; Brooks, S. C.
2017-12-01
Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.
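The simplest member of the model family this abstract describes is a channel solute coupled to a single well-mixed storage zone by first-order exchange (the classic transient-storage caricature). The sketch below is our own illustration of that coupling, not the ATS subgrid model itself, and all parameter values are invented.

```python
import numpy as np

# 1D channel advection (first-order upwind, periodic) plus linear exchange
# with a subgrid storage zone: the exchange produces the long residence-time
# tail that hyporheic models are built to capture.
nx, dx, dt = 200, 1.0, 0.1      # cells, cell size, time step (CFL = 0.1)
u, alpha = 1.0, 0.05            # channel velocity, exchange rate (assumed)
c = np.zeros(nx)                # channel concentration
s = np.zeros(nx)                # storage-zone concentration
c[:5] = 1.0                     # tracer pulse near the inlet

for _ in range(1000):
    c_new = c - u * dt / dx * (c - np.roll(c, 1)) + alpha * dt * (s - c)
    s_new = s + alpha * dt * (c - s)
    c, s = c_new, s_new         # total mass c+s is conserved exactly

print(f"channel mass {c.sum():.3f}, storage mass {s.sum():.3f}")
```

The residence-time formulation of the paper generalizes the single storage box into an ensemble of Lagrangian travel-time coordinates, which is what allows full multicomponent biogeochemistry along each hyporheic flowpath.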
Scaling behavior of an airplane-boarding model
Brics, Martins; Kaupužs, Jevgenijs; Mahnke, Reinhard
2013-04-01
An airplane-boarding model, introduced earlier by Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)], is studied with the aim of determining precisely its asymptotic power-law scaling behavior for a large number of passengers N. Based on Monte Carlo simulation data for very large system sizes up to N=2^16=65536, we have analyzed numerically the scaling behavior of the mean boarding time and other related quantities. In analogy with critical phenomena, we have used appropriate scaling Ansätze, which include the leading term as some power of N (e.g., ∝N^α for the mean boarding time t_b), as well as power-law corrections to scaling. Our results clearly show that α=1/2 holds with a very high numerical accuracy (α=0.5001±0.0001). This value deviates essentially from α≈0.69, obtained earlier by Frette and Hemmer from data within the range 2≤N≤16. Our results confirm the convergence of the effective exponent α_eff(N) to 1/2 at large N as observed by Bernstein. Our analysis explains this effect. Namely, the effective exponent α_eff(N) varies from values about 0.7 for small system sizes to the true asymptotic value 1/2 at N→∞, almost linearly in N^{-1/3} for large N. This means that the variation is caused by corrections to scaling, the leading correction-to-scaling exponent being θ≈1/3. We have also estimated other exponents: ν=1/2 for the mean number of passengers taking seats simultaneously in one time step, β=1 for the second moment of t_b, and γ≈1/3 for its variance.
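The drift of the effective exponent toward 1/2 with an N^{-1/3} correction is easy to reproduce numerically. The scaling form below is taken from the abstract, while the constants a and b are arbitrary illustrative values, not fitted ones.

```python
import math

# Assume a mean boarding time t(N) = a*sqrt(N)*(1 - b*N**(-1/3)) and watch
# the effective exponent between successive doublings drift toward 1/2,
# mimicking the correction-to-scaling behavior described above.
a, b = 1.0, 0.9
sizes = [2 ** k for k in range(4, 17)]
t = {N: a * math.sqrt(N) * (1.0 - b * N ** (-1.0 / 3.0)) for N in sizes}

# effective exponent from successive doublings: alpha_eff = log2(t(2N)/t(N))
alpha_eff = [math.log2(t[2 * N] / t[N]) for N in sizes[:-1]]
for N, al in zip(sizes, alpha_eff):
    print(f"N = {N:6d}   alpha_eff = {al:.4f}")
```

For the smallest sizes the effective exponent sits well above 0.6 (compare Frette and Hemmer's 0.69), and it decreases monotonically toward 1/2 as N grows, exactly the mechanism the abstract attributes to the θ≈1/3 correction.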
Modeling and Simulation of a lab-scale Fluidised Bed
Directory of Open Access Journals (Sweden)
Britt Halvorsen
2002-04-01
Full Text Available The flow behaviour of a lab-scale fluidised bed with a central jet has been simulated. The study has been performed with an in-house computational fluid dynamics (CFD) model named FLOTRACS-MP-3D. The CFD model is based on a multi-fluid Eulerian description of the phases, where the kinetic theory for granular flow forms the basis for turbulence modelling of the solid phases. A two-dimensional Cartesian co-ordinate system is used to describe the geometry. This paper discusses whether bubble formation and bed height are influenced by the coefficient of restitution, the drag model and the number of solid phases. Measurements of the same fluidised bed with a digital video camera are performed. Computational results are compared with the experimental results, and the discrepancies are discussed.
Towards a 'standard model' of large scale structure formation
International Nuclear Information System (INIS)
Shafi, Q.
1994-01-01
We explore constraints on inflationary models employing data on large scale structure mainly from COBE temperature anisotropies and IRAS selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favors cold plus hot dark matter models with n equal or close to unity and Ω HDM ∼ 0.2 - 0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs
Censored rainfall modelling for estimation of fine-scale extremes
Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro
2018-01-01
Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have tended to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
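The rectangular-pulse mechanism underlying the Bartlett-Lewis model can be sketched in a few lines (the exponential distributional choices follow the classic BLRP structure, but the parameter values below are illustrative, not the calibrated ones): storms arrive as a Poisson process, each storm spawns rain cells during an exponentially distributed activity window, and each cell deposits rain as a rectangular pulse of random duration and intensity.

```python
import numpy as np

rng = np.random.default_rng(0)

def blrp_rainfall(T, lam, beta, gamma, eta, mu_x, dt=1.0):
    """Simulate a Bartlett-Lewis rectangular-pulse rainfall series.
    Storms arrive as a Poisson process (rate lam); within each storm,
    cells arrive at rate beta over an Exp(gamma) activity window; each
    cell is a rectangular pulse with Exp(eta) duration and exponential
    intensity of mean mu_x. Returns rainfall depth per bin of width dt."""
    n_bins = int(T / dt)
    rain = np.zeros(n_bins)
    t = rng.exponential(1.0 / lam)            # first storm origin
    while t < T:
        active = rng.exponential(1.0 / gamma) # storm activity window
        c = 0.0                               # first cell at the storm origin
        while c <= active:
            start = t + c
            dur = rng.exponential(1.0 / eta)
            inten = rng.exponential(mu_x)
            i0 = int(start / dt)
            i1 = min(int((start + dur) / dt) + 1, n_bins)
            for i in range(i0, i1):           # spread the pulse over bins
                lo = max(start, i * dt)
                hi = min(start + dur, (i + 1) * dt)
                rain[i] += inten * max(hi - lo, 0.0)
            c += rng.exponential(1.0 / beta)  # next cell within the storm
        t += rng.exponential(1.0 / lam)       # next storm origin
    return rain

rain = blrp_rainfall(T=2000.0, lam=0.02, beta=0.5, gamma=0.2, eta=1.0, mu_x=2.0)
```

The censored-calibration idea described above would then fit such parameters against only the heavy portion of the observed record.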
Effects of input uncertainty on cross-scale crop modeling
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils and agricultural management in the tropics is in general low or data is scarce leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input
Atmospheric Boundary Layer Modeling for Combined Meteorology and Air Quality Systems
Atmospheric Eulerian grid models for mesoscale and larger applications require sub-grid models for turbulent vertical exchange processes, particularly within the Planetary Boundary Layer (PBL). In combined meteorology and air quality modeling systems, consistent PBL modeling of wi...
Probabilistic flood damage modelling at the meso-scale
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2014-05-01
Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand by comparing the results with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. Thus, a significant advantage of the probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
A hybrid plume model for local-scale dispersion
Energy Technology Data Exchange (ETDEWEB)
Nikmo, J.; Tuovinen, J.P.; Kukkonen, J.; Valkama, I.
1997-12-31
The report describes the contribution of the Finnish Meteorological Institute to the project 'Dispersion from Strongly Buoyant Sources', under the 'Environment' programme of the European Union. The project addresses the atmospheric dispersion of gases and particles emitted from typical fires in warehouses and chemical stores. In the study only the 'passive plume' regime, in which the influence of plume buoyancy is no longer important, is addressed. The mathematical model developed and its numerical testing are discussed. The model is based on atmospheric boundary-layer scaling theory. In the vicinity of the source, Gaussian equations are used in both the horizontal and vertical directions. After a specified transition distance, gradient transfer theory is applied in the vertical direction, while the horizontal dispersion is still assumed to be Gaussian. The dispersion parameters and eddy diffusivity are modelled in a form which facilitates the use of a meteorological pre-processor. Also presented is a new model for the vertical eddy diffusivity (K_z), which is a continuous function of height across the various atmospheric scaling regions. The model includes a treatment of the dry deposition of gases and particulate matter, but wet deposition has been neglected. A numerical solver for the atmospheric diffusion equation (ADE) has been developed. The accuracy of the numerical model was analysed by comparing the model predictions with two analytical solutions of the ADE. The numerical deviations of the model predictions from these analytic solutions were less than two per cent for the computational regime. The report gives numerical results for the vertical profiles of the eddy diffusivity and the dispersion parameters, and shows spatial concentration distributions in various atmospheric conditions. 39 refs.
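The near-source Gaussian part of such a hybrid scheme can be sketched as follows. The reflected (image-source) plume formula is standard; the linear growth of the dispersion parameters σ_y = a·x and σ_z = b·x and the coefficients a, b are placeholder assumptions here, whereas the model described above derives them from boundary-layer scaling via a meteorological pre-processor:

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration at (x, y, z) downwind
    of a continuous point source of strength Q at stack height H, with
    mean wind speed u. sigma_y = a*x and sigma_z = b*x are simple linear
    stand-ins for stability-dependent dispersion curves."""
    sy, sz = a * x, b * x
    lateral = np.exp(-y**2 / (2.0 * sy**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sz**2))
                + np.exp(-(z + H)**2 / (2.0 * sz**2)))  # image source = ground reflection
    return Q / (2.0 * np.pi * u * sy * sz) * lateral * vertical
```

Beyond the transition distance, the hybrid model above replaces the vertical Gaussian factor with a gradient-transfer (K_z) solution while keeping the lateral Gaussian form.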
Modelling soil erosion at European scale: towards harmonization and reproducibility
Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.
2015-02-01
Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to a decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical, and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite these efforts, the predictive value of existing models is still limited, especially at regional and continental scale, because systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is proposed here. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented, merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor to better account for soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
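The multiplicative structure of RUSLE referred to above is simple enough to state directly. In this sketch the stoniness correction St appears as an extra multiplicative factor; that placement is an illustrative reading of the extension described, not its exact published form:

```python
def rusle_soil_loss(R, K, LS, C, P, St=1.0):
    """Mean annual soil loss A (t ha^-1 yr^-1) from the (R)USLE:
    A = R * K * LS * C * P, where
      R  is rainfall-runoff erosivity,   K  soil erodibility,
      LS the slope length/steepness factor, C cover-management,
      P  the support-practice factor, and St an additional stoniness
      correction in [0, 1] (illustrative placement of the new factor)."""
    return R * K * LS * C * P * St

# example: erosive climate, moderately erodible soil, sparse cover
A = rusle_soil_loss(R=700.0, K=0.03, LS=1.2, C=0.2, P=1.0)
```

Because every factor is multiplicative, a per-cell ensemble over alternative erosivity equations (as in the climatic ensemble above) only needs to vary R.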
Spatial modeling of agricultural land use change at global scale
Meiyappan, P.; Dalton, M.; O'Neill, B. C.; Jain, A. K.
2014-11-01
Long-term modeling of agricultural land use is central in global scale assessments of climate change, food security, biodiversity, and climate adaptation and mitigation policies. We present a global-scale dynamic land use allocation model and show that it can reproduce the broad spatial features of the past 100 years of evolution of cropland and pastureland patterns. The modeling approach integrates economic theory, observed land use history, and data on both socioeconomic and biophysical determinants of land use change, and estimates relationships using long-term historical data, thereby making it suitable for long-term projections. The underlying economic motivation is maximization of expected profits by hypothesized landowners within each grid cell. The model predicts fractional land use for cropland and pastureland within each grid cell based on socioeconomic and biophysical driving factors that change with time. The model explicitly incorporates the following key features: (1) land use competition, (2) spatial heterogeneity in the nature of driving factors across geographic regions, (3) spatial heterogeneity in the relative importance of driving factors and previous land use patterns in determining land use allocation, and (4) spatial and temporal autocorrelation in land use patterns. We show that land use allocation approaches based solely on previous land use history (but disregarding the impact of driving factors), or those accounting for both land use history and driving factors by mechanistically fitting models for the spatial processes of land use change do not reproduce well long-term historical land use patterns. With an example application to the terrestrial carbon cycle, we show that such inaccuracies in land use allocation can translate into significant implications for global environmental assessments. The modeling approach and its evaluation provide an example that can be useful to the land use, Integrated Assessment, and the Earth system modeling
Uncertainty Quantification in Scale-Dependent Models of Flow in Porous Media: SCALE-DEPENDENT UQ
Energy Technology Data Exchange (ETDEWEB)
Tartakovsky, A. M. [Computational Mathematics Group, Pacific Northwest National Laboratory, Richland WA USA; Panzeri, M. [Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano, Milano Italy; Tartakovsky, G. D. [Hydrology Group, Pacific Northwest National Laboratory, Richland WA USA; Guadagnini, A. [Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano, Milano Italy
2017-11-01
Equations governing flow and transport in heterogeneous porous media are scale-dependent. We demonstrate that it is possible to identify a support scale η*, such that the typically employed approximate formulations of Moment Equations (ME) yield accurate (statistical) moments of a target environmental state variable. Under these circumstances, the ME approach can be used as an alternative to the Monte Carlo (MC) method for Uncertainty Quantification in diverse fields of Earth and environmental sciences. MEs are directly satisfied by the leading moments of the quantities of interest and are defined on the same support scale as the governing stochastic partial differential equations (PDEs). Computable approximations of the otherwise exact MEs can be obtained through perturbation expansion of the moments of the state variables in orders of the standard deviation of the random model parameters. As such, their convergence is guaranteed only for standard deviations smaller than one. We demonstrate our approach in the context of steady-state groundwater flow in a porous medium with a spatially random hydraulic conductivity.
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
Tacit knowledge in academia: a proposed model and measurement scale.
Leonard, Nancy; Insch, Gary S
2005-11-01
The authors propose a multidimensional model of tacit knowledge and develop a measure of tacit knowledge in academia. They discuss the theory and extant literature on tacit knowledge and propose a 6-factor model. Experiment 1 is a replication of a recent study of academic tacit knowledge using the scale developed and administered at an Israeli university (A. Somech & R. Bogler, 1999). The results of the replication differed from those found in the original study. For Experiment 2, the authors developed a domain-specific measure of academic tacit knowledge, the Academic Tacit Knowledge Scale (ATKS), and used this measure to explore the multidimensionality of tacit knowledge proposed in the model. The results of an exploratory factor analysis (n=142) followed by a confirmatory factor analysis (n=286) are reported. The sample for both experiments was 428 undergraduate students enrolled at a large public university in the eastern United States. Results indicated that a 5-factor model of academic tacit knowledge provided a strong fit for the data.
Multi-scale modeling of carbon capture systems
Energy Technology Data Exchange (ETDEWEB)
Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-03
The development and scale-up of cost-effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset, consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric analysis (TGA) data.
Scaling of coercivity in a 3d random anisotropy model
Energy Technology Data Exchange (ETDEWEB)
Proctor, T.C., E-mail: proctortc@gmail.com; Chudnovsky, E.M., E-mail: EUGENE.CHUDNOVSKY@lehman.cuny.edu; Garanin, D.A.
2015-06-15
The random-anisotropy Heisenberg model is numerically studied on lattices containing over ten million spins. The study is focused on hysteresis and metastability due to topological defects, and is relevant to the magnetic properties of amorphous and sintered magnets. We are interested in the limit when ferromagnetic correlations extend beyond the size of the grain inside which the magnetic anisotropy axes are correlated. In that limit the coercive field computed numerically roughly scales as the fourth power of the random anisotropy strength and as the sixth power of the grain size. Theoretical arguments are presented that provide an explanation of the numerical results. Our findings should be helpful for designing amorphous and nanosintered materials with desired magnetic properties. - Highlights: • We study the random-anisotropy model on lattices containing up to ten million spins. • Irreversible behavior due to topological defects (hedgehogs) is elucidated. • The hysteresis loop area scales as the fourth power of the random anisotropy strength. • In nanosintered magnets the coercivity scales as the sixth power of the grain size.
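The two reported power laws can be combined into a single empirical scaling relation. The prefactor c is a material-dependent constant that is an assumption here (the abstract reports only the exponents):

```python
def coercive_field(h_r, r_a, c=1.0):
    """Empirical scaling reported above: H_c ~ c * h_r**4 * r_a**6,
    where h_r is the random-anisotropy strength, r_a the grain size
    inside which anisotropy axes are correlated, and c an assumed
    material-dependent prefactor."""
    return c * h_r**4 * r_a**6
```

By this scaling, doubling the grain size raises the coercive field 64-fold, while doubling the anisotropy strength raises it 16-fold, which is why coercivity in such magnets is so sensitive to grain size.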
A model for AGN variability on multiple time-scales
Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.
2018-05-01
We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/L_Edd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/L_Edd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
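The structure-function measurement used above to compare variability across time-scales can be sketched directly; the pair-binning tolerance and the random-walk test light curve below are assumptions for illustration:

```python
import numpy as np

def structure_function(t, m, taus, tol=0.5):
    """First-order structure function SF(tau) = sqrt(<[m(t+tau) - m(t)]^2>)
    of a light curve m sampled at times t. All pairs whose lag falls
    within tol of a requested tau contribute to that bin."""
    lag = np.abs(t[:, None] - t[None, :])
    dm2 = (m[:, None] - m[None, :]) ** 2
    sf = []
    for tau in taus:
        sel = np.abs(lag - tau) < tol
        sf.append(np.sqrt(dm2[sel].mean()) if sel.any() else np.nan)
    return np.array(sf)

# random-walk light curve: SF grows with lag, roughly as sqrt(tau)
rng = np.random.default_rng(1)
t = np.arange(500.0)
m = np.cumsum(rng.standard_normal(500))
sf = structure_function(t, m, taus=[1.0, 100.0])
```

Plotting SF against lag for many objects is what produces the days-to-Gyr SF plot described above; the slope of such a curve constrains the underlying PSD.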
European Continental Scale Hydrological Model, Limitations and Challenges
Rouholahnejad, E.; Abbaspour, K.
2014-12-01
The pressures on water resources due to increasing levels of societal demand, increasing conflicts of interest and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration, where water bodies are viewed in the context of the whole river system and managed as a unit within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at continental scale and the use of globally available data, streamgauge data quality and its misleading impact on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome the high computation time. We used the Soil and Water Assessment Tool (SWAT) program as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals for the period 1970-2006. The use of a large-scale, high-resolution water resources model enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation and provides an overall picture of the temporal and spatial distribution of water resources across the continent. The calibrated model and results provide information support to the European Water
Islands Climatology at Local Scale. Downscaling with CIELO model
Azevedo, Eduardo; Reis, Francisco; Tomé, Ricardo; Rodrigues, Conceição
2016-04-01
Islands with horizontal scales of the order of tens of km, as is the case of the Atlantic islands of Macaronesia, are subscale orographic features for Global Climate Models (GCMs), since the horizontal scales of these models are too coarse to give a detailed representation of the islands' topography. Even Regional Climate Models (RCMs) reveal limitations when they are forced to reproduce the climate of small islands, mainly in the way they flatten and lower the elevation of the islands, reducing the capacity of the model to reproduce important local mechanisms that lead to very deep local climate differentiation. Important local thermodynamic mechanisms, like the Foehn effect or the influence of topography on the radiation balance, have a prominent role in the spatial differentiation of climate. Advective transport of air - and the consequent orographically induced adiabatic cooling - leads to transformations of the state parameters of the air that shape the spatial configuration of the fields of pressure, temperature and humidity. The same mechanism is at the origin of the orographic cloud cover that, besides its direct role as a water source through the reinforcement of precipitation, acts as a filter for direct solar radiation and as a source of long-wave radiation affecting the local energy balance. Also, the saturation (or near-saturation) conditions that these clouds provide constitute a barrier to water vapour diffusion in the mechanisms of evapotranspiration. Topographic factors like slope, aspect and orographic masking also have significant importance in the local energy balance. Therefore, the simulation of the local-scale climate (past, present and future) in these archipelagos requires the use of downscaling techniques to adjust locally the outputs obtained at larger scales. This presentation will discuss and analyse the evolution of the CIELO model (acronym for Clima Insular à Escala LOcal), a statistical/dynamical technique developed at the University of the Azores
From micro-scale 3D simulations to macro-scale model of periodic porous media
Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca
2015-04-01
In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by introducing the Stokes velocity (to account for gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite volume method for the macro-scale model. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are processed to compute the mean concentration (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, distinguishing between those terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, and their description in terms of known micro-scale quantities. Traditionally, many simplifications are introduced to study colloidal transport, such as ultra-simplified geometries that usually account for a single collector. Gradual removal of such hypotheses leads to a
Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...
On Two-Scale Modelling of Heat and Mass Transfer
International Nuclear Information System (INIS)
Vala, J.; Stastnik, S.
2008-01-01
Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.
GA-4 half-scale cask model fabrication
International Nuclear Information System (INIS)
Meyer, R.J.
1995-01-01
Unique fabrication experience was gained during the construction of a half-scale model of the GA-4 Legal Weight Truck Cask. Techniques were developed for forming, welding, and machining XM-19 stainless steel. Noncircular 'rings' of depleted uranium were cast and machined to close tolerances. The noncircular cask body, gamma shield, and cavity liner were produced using a nonconventional approach in which components were first machined to final size and then welded together using a low-distortion electron beam process. Special processes were developed for fabricating the bonded aluminum honeycomb impact limiters. The innovative design of the cask internals required precision deep hole drilling, low-distortion welding, and close tolerance machining. Valuable lessons learned were documented for use in future manufacturing of full-scale prototype and production units
Iso-scaling in a microcanonical multifragmentation model
International Nuclear Information System (INIS)
Raduta, R.; Raduta, H.
2003-01-01
A microcanonical multifragmentation model is used to investigate iso-scaling over a broad range of excitation energies, for several values of the freeze-out volume, and for equilibrated sources with masses between 40 and 200, in both the primary and asymptotic stages of the decay. It was found that the values of the slope parameters α and β depend on the size and excitation energy of the source and are affected by the secondary decay of primary fragments. Iso-scaling was also shown to be affected by finite-size effects. The evolution with temperature and freeze-out volume of the difference between the neutron and proton chemical potentials of two equilibrated nuclear sources having the same size but different isospin values is presented. (authors)
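The slope parameters α and β above enter the standard iso-scaling relation R21(N,Z) = Y2(N,Z)/Y1(N,Z) = C exp(αN + βZ). As a minimal illustration (synthetic yields and hypothetical parameter values, not output of this model), the slopes can be recovered from fragment-yield ratios by a log-linear least-squares fit:

```python
import numpy as np

# Iso-scaling relation between fragment yields of two similar sources:
#   R21(N, Z) = Y2(N, Z) / Y1(N, Z) = C * exp(alpha*N + beta*Z)
# alpha, beta, C below are hypothetical values used only to build test data.
alpha_true, beta_true, C_true = 0.5, -0.4, 1.2

N, Z = np.meshgrid(np.arange(1, 11), np.arange(1, 9))
N, Z = N.ravel().astype(float), Z.ravel().astype(float)
R21 = C_true * np.exp(alpha_true * N + beta_true * Z)

# Recover the slope parameters from ln(R21) = ln(C) + alpha*N + beta*Z.
A = np.column_stack([N, Z, np.ones_like(N)])
(alpha_fit, beta_fit, lnC_fit), *_ = np.linalg.lstsq(A, np.log(R21), rcond=None)
print(alpha_fit, beta_fit)
```

With noise-free synthetic ratios the fit recovers the generating slopes exactly; with real yields the same fit gives the experimental α and β.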
Light moduli in almost no-scale models
International Nuclear Information System (INIS)
Buchmueller, Wilfried; Moeller, Jan; Schmidt, Jonas
2009-09-01
We discuss the stabilization of the compact dimension for a class of five-dimensional orbifold supergravity models. Supersymmetry is broken by the superpotential on a boundary. Classically, the size L of the fifth dimension is undetermined, with or without supersymmetry breaking, and the effective potential is of no-scale type. The size L is fixed by quantum corrections to the Kaehler potential, the Casimir energy and Fayet-Iliopoulos (FI) terms localized at the boundaries. For an FI scale of order M_GUT, as in heterotic string compactifications with anomalous U(1) symmetries, one obtains L ∝ 1/M_GUT. A small mass is predicted for the scalar fluctuation associated with the fifth dimension, m_ρ ∼ m_3/2/(LM). (orig.)
Research on large-scale wind farm modeling
Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng
2017-01-01
Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid affects the power system quite differently from a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. An effective WTG model must be established first. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed model-building process. It then surveys common wind farm modeling methods and points out the problems encountered. Since WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.
Pore-scale modeling of phase change in porous media
Juanes, Ruben; Cueto-Felgueroso, Luis; Fu, Xiaojing
2017-11-01
One of the main open challenges in pore-scale modeling is the direct simulation of flows involving multicomponent mixtures with complex phase behavior. Reservoir fluid mixtures are often described through cubic equations of state, which makes diffuse interface, or phase field theories, particularly appealing as a modeling framework. What is still unclear is whether equation-of-state-driven diffuse-interface models can adequately describe processes where surface tension and wetting phenomena play an important role. Here we present a diffuse interface model of single-component, two-phase flow (a van der Waals fluid) in a porous medium under different wetting conditions. We propose a simplified Darcy-Korteweg model that is appropriate to describe flow in a Hele-Shaw cell or a micromodel, with a gap-averaged velocity. We study the ability of the diffuse-interface model to capture capillary pressure and the dynamics of vaporization/condensation fronts, and show that the model reproduces pressure fluctuations that emerge from abrupt interface displacements (Haines jumps) and from the break-up of wetting films.
A multi-scale adaptive model of residential energy demand
International Nuclear Information System (INIS)
Farzan, Farbod; Jafari, Mohsen A.; Gong, Jie; Farzan, Farnaz; Stryker, Andrew
2015-01-01
Highlights: • We extend an energy demand model to investigate changes in behavioral and usage patterns. • The model is capable of analyzing why demand behaves the way it does. • The model empowers decision makers to investigate DSM strategies and effectiveness. • The model provides means to measure the effect of energy prices on the daily profile. • The model considers the coupling effects of adopting multiple new technologies. - Abstract: In this paper, we extend a previously developed bottom-up energy demand model such that the model can be used to determine changes in behavioral and energy usage patterns of a community when: (i) new load patterns from Plug-in Electric Vehicles (PEV) or other devices are introduced; (ii) new technologies and smart devices are used within premises; and (iii) new Demand Side Management (DSM) strategies, such as price-responsive demand, are implemented. Unlike time series forecasting methods that rely solely on historical data, the model uses only a minimal amount of data at the atomic level for its basic constructs. These basic constructs can be integrated into a household unit or a community model using rules and connectors that are, in principle, flexible and can be altered according to the type of questions that need to be answered. Furthermore, the embedded dynamics of the model work on the basis of: (i) a Markovian stochastic model for simulating human activities, (ii) Bayesian and logistic technology adoption models, and (iii) optimization and rule-based models to respond to price signals without compromising users’ comfort. The proposed model is not intended to replace traditional forecasting models. Instead it provides an analytical framework that can be used at the design stage of new products and communities to evaluate design alternatives. The framework can also be used to answer questions such as why demand behaves the way it does by examining demands at different scales and by playing What-If games. These
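The Markovian activity component in (i) can be sketched as a simple discrete-time chain over occupant states; the states and transition probabilities below are hypothetical placeholders, not values from the model:

```python
import random

# Hypothetical three-state occupant activity model. States and transition
# probabilities are illustrative only, not taken from the paper.
STATES = ["away", "home_active", "asleep"]
P = {
    "away":        {"away": 0.7, "home_active": 0.3, "asleep": 0.0},
    "home_active": {"away": 0.2, "home_active": 0.6, "asleep": 0.2},
    "asleep":      {"away": 0.0, "home_active": 0.3, "asleep": 0.7},
}

def simulate(hours, start="asleep", seed=42):
    """Simulate an hourly activity sequence from the Markov chain."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(hours - 1):
        r, acc = rng.random(), 0.0
        for nxt, p in P[state].items():
            acc += p
            if r < acc:
                state = nxt
                break
        path.append(state)
    return path

day = simulate(24)
print(day[:6])
```

Each simulated activity sequence would then drive appliance and lighting loads at the atomic level, which the household and community constructs aggregate.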
Large Scale Computing for the Modelling of Whole Brain Connectivity
DEFF Research Database (Denmark)
Albers, Kristoffer Jon
organization of the brain at continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference, however, poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for unsupervised data-driven clustering of whole-brain connectivity at full image resolution. We implement high-performance software that allows us to efficiently apply stochastic block modelling with MCMC sampling on large complex networks...
The breaking of Bjorken scaling in the covariant parton model
International Nuclear Information System (INIS)
Polkinghorne, J.C.
1976-01-01
Scale breaking is investigated in terms of a covariant parton model formulation of deep inelastic processes. It is shown that a consistent theory requires that the convergence properties of parton-hadron amplitudes be modified as well as the parton being given form factors. Purely logarithmic violation is possible, and the resulting model has many features in common with asymptotically free gauge theories. Behaviour at large and small ω and fixed q^2 is investigated: νW_2 should increase with q^2 at large ω and decrease with q^2 at small ω. Heuristic arguments are also given which suggest that the model would lead only to logarithmic modifications of dimensional counting results in purely hadronic deep scattering. (Auth.)
Density Functional Theory and Materials Modeling at Atomistic Length Scales
Directory of Open Access Journals (Sweden)
Swapan K. Ghosh
2002-04-01
We discuss the basic concepts of density functional theory (DFT) as applied to materials modeling at the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT the central equation is a one-particle Schrödinger-like Kohn-Sham equation, classical DFT consists of Boltzmann-type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interactions and of classical DFT to mesoscopic modeling of soft condensed matter systems are highlighted.
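The classical-DFT picture described above, a Boltzmann-type distribution of noninteracting particles in a density-dependent effective potential, can be illustrated with a toy self-consistency loop; the external potential, coupling constant and mixing scheme are illustrative assumptions:

```python
import math

# Classical DFT sketch: particles on a 1-D grid feel an effective potential
# V_eff = V_ext + g*rho (a hypothetical mean-field closure), and the density
# is the Boltzmann distribution in that field, iterated to self-consistency.
beta, g, n_total = 1.0, 0.5, 1.0
xs = [i * 0.05 for i in range(-40, 41)]          # 1-D grid, x = 0 at index 40
v_ext = [0.5 * x * x for x in xs]                # harmonic external potential

rho = [1.0 / len(xs)] * len(xs)                  # uniform initial guess
for _ in range(200):
    v_eff = [v + g * r for v, r in zip(v_ext, rho)]    # density-dependent field
    boltz = [math.exp(-beta * v) for v in v_eff]       # Boltzmann weights
    norm = sum(boltz)
    new_rho = [n_total * b / norm for b in boltz]
    # simple linear mixing for stable convergence
    rho = [0.5 * r + 0.5 * nr for r, nr in zip(rho, new_rho)]

print(max(rho))
```

The converged density peaks at the minimum of the external potential, as expected; in a real classical-DFT calculation the mean-field term would be replaced by a functional built from direct correlation functions.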
Modeling and simulation of large scale stirred tank
Neuville, John R.
The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process through the construction of numerical models that resemble the geometry of the process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and their results were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected earlier from the pilot plants. Mixing is commonly used throughout industry in a variety of ways: to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing vessel is a flat-bottomed cylindrical tank with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft carries two impellers: a radial blade and a hydrofoil blade. The hydrofoil circulates the mixture between the top and bottom regions of the tank, while the radial blade sweeps the bottom of the tank and pushes the fluid outward in the radial direction. The full-scale vessel contains about 9500 gallons of slurry whose flow behavior is characterized as a Bingham plastic. Particles in the mixture are abrasive and cause excessive erosion of internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the
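The Bingham-plastic rheology attributed to the slurry can be sketched as follows; the yield stress and plastic viscosity values are hypothetical, not DWPF data:

```python
def bingham_stress(gamma_dot, tau_y, mu_p):
    """Shear stress of a Bingham plastic: tau = tau_y + mu_p * gamma_dot.
    The material flows only once the yield stress tau_y is exceeded."""
    if gamma_dot == 0.0:
        return 0.0  # no flow below yield; stress is indeterminate up to tau_y
    return tau_y + mu_p * gamma_dot

def apparent_viscosity(gamma_dot, tau_y, mu_p):
    """Effective viscosity seen by a Newtonian-style CFD solver."""
    return bingham_stress(gamma_dot, tau_y, mu_p) / gamma_dot

# Hypothetical slurry parameters (Pa and Pa·s), for illustration only.
tau_y, mu_p = 2.5, 0.01
for rate in (1.0, 10.0, 100.0):  # shear rates in 1/s
    print(rate, apparent_viscosity(rate, tau_y, mu_p))
```

The apparent viscosity falls with shear rate, which is why regions far from the impellers (low shear) can stagnate while the swept zones flow freely.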
Traffic assignment models in large-scale applications
DEFF Research Database (Denmark)
Rasmussen, Thomas Kjær
the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times...... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real......, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo...
Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling
Huber, I.; Archontoulis, S.
2017-12-01
In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge; regional-scale assessments are highly needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that gain the most environmentally from biochar application, as well as areas where our model predicts a notable yield increase due to the addition of biochar. We present the modifications in both the APSIM biochar and pSIMS components that were necessary to facilitate these large-scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for creating its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional-scale simulation analysis is in progress. Preliminary results showed that the model predicts that high-quality soils (particularly those common to Iowa cropping systems) do not receive much, if any, production benefit from biochar. However, soils with low soil organic matter ( 0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific, increasing in some areas and decreasing in others due to biochar application. In contrast, we found increases in soil organic carbon and plant available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%), and also dependent on biochar
Gomez, Rapson; Watson, Shaun D.
2017-01-01
For the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) together, this study examined support for a bifactor model, and also the internal consistency reliability and external validity of the factors in this model. Participants (N = 526) were adults from the general community who completed the SPS and SIAS. Confirmatory factor analysis (CFA) of their ratings indicated good support for the bifactor model. For this model, the loadings for all but six items were higher o...
Hydrogen combustion modelling in large-scale geometries
International Nuclear Information System (INIS)
Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.
2014-01-01
Hydrogen risk mitigation based on catalytic recombiners cannot exclude flammable clouds from being formed during the course of a severe accident in a Nuclear Power Plant. Consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take the hydrogen explosion phenomena into account in risk management. Thus combustion modelling in large-scale geometries is one of the remaining severe accident safety issues. At present there exists no combustion model which can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore the major attention in model development has to be paid to the adoption of existing approaches or the creation of new ones capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussions and conclusions. (authors)
CHOI, S.; Shi, Y.; Ni, X.; Simard, M.; Myneni, R. B.
2013-12-01
Sparseness in in-situ observations has precluded the spatially explicit and accurate mapping of forest biomass. The need for large-scale maps has raised various approaches implementing conjugations between forest biomass and geospatial predictors such as climate, forest type, soil property, and topography. Despite the improved modeling techniques (e.g., machine learning and spatial statistics), a common limitation is that biophysical mechanisms governing tree growth are neglected in these black-box type models. The absence of a priori knowledge may lead to false interpretation of modeled results or unexplainable shifts in outputs due to the inconsistent training samples or study sites. Here, we present a gray-box approach combining known biophysical processes and geospatial predictors through parametric optimizations (inversion of reference measures). Total aboveground biomass in forest stands is estimated by incorporating the Forest Inventory and Analysis (FIA) and Parameter-elevation Regressions on Independent Slopes Model (PRISM). Two main premises of this research are: (a) The Allometric Scaling and Resource Limitations (ASRL) theory can provide a relationship between tree geometry and local resource availability constrained by environmental conditions; and (b) The zeroth order theory (size-frequency distribution) can expand individual tree allometry into total aboveground biomass at the forest stand level. In addition to the FIA estimates, two reference maps from the National Biomass and Carbon Dataset (NBCD) and U.S. Forest Service (USFS) were produced to evaluate the model. This research focuses on a site-scale test of the biomass model to explore the robustness of predictors, and to potentially improve models using additional geospatial predictors such as climatic variables, vegetation indices, soil properties, and lidar-/radar-derived altimetry products (or existing forest canopy height maps). As results, the optimized ASRL estimates satisfactorily
Evaluation of a distributed catchment scale water balance model
Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.
1993-01-01
The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil-driven and atmosphere-driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that the water table dynamics predicted by the conceptual model are close to the observations in a shallow water well, and therefore that a linear relationship between a topographic index and the local water table depth is a reasonable assumption for catchment-scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm of the unsaturated zone, and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to differ from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
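The topographic index used by conceptual models of this kind to predict saturation excess runoff is, in TOPMODEL-style formulations, λ = ln(a/tan β), with a the specific upslope contributing area and β the local slope. A minimal sketch with hypothetical cell values:

```python
import math

def topographic_index(upslope_area, slope_rad, cell_width):
    """TOPMODEL-style wetness index ln(a / tan(beta)), where a is the
    specific upslope area (contributing area per unit contour length)."""
    a = upslope_area / cell_width
    return math.log(a / math.tan(slope_rad))

# Hypothetical grid cells: (contributing area in m^2, local slope in radians).
cells = [(500.0, 0.20), (5000.0, 0.10), (50000.0, 0.02)]
for area, slope in cells:
    print(round(topographic_index(area, slope, cell_width=10.0), 2))
```

Cells with large contributing areas and gentle slopes get the highest index values and are therefore predicted to saturate, and produce saturation excess runoff, first.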
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates, which allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
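For reference, the Fickian advection-dispersion model benchmarked above admits the classical Ogata-Banks solution for a continuous injection at the column inlet; the column parameters below are hypothetical, not the paper's experimental values:

```python
import math

def ade_breakthrough(x, t, v, D):
    """Ogata-Banks solution of the 1-D advection-dispersion equation for a
    continuous injection at x = 0: returns C/C0 at position x and time t,
    for pore velocity v and dispersion coefficient D."""
    s = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / s)
    term2 = math.exp(v * x / D) * math.erfc((x + v * t) / s)
    return 0.5 * (term1 + term2)

# Hypothetical column: observation point 30 cm, v in cm/min, D in cm^2/min.
x, v, D = 30.0, 1.0, 0.5
for t in (10.0, 30.0, 60.0):
    print(t, round(ade_breakthrough(x, t, v, D), 4))
```

Note the infinite-front-speed artifact the abstract mentions: C/C0 is strictly positive for any t > 0, however far downstream, whereas the purely advective model keeps the early-time front ballistic.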
Energy Technology Data Exchange (ETDEWEB)
Vanoost, D., E-mail: dries.vanoost@kuleuven-kulak.be [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); Steentjes, S. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany); Peuteman, J. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Electrical Energy and Computer Architecture, Heverlee B-3001 (Belgium); Gielen, G. [KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); De Gersem, H. [KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); TU Darmstadt, Institut für Theorie Elektromagnetischer Felder, Darmstadt D-64289 (Germany); Pissoort, D. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); Hameyer, K. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany)
2016-09-15
This paper proposes a multi-scale energy-based material model for poly-crystalline materials. Describing the behaviour of poly-crystalline materials at three spatial scales of dominating physical mechanisms allows accounting for the heterogeneity and multi-axiality of the material behaviour. The three spatial scales are the poly-crystalline, grain and domain scales. Together with appropriate scale transition rules and models for local magnetic behaviour at each scale, the model is able to describe the magneto-elastic behaviour (magnetostriction and hysteresis) at the macroscale, although the data input is merely a set of physical constants. Introducing a new energy density function that describes the demagnetisation field, the anhysteretic multi-scale energy-based material model is extended to the hysteretic case. The hysteresis behaviour is included at the domain scale according to the micro-magnetic domain theory while preserving a valid description of the magneto-elastic coupling. The model is verified using existing measurement data for different mechanical stress levels. - Highlights: • A ferromagnetic hysteretic energy-based multi-scale material model is proposed. • The hysteresis is obtained by a newly proposed hysteresis energy density function. • The model avoids tedious parameter identification.
Device Scale Modeling of Solvent Absorption using MFIX-TFM
Energy Technology Data Exchange (ETDEWEB)
Carney, Janine E. [National Energy Technology Lab. (NETL), Albany, OR (United States); Finn, Justin R. [National Energy Technology Lab. (NETL), Albany, OR (United States); Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States)
2016-10-01
Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane), and fossil fuels account for a large majority of global CO2 emissions. That said, fossil fuels will continue to play a significant role in power generation for the foreseeable future. The extent to which CO2 is emitted needs to be reduced; carbon capture and sequestration are therefore also necessary actions to tackle climate change. Different approaches exist for CO2 capture, including post-combustion and pre-combustion technologies, oxy-fuel combustion and/or chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology. To apply CO2 capture technologies at commercial scale, the availability, maturity and potential for scalability of the technology need to be considered. Solvent absorption is a proven technology, but not at the scale needed by a typical power plant. The scale-up, scale-down and design of laboratory and commercial packed-bed reactors depend heavily on specific knowledge of the two-phase pressure drop, liquid holdup, wetting efficiency and mass-transfer efficiency as functions of operating conditions. Simple scaling rules often fail to provide a proper design. Conventional reactor design modeling approaches generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity within these approaches, none of these models resolve the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally, design would be aided by the development of predictive models based on a truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first
Modelling biological invasions: Individual to population scales at interfaces
Belmonte-Beitia, J.
2013-10-01
Extracting the population level behaviour of biological systems from that of the individual is critical in understanding dynamics across multiple scales and thus has been the subject of numerous investigations. Here, the influence of spatial heterogeneity in such contexts is explored for interfaces with a separation of the length scales characterising the individual and the interface, a situation that can arise in applications involving cellular modelling. As an illustrative example, we consider cell movement between white and grey matter in the brain which may be relevant in considering the invasive dynamics of glioma. We show that while one can safely neglect intrinsic noise, at least when considering glioma cell invasion, profound differences in population behaviours emerge in the presence of interfaces with only subtle alterations in the dynamics at the individual level. Transport driven by local cell sensing generates predictions of cell accumulations along interfaces where cell motility changes. This behaviour is not predicted with the commonly used Fickian diffusion transport model, but can be extracted from preliminary observations of specific cell lines in recent, novel, cryo-imaging. Consequently, these findings suggest a need to consider the impact of individual behaviour, spatial heterogeneity and especially interfaces in experimental and modelling frameworks of cellular dynamics, for instance in the characterisation of glioma cell motility. © 2013 Elsevier Ltd.
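The accumulation of cells at a motility interface under local sensing can be reproduced with a toy position-jump walk whose jump probability depends on the motility at the walker's current site, giving a steady state proportional to 1/motility; the domain size and motility values are illustrative assumptions, not parameters from the study:

```python
import random

# Position-jump walk with "local sensing": each walker jumps with the
# probability given by the motility at its current site. The steady state
# scales like 1/motility, so walkers accumulate on the low-motility side of
# the interface -- a Fickian model with the same D(x) would stay uniform.
L = 40
motility = [0.9 if i < L // 2 else 0.3 for i in range(L)]  # interface at L/2

rng = random.Random(0)
positions = [rng.randrange(L) for _ in range(400)]
for _ in range(5000):
    for k, x in enumerate(positions):
        if rng.random() < motility[x]:
            x += 1 if rng.random() < 0.5 else -1
            positions[k] = min(max(x, 0), L - 1)  # reflecting boundaries

left = sum(1 for x in positions if x < L // 2)
print(left, len(positions) - left)  # most walkers end in the slow half
```

This is the individual-level mechanism behind the population-level prediction: only transport driven by local sensing, not the Fickian operator, produces the interface accumulation.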
9 m side drop test of scale model
International Nuclear Information System (INIS)
Ku, Jeong-Hoe; Chung, Seong-Hwan; Lee, Ju-Chan; Seo, Ki-Seog
1993-01-01
A type B(U) shipping cask had been developed in KAERI for transporting PWR spent fuel. Since the cask is to transport spent PWR fuel, it must be designed to meet all of the structural requirements specified in domestic packaging regulations and IAEA safety series No.6. This paper describes the side drop testing of a one - third scale model cask. The crush and deformations of the shock absorbing covers directly control the deceleration experiences of the cask during the 9 m side drop impact. The shock absorbing covers greatly mitigated the inertia forces of the cask body due to the side drop impact. Compared with the side drop test and finite element analysis, it was verified that the 1/3 scale model cask maintain its structural integrity of the model cask under the side drop impact. The test and analysis results could be used as the basic data to evaluate the structural integrity of the real cask. (J.P.N.)
Modelling biological invasions: Individual to population scales at interfaces
Belmonte-Beitia, J.; Woolley, T.E.; Scott, J.G.; Maini, P.K.; Gaffney, E.A.
2013-01-01
Extracting the population level behaviour of biological systems from that of the individual is critical in understanding dynamics across multiple scales and thus has been the subject of numerous investigations. Here, the influence of spatial heterogeneity in such contexts is explored for interfaces with a separation of the length scales characterising the individual and the interface, a situation that can arise in applications involving cellular modelling. As an illustrative example, we consider cell movement between white and grey matter in the brain which may be relevant in considering the invasive dynamics of glioma. We show that while one can safely neglect intrinsic noise, at least when considering glioma cell invasion, profound differences in population behaviours emerge in the presence of interfaces with only subtle alterations in the dynamics at the individual level. Transport driven by local cell sensing generates predictions of cell accumulations along interfaces where cell motility changes. This behaviour is not predicted with the commonly used Fickian diffusion transport model, but can be extracted from preliminary observations of specific cell lines in recent, novel, cryo-imaging. Consequently, these findings suggest a need to consider the impact of individual behaviour, spatial heterogeneity and especially interfaces in experimental and modelling frameworks of cellular dynamics, for instance in the characterisation of glioma cell motility. © 2013 Elsevier Ltd.
Evaluation of deconvolution modelling applied to numerical combustion
Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît
2018-01-01
A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars: by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of the filtered species balance equations, such as the filtered reaction rate. Because the problem is ill-posed in the mathematical sense, it is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first relies on subgrid scale interpolation of deconvolved profiles, and the second uses parametric functions to describe small scales. The tests analyse the ability of each method to capture the filtered flame chemical structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
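The Van Cittert iteration underlying the approximate deconvolution method is compact enough to sketch on a filtered one-dimensional profile. This is a hedged illustration with a Gaussian filter and a tanh "flame-like" profile of our own choosing, not the paper's configuration: starting from the filtered field, each iteration adds back the residual between the filtered field and the re-filtered estimate, φ_{k+1} = φ_k + (c̄ − G∗φ_k).

```python
import numpy as np

def gauss_filter(f, sigma, dx):
    """Discrete Gaussian LES-like filter, normalised to handle the boundaries."""
    r = np.arange(-int(4 * sigma / dx), int(4 * sigma / dx) + 1) * dx
    kern = np.exp(-0.5 * (r / sigma) ** 2)
    return (np.convolve(f, kern, mode="same")
            / np.convolve(np.ones_like(f), kern, mode="same"))

def van_cittert(fbar, filt, iters=20):
    """Approximate deconvolution: phi_{k+1} = phi_k + (fbar - G*phi_k)."""
    phi = fbar.copy()
    for _ in range(iters):
        phi += fbar - filt(phi)
    return phi

n, dx = 400, 1.0 / 400
x = np.arange(n) * dx
c = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.02))    # sharp flame-like profile
filt = lambda f: gauss_filter(f, sigma=0.04, dx=dx)
cbar = filt(c)                                  # what the LES would resolve
crec = van_cittert(cbar, filt)                  # deconvolved estimate

err_filtered = np.sqrt(np.mean((cbar - c) ** 2))
err_deconv = np.sqrt(np.mean((crec - c) ** 2))
```

With a noise-free filter of modest width the iteration recovers much of the front; when the filter is much wider than the flame thickness, or when noise is present, the inversion amplifies perturbations and needs regularisation, which is precisely the sensitivity the abstract describes.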
Challenges of Modeling Flood Risk at Large Scales
Guin, J.; Simic, M.; Rowe, J.
2009-04-01
Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimates that form the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components, so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing
Physics and Dynamics Coupling Across Scales in the Next Generation CESM. Final Report
Energy Technology Data Exchange (ETDEWEB)
Bacmeister, Julio T. [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States)
2015-06-12
This project examines physics/dynamics coupling, that is, exchange of meteorological profiles and tendencies between an atmospheric model’s dynamical core and its various physics parameterizations. Most model physics parameterizations seek to represent processes that occur on scales smaller than the smallest scale resolved by the dynamical core. As a consequence a key conceptual aspect of parameterizations is an assumption about the subgrid variability of quantities such as temperature, humidity or vertical wind. Most existing parameterizations of processes such as turbulence, convection, cloud, and gravity wave drag make relatively ad hoc assumptions about this variability and are forced to introduce empirical parameters, i.e., “tuning knobs” to obtain realistic simulations. These knobs make systematic dependences on model grid size difficult to quantify.
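As a concrete, deliberately simple example of the kind of subgrid variability assumption discussed above, many cloud schemes posit a distribution for total water within a grid cell and diagnose cloud fraction as the probability of exceeding saturation. The following assumed-Gaussian sketch is a generic illustration (not the scheme of any particular CESM parameterization); the subgrid standard deviation plays exactly the role of the "tuning knob" the report mentions:

```python
import math

def cloud_fraction(q_mean, q_sat, sigma):
    """Assumed-Gaussian subgrid PDF of total water q_t with grid-mean q_mean
    and subgrid standard deviation sigma: cloud fraction = P(q_t > q_sat)."""
    return 0.5 * math.erfc((q_sat - q_mean) / (math.sqrt(2.0) * sigma))

# a cell whose mean is just below saturation is still partly cloudy
cf = cloud_fraction(q_mean=8.0, q_sat=9.0, sigma=1.0)   # hypothetical g/kg values
```

The diagnosed cloud fraction depends as strongly on the assumed variance as on the resolved mean, which is why changing grid size (and hence the true subgrid variance) without changing sigma produces the resolution-dependent behaviour the report sets out to quantify.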
Scale Adaptive Simulation Model for the Darrieus Wind Turbine
DEFF Research Database (Denmark)
Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.
2016-01-01
Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads…
Enhanced learning through scale models and see-thru visualization
International Nuclear Information System (INIS)
Kelley, M.D.
1987-01-01
The development of PowerSafety International's See-Thru Power Plant has provided the nuclear industry with a bridge that can span the gap between the part-task simulator and the full-scope, high-fidelity plant simulator. The principle behind the See-Thru Power Plant is to bring sensory experience into nuclear training programs. The See-Thru Power Plant is a scaled-down, fully functioning model of a commercial nuclear power plant, equipped with a primary system, secondary system, and control console. The major components are constructed of glass, thus permitting visual conceptualization of a working nuclear power plant.
LBM estimation of thermal conductivity in meso-scale modelling
International Nuclear Information System (INIS)
Grucelski, A
2016-01-01
Recently there has been growing engineering interest in more rigorous prediction of effective transport coefficients for multicomponent, geometrically complex materials. We present the main assumptions and constituents of a meso-scale model for simulating coal or biomass devolatilisation with the lattice Boltzmann method. Estimated values of the thermal conductivity coefficient of coal (solids) and of the pyrolytic gas and air matrix are presented for a non-steady state, accounting for chemical reactions in the fluid flow and heat transfer. (paper)
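The link between the LBM relaxation time and the macroscopic transport coefficient can be shown with a minimal sketch: a one-dimensional two-velocity (D1Q2) BGK scheme for pure diffusion, far simpler than the paper's model. In D1Q2 lattice units the standard Chapman-Enskog result is D = c_s²(τ − ½) with c_s² = 1, so a sinusoidal temperature perturbation should decay at exactly the rate the theory predicts.

```python
import numpy as np

def measure_diffusivity(n=200, tau=1.0, steps=400):
    """D1Q2 lattice Boltzmann (BGK) scheme for the diffusion equation on a
    periodic domain; returns the diffusivity measured from the decay of a
    single sine mode, to compare against the theoretical D = tau - 1/2."""
    x = np.arange(n)
    k = 2.0 * np.pi / n
    T0 = 1.0 + 0.5 * np.sin(k * x)
    f = np.array([0.5 * T0, 0.5 * T0])       # populations for c = +1 and c = -1
    for _ in range(steps):
        T = f[0] + f[1]
        feq = 0.5 * T                        # equilibrium: equal split
        f[0] -= (f[0] - feq) / tau           # collide
        f[1] -= (f[1] - feq) / tau
        f[0] = np.roll(f[0], 1)              # stream right
        f[1] = np.roll(f[1], -1)             # stream left
    T = f[0] + f[1]
    amp = 2.0 / n * np.sum((T - 1.0) * np.sin(k * x))   # surviving amplitude
    return -np.log(amp / 0.5) / (k * k * steps)

d_measured = measure_diffusivity(tau=1.0)    # theory: 1.0 - 0.5 = 0.5
```

The same collide-and-stream skeleton, with more velocities and coupled populations, underlies the conjugate heat transfer computations in heterogeneous solid/gas matrices described in the abstract.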
Large-scale modeling of rain fields from a rain cell deterministic model
FéRal, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, JoëL.; Cornet, FréDéRic; Leconte, Katia
2006-04-01
A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
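The Gaussian-field-to-binary-field step described above can be sketched in a few lines. This is an isotropic-covariance toy version of the construction: the paper uses an anisotropic covariance and an occupation rate fitted to ARAMIS radar observations, whereas the correlation length and rain fraction below are made-up values. White noise is smoothed in Fourier space to impose spatial correlation, then thresholded at the quantile that reproduces the prescribed large-scale rain occupation rate.

```python
import numpy as np

def binary_rain_field(n=256, corr=0.05, rain_fraction=0.15, seed=0):
    """Correlated Gaussian field built by Gaussian low-pass filtering of white
    noise in Fourier space, thresholded so that a fraction `rain_fraction` of
    the area is raining. `corr` is the smoothing length as a fraction of the
    domain size."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)[:, None]              # cycles per pixel
    ky = np.fft.fftfreq(n)[None, :]
    low_pass = np.exp(-2.0 * (np.pi * corr * n) ** 2 * (kx ** 2 + ky ** 2))
    field = np.fft.ifft2(np.fft.fft2(white) * low_pass).real
    threshold = np.quantile(field, 1.0 - rain_fraction)
    return field > threshold

rain = binary_rain_field()
occupancy = rain.mean()                          # should match rain_fraction
```

Thresholding at a quantile guarantees the large-scale occupation rate by construction; the anisotropy of frontal rain bands would enter through direction-dependent smoothing lengths in the low-pass filter.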
International Nuclear Information System (INIS)
ColIn, Pedro; Vazquez-Semadeni, Enrique; Avila-Reese, Vladimir; Valenzuela, Octavio; Ceverino, Daniel
2010-01-01
We present numerical simulations aimed at exploring the effects of varying the sub-grid physics parameters on the evolution and the properties of the galaxy formed in a low-mass dark matter halo (∼7 × 10^10 h^-1 M_sun at redshift z = 0). The simulations are run within a cosmological setting with a nominal resolution of 218 pc comoving and are stopped at z = 0.43. For simulations that cannot resolve individual molecular clouds, we propose the criterion that the threshold density for star formation, n_SF, should be chosen such that the column density of the star-forming cells equals the threshold value for molecule formation, N ∼ 10^21 cm^-2, or ∼8 M_sun pc^-2. In all of our simulations, an extended old/intermediate-age stellar halo and a more compact younger stellar disk are formed, and in most cases, the halo's specific angular momentum is slightly larger than that of the galaxy, and sensitive to the SF/feedback parameters. We found that a non-negligible fraction of the halo stars are formed in situ in a spheroidal distribution. Changes in the sub-grid physics parameters affect significantly and in a complex way the evolution and properties of the galaxy: (1) lower threshold densities n_SF produce larger stellar effective radii R_e, less peaked circular velocity curves V_c(R), and greater amounts of low-density and hot gas in the disk mid-plane; (2) when stellar feedback is modeled by temporarily switching off radiative cooling in the star-forming regions, R_e increases (by a factor of ∼2 in our particular model), the circular velocity curve becomes flatter, and a complex multi-phase gaseous disk structure develops; (3) a more efficient local conversion of gas mass to stars, measured by a stellar particle mass distribution biased toward larger values, increases the strength of the feedback energy injection-driving outflows and inducing burstier SF histories; (4) if feedback is too strong, gas loss by galactic outflows-which are easier to produce in low
Photorealistic large-scale urban city model reconstruction.
Poullis, Charalambos; You, Suya
2009-01-01
The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming and largely manual. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal primitive and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures which, unlike existing techniques, can recover missing or occluded texture information by integrating information captured from different optical sensors (ground, aerial, and satellite).
International Nuclear Information System (INIS)
Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.
1983-01-01
This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
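The "false minima" failure mode is easy to demonstrate with a toy one-dimensional energy (a generic illustration of local minimization, not the energy function or sampling scheme used in the paper): naive gradient descent converges to whichever basin it starts in, so a partially incorrect starting structure near the wrong basin is degraded rather than improved.

```python
def energy(x):
    """Tilted double well: global minimum near x = -1, false minimum near x = +1."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    """Analytic derivative of the toy energy."""
    return 4.0 * x * (x * x - 1.0) + 0.3

def descend(x, lr=0.01, steps=5000):
    """Plain gradient descent: strictly downhill, no barrier crossing."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_false = descend(0.8)    # starts in the wrong basin and stays there
x_true = descend(-0.5)    # starts near the global basin and finds it
```

Escaping the false basin requires either a more accurate energy surface (so the false minimum disappears) or a search strategy that can cross barriers, which mirrors the paper's combination of improved energy functions with improved conformational sampling.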
A Dynamic Pore-Scale Model of Imbibition
DEFF Research Database (Denmark)
Mogensen, Kristian; Stenby, Erling Halfdan
1998-01-01
We present a dynamic pore-scale network model of imbibition, capable of calculating residual oil saturation for any given capillary number, viscosity ratio, contact angle and aspect ratio. Our goal is not to predict the outcome of core floods, but rather to perform a sensitivity analysis of the above-mentioned parameters, except the viscosity ratio. We find that contact angle, aspect ratio and capillary number all have a significant influence on the competition between piston-like advance, leading to high recovery, and snap-off, causing oil entrapment. Due to enormous CPU-time requirements we … been entirely inhibited, in agreement with results obtained by Blunt using a quasi-static model. For higher aspect ratios, the effect of rate and contact angle is more pronounced. Many core floods are conducted at capillary numbers in the range 10^-7 to 10^-6. We believe that the excellent recoveries
Uncertainty Quantification for Large-Scale Ice Sheet Modeling
Energy Technology Data Exchange (ETDEWEB)
Ghattas, Omar [Univ. of Texas, Austin, TX (United States)
2016-02-05
This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.
Models for inflation with a low supersymmetry-breaking scale
International Nuclear Information System (INIS)
Binetruy, P.; California Univ., Santa Barbara; Mahajan, S.; California Univ., Berkeley
1986-01-01
We present models where the same scalar field is responsible for inflation and for the breaking of supersymmetry. The scale of supersymmetry breaking is related to the slope of the potential in the plateau region traversed by the scalar field during the slow rollover, and the gravitino mass can therefore be kept as small as M_W, the mass of the weak gauge boson. We show that such a result is stable under radiative corrections. We describe the inflationary scenario corresponding to the simplest of these models and show that no major problem arises, except for a violation of the thermal constraint (stabilization of the field in the plateau region at high temperature). We discuss the possibility of introducing a second scalar field to satisfy this constraint. (orig.)
Regional Scale Modelling for Exploring Energy Strategies for Africa
International Nuclear Information System (INIS)
Welsch, M.
2015-01-01
KTH Royal Institute of Technology was founded in 1827 and is the largest technical university in Sweden, with five campuses and around 15,000 students. KTH-dESA combines outstanding knowledge in the field of energy systems analysis, as demonstrated by successful collaborations with many (UN) organizations. Regional scale modelling for exploring energy strategies for Africa includes: assessing renewable energy potentials; analysing investment strategies; assessing climate resilience; comparing electrification options; providing web-based decision support; and quantifying energy access. It is concluded that strategies are required to ensure a robust and flexible energy system (-> no-regret choices); that capacity investments should be in line with national and regional strategies; that climate change is important to consider, as it may strongly influence the energy flows in a region; and that long-term models can help identify robust energy investment strategies and pathways, and can help assess future markets and the profitability of individual projects.
Scale modeling flow-induced vibrations of reactor components
International Nuclear Information System (INIS)
Mulcahy, T.M.
1982-06-01
Similitude relationships currently employed in the design of flow-induced vibration scale-model tests of nuclear reactor components are reviewed. Emphasis is given to understanding the origins of the similitude parameters as a basis for discussion of the inevitable distortions which occur in design verification testing of entire reactor systems and in feature testing of individual component designs for the existence of detrimental flow-induced vibration mechanisms. Distortions of similitude parameters made in current test practice are enumerated and selected example tests are described. Also, limitations in the use of specific distortions in model designs are evaluated based on the current understanding of flow-induced vibration mechanisms and structural response
Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.
2017-12-01
Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km² in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.
Multi-scale modelling for HEDP experiments on Orion
Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.
2016-05-01
The Orion laser at AWE couples high energy long-pulse lasers with high intensity short-pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge, the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high intensity lasers. We describe the approach taken at AWE; to integrate a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.
Parameter study on dynamic behavior of ITER tokamak scaled model
International Nuclear Information System (INIS)
Nakahira, Masataka; Takeda, Nobukazu
2004-12-01
This report summarizes a study on the dynamic behavior of the ITER tokamak scaled model, based on a parametric analysis of base plate thickness, aimed at finding a reasonable solution that gives sufficient rigidity without affecting the dynamic behavior. For this purpose, modal analyses were performed changing the base plate thickness from the present design of 55 mm to 100 mm, 150 mm and 190 mm. Using these results, a modification plan for the plate thickness was studied. It was found that a thickness of 150 mm brings the first natural frequency to about 90% of the ideal rigid case. A modification study was then performed to find an adequate plate thickness. Considering material availability, transportation and weldability, it was found that a thickness of 300 mm would be the limit. The analysis of the 300 mm case showed a 97% fit of the first natural frequency to the ideal rigid case. It was, however, found that the bolt length was too long and introduced an additional twisting mode. As a result, it was concluded that a base plate thickness of 150 mm or 190 mm gives sufficient rigidity for the dynamic behavior of the scaled model. (author)
Physically representative atomistic modeling of atomic-scale friction
Dong, Yalin
Nanotribology is a research field studying the friction, adhesion, wear and lubrication occurring between two sliding interfaces at the nanoscale. This study is motivated by the demand for miniaturized mechanical components in Micro Electro Mechanical Systems (MEMS), improved durability in magnetic storage systems, and other industrial applications. Overcoming tribological failure and finding ways to control friction at small scales have become keys to commercializing MEMS with sliding components, as well as to stimulating the technological innovation associated with the development of MEMS. In addition to the industrial applications, such research is also scientifically fascinating because it opens a door to understanding macroscopic friction from the atomic level up, and therefore serves as a bridge between science and engineering. This thesis focuses on solid/solid atomic friction and its associated energy dissipation through theoretical analysis, atomistic simulation, transition state theory, and close collaboration with experimentalists. Reduced-order models have many advantages owing to their simplicity and their capacity to simulate long-time events. We apply Prandtl-Tomlinson models and their extensions to interpret dry atomic-scale friction. We begin with the fundamental equations and build on them step by step, from the simple quasistatic one-spring, one-mass model for predicting transitions between friction regimes to the two-dimensional and multi-atom models for describing the effect of contact area. Theoretical analysis, numerical implementation, and predicted physical phenomena are all discussed. In the process, we demonstrate the significant potential for this approach to yield new fundamental understanding of atomic-scale friction. Atomistic modeling can never be overemphasized in the investigation of atomic friction, in which each single atom could play a significant role but is hard to capture experimentally. In atomic friction, the
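The one-spring, one-mass Prandtl-Tomlinson picture referred to above can be sketched in its overdamped form (parameter values are illustrative, in reduced units): a tip in a sinusoidal surface potential of amplitude U0 and period a is dragged by a spring of stiffness k whose support moves at speed v, and the time-averaged spring force is the friction force. For η = 4π²U0/(k a²) > 1 the tip sticks and slips; for η < 1 it slides smoothly with nearly vanishing friction.

```python
import numpy as np

def mean_friction(k=1.0, a=1.0, u0=0.5, v=0.01, gamma=1.0, dt=0.01, steps=40000):
    """Overdamped 1-D Prandtl-Tomlinson model integrated with explicit Euler:
    gamma * dx/dt = k*(v*t - x) - (2*pi*u0/a) * sin(2*pi*x/a).
    Returns the lateral spring force averaged after the initial transient."""
    f_c = 2.0 * np.pi * u0 / a            # maximum force from the corrugation
    x = 0.0
    spring_force = np.empty(steps)
    for i in range(steps):
        support = v * i * dt              # position of the driving stage
        spring_force[i] = k * (support - x)
        x += dt / gamma * (spring_force[i] - f_c * np.sin(2.0 * np.pi * x / a))
    return spring_force[steps // 2:].mean()

f_stick_slip = mean_friction(u0=0.5)      # eta ~ 19.7: strong stick-slip
f_smooth = mean_friction(u0=0.01)         # eta ~ 0.4: near-frictionless sliding
```

Plotting `spring_force` for the first case shows the characteristic sawtooth of stick-slip; the second case, with a weak corrugation, illustrates the superlubric regime where the mean force collapses toward the viscous contribution.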
Correction of Excessive Precipitation Over Steep and High Mountains in a General Circulation Model
Chao, Winston C.
2012-01-01
Excessive precipitation over steep and high mountains (EPSM) is a well-known problem in GCMs and meso-scale models. This problem impairs simulation and data assimilation products. Among the possible causes investigated in this study, we found that the most important one, by far, is a missing upward transport of heat out of the boundary layer due to the vertical circulations forced by the daytime upslope winds, which are driven by the heated boundary layer on subgrid-scale slopes. These upslope winds are associated with large subgrid-scale topographic variation, which is found over steep and high mountains. Without such subgrid-scale heat ventilation, the resolvable-scale upslope flow in the boundary layer generated by surface sensible heat flux along the mountain slopes is excessive. Such an excessive resolvable-scale upslope flow combined with the high moisture content in the boundary layer results in excessive moisture transport toward mountaintops, which in turn gives rise to EPSM. Other possible causes of EPSM that we have investigated include 1) a poorly designed horizontal moisture flux in the terrain-following coordinates, 2) the condition for cumulus convection being too easily satisfied at mountaintops, 3) the presence of conditional instability of the computational kind, and 4) the absence of blocked-flow drag. These are all minor or inconsequential. We have parameterized the ventilation effects of the subgrid-scale heated-slope-induced vertical circulation (SHVC) by removing heat from the boundary layer and depositing it in layers higher up when the topographic variance exceeds a critical value. Test results using NASA/Goddard's GEOS-S GCM have shown that this largely solves the EPSM problem.
Lithospheric-scale centrifuge models of pull-apart basins
Corti, Giacomo; Dooley, Tim P.
2015-11-01
We present here the results of the first lithospheric-scale centrifuge models of pull-apart basins. The experiments simulate relative displacement of two lithospheric blocks along two offset master faults, with the presence of a weak zone in the offset area localising deformation during strike-slip displacement. Reproducing the entire lithosphere-asthenosphere system provides boundary conditions that are more realistic than the horizontal detachment in traditional 1 g experiments and thus provide a better approximation of the dynamic evolution of natural pull-apart basins. Model results show that local extension in the pull-apart basins is accommodated through development of oblique-slip faulting at the basin margins and cross-basin faults obliquely cutting the rift depression. As observed in previous modelling studies, our centrifuge experiments suggest that the angle of offset between the master fault segments is one of the most important parameters controlling the architecture of pull-apart basins: the basins are lozenge shaped in the case of underlapping master faults, lazy-Z shaped in case of neutral offset and rhomboidal shaped for overlapping master faults. Model cross sections show significant along-strike variations in basin morphology, with transition from narrow V- and U-shaped grabens to a more symmetric, boxlike geometry passing from the basin terminations to the basin centre; a flip in the dominance of the sidewall faults from one end of the basin to the other is observed in all models. These geometries are also typical of 1 g models and characterise several pull-apart basins worldwide. Our models show that the complex faulting in the upper brittle layer corresponds at depth to strong thinning of the ductile layer in the weak zone; a rise of the base of the lithosphere occurs beneath the basin, and maximum lithospheric thinning roughly corresponds to the areas of maximum surface subsidence (i.e., the basin depocentre).
Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh
2011-01-01
Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...
A rate-dependent multi-scale crack model for concrete
Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.
2013-01-01
A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at the macro-scale to simulate fracture, and a gradient-enhanced damage model is used at the meso-scale to simulate
Global fits of GUT-scale SUSY models with GAMBIT
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.
Global fits of GUT-scale SUSY models with GAMBIT
Energy Technology Data Exchange (ETDEWEB)
Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. 
[University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration
2017-12-15
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)
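The global-fit workflow described above can be caricatured in a few lines: draw random points in a toy two-parameter space, score each point with a composite log-likelihood built from independent Gaussian "observables", and keep the highest-likelihood point. Every formula, range and uncertainty below is an invented placeholder and bears no relation to GAMBIT's actual samplers or physics likelihoods.

```python
import numpy as np

# Cartoon of a global-fit scan: random points in a toy 2-parameter space are
# scored with a composite log-likelihood (sum of independent Gaussian terms).
# All formulas, ranges and uncertainties are placeholders, not GAMBIT's.
rng = np.random.default_rng(1)
m0, m12 = rng.uniform(0.1, 10.0, (2, 20000))     # toy mass parameters, TeV

relic = 0.12 * (m12 / 2.0) ** 2 / m0             # stand-in relic density
higgs = 120.0 + 2.0 * np.log(m0 * m12)           # stand-in Higgs mass, GeV

ll = (-0.5 * ((relic - 0.12) / 0.012) ** 2       # relic-density constraint
      - 0.5 * ((higgs - 125.0) / 0.5) ** 2)      # Higgs-mass constraint

best = ll.argmax()
print(m0[best], m12[best], ll[best])             # highest-likelihood toy point
```

Real scans replace the random draws with adaptive samplers (nested sampling, differential evolution) precisely because viable regions occupy a tiny fraction of the space.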
Urban scale air quality modelling using detailed traffic emissions estimates
Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.
2016-04-01
The atmospheric dispersion of NOx and PM10 was simulated with a second generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a 1 week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation significantly improves when background is added, with an average of 0.89 for the 24 h record. The results highlight the potential of detailed traffic and instantaneous exhaust emissions estimates, together with filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.
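The "within a factor of two" score quoted above is the standard FAC2 model-acceptance metric and is simple to compute. A minimal sketch, with made-up concentration values:

```python
import numpy as np

def fac2(pred, obs):
    """Fraction of predictions within a factor of two of observations
    (a standard air-quality model acceptance metric)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

# Hypothetical hourly NOx concentrations (ug/m3) at one monitoring site.
obs  = np.array([40.0, 55.0, 30.0, 80.0, 25.0])
pred = np.array([45.0, 30.0, 28.0, 150.0, 170.0])
print(fac2(pred, obs))  # -> 0.8 (four of five pairs within a factor of two)
```

Averaging to daily values smooths out timing errors, which is why the daily FAC2 (94%) exceeds the hourly one (78%).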
Scale problems in assessment of hydrogeological parameters of groundwater flow models
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on the essential aspects, which may be an advantage compared with the occasionally very extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. The latest results on downscaling from block-scale to sample-scale are also briefly discussed.
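Deterministic upscaling from sample-scale to block-scale is usually anchored by the Wiener bounds: the harmonic mean of the sample conductivities (flow across layering) bounds the effective block value from below, the arithmetic mean (flow along layering) bounds it from above, and the geometric mean is a common estimate for 2-D lognormal media. A minimal sketch with hypothetical conductivity values:

```python
import numpy as np

# Sample-scale hydraulic conductivities (m/s) within one numerical block
# (hypothetical values spanning two orders of magnitude).
k = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4])

k_arith = k.mean()                  # upper Wiener bound (flow along layers)
k_harm  = len(k) / np.sum(1.0 / k)  # lower Wiener bound (flow across layers)
k_geom  = np.exp(np.log(k).mean())  # common estimate for 2-D lognormal media

# Any physically admissible block conductivity lies between the bounds.
assert k_harm <= k_geom <= k_arith
print(k_harm, k_geom, k_arith)
```

The two-orders-of-magnitude spread between the bounds illustrates why the choice of averaging rule, not just the sample data, controls the block-scale model.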
Scale problems in assessment of hydrogeological parameters of groundwater flow models
Directory of Open Access Journals (Sweden)
Nawalany Marek
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on the essential aspects, which may be an advantage compared with the occasionally very extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale – scale of pores, meso-scale – scale of laboratory sample, macro-scale – scale of typical blocks in numerical models of groundwater flow, local-scale – scale of an aquifer/aquitard and regional-scale – scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. The latest results on downscaling from block-scale to sample-scale are also briefly discussed.
Site-scale groundwater flow modelling of Ceberg
Energy Technology Data Exchange (ETDEWEB)
Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)
1999-06-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10^-4 and a flow-wetted surface area of a_r = 0.1 m^2/(m^3 rock): The median travel time is 1720 years. The median canister flux is 3.27x10^-5 m/year. The median F-ratio is 1.72x10^6 years/m. The base case and the deterministic variant suggest that the variability of the travel times within
Site-scale groundwater flow modelling of Ceberg
International Nuclear Information System (INIS)
Walker, D.; Gylling, B.
1999-06-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10^-4 and a flow-wetted surface area of a_r = 0.1 m^2/(m^3 rock): The median travel time is 1720 years. The median canister flux is 3.27x10^-5 m/year. The median F-ratio is 1.72x10^6 years/m. The base case and the deterministic variant suggest that the variability of the travel times within individual realisations is due to the
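The three reported medians are mutually consistent under the usual definition of the F-ratio, F = a_r · t_w / ε_f (flow-wetted surface per unit rock volume times advective travel time over flow porosity). That definition is assumed here, not stated in the record; a quick check:

```python
# Check the Base Case medians against the common F-ratio definition
# F = a_r * t_w / eps_f (the definition is an assumption, not stated above).
eps_f = 1e-4      # flow porosity (-)
a_r   = 0.1       # flow-wetted surface area, m^2/(m^3 rock)
t_w   = 1720.0    # median advective travel time, years

F = a_r * t_w / eps_f
print(F)          # -> 1720000.0 years/m, i.e. the reported 1.72x10^6
```

The agreement suggests the median F-ratio was derived directly from the median travel time rather than computed realisation by realisation.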
Impact of Scattering Model on Disdrometer Derived Attenuation Scaling
Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)
2016-01-01
NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability both to predict the 40 GHz attenuation from the disdrometer and the 20 GHz time series and to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model; however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
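Deriving specific attenuation from a DSD amounts to integrating an extinction cross-section against the measured drop-size spectrum, roughly k [dB/km] = 4.343e3 ∫ σ_ext(D) N(D) dD. The sketch below uses a hypothetical exponential DSD and a crude power-law placeholder for σ_ext; in practice σ_ext comes from a Mie or T-matrix computation at the beacon frequency.

```python
import numpy as np

D  = np.linspace(0.1, 6.0, 600)    # drop diameters, mm
dD = D[1] - D[0]

# Hypothetical exponential DSD, N(D) in m^-3 mm^-1 (placeholder parameters).
N0, Lam = 8000.0, 2.0
N = N0 * np.exp(-Lam * D)

# Placeholder extinction cross-section in m^2 (NOT a Mie/T-matrix result;
# a real computation depends on frequency, temperature and drop shape).
sigma_ext = 1e-9 * D**3

# Rectangle-rule integral over the spectrum, converted to dB/km.
k_specific = 4.343e3 * np.sum(sigma_ext * N * dD)
print(k_specific)                  # specific attenuation, dB/km
```

Swapping the σ_ext table from Mie to T-matrix values, while keeping the same measured N(D), is exactly the comparison the paper describes.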
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
DEFF Research Database (Denmark)
King, Zachary A.; Lu, Justin; Dräger, Andreas
2016-01-01
Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repo...
International Nuclear Information System (INIS)
Baker, I. T.; Sellers, P. J.; Denning, A. S.; Medina, I.; Kraus, P.
2017-01-01
The interaction of land with the atmosphere is sensitive to soil moisture (W). Evapotranspiration (ET) reacts to soil moisture in a nonlinear way, f(W), as soils dry from saturation to wilt point. This nonlinear behavior and the fact that soil moisture varies on scales as small as 1–10 m in nature, while numerical general circulation models (GCMs) have grid cell sizes on the order of 1 to 100s of kilometers, makes the calculation of grid cell-average ET problematic. It is impractical to simulate the land in GCMs on the small scales seen in nature, so techniques have been developed to represent subgrid scale heterogeneity, including: (1) statistical-dynamical representations of grid subelements of varying wetness, (2) relaxation of f(W), (3) moderating f(W) with approximations of catchment hydrology, (4) “tiling” the landscape into vegetation types, and (5) hyperresolution. Here we present an alternative method for representing subgrid variability in W, one proven in a conceptual framework where landscape-scale W is represented as a series of “Bins” of increasing wetness from dry to saturated. The grid cell-level f(W) is defined by the integral of the fractional area of the wetness bins and the value of f(W) associated with each. This approach accounts for the spatiotemporal dynamics of W. We implemented this approach in the SiB3 land surface parameterization and then evaluated its performance against a control, which assumes a horizontally uniform field of W. We demonstrate that the Bins method, with a physical basis, attenuates unrealistic jumps in model state and ET seen in the control runs.
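Because f(W) is nonlinear, the area-weighted average of f over the wetness bins differs from f evaluated at the cell-mean wetness, which is the essence of the Bins method. The sketch below uses a simple piecewise-linear ramp for f(W) as a stand-in, not the SiB3 formulation; bin wetness values and areas are made up.

```python
import numpy as np

def f(W, wilt=0.1, crit=0.6):
    """ET stress factor: 0 at/below wilt point, 1 above a critical wetness
    (a simple ramp stand-in, not the SiB3 formulation)."""
    return np.clip((W - wilt) / (crit - wilt), 0.0, 1.0)

W_bins = np.array([0.05, 0.2, 0.4, 0.7, 0.9])  # bin wetness, dry -> saturated
area   = np.array([0.2, 0.2, 0.2, 0.2, 0.2])   # fractional area of each bin

f_bins    = np.sum(area * f(W_bins))           # Bins method: average of f
f_uniform = f(np.sum(area * W_bins))           # control: f of the average W
print(f_bins, f_uniform)                       # ~0.56 vs ~0.70: they differ
```

The control overestimates the stress factor here because the dry bins, where f saturates at zero, are invisible to the cell-mean wetness.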
Numerical Modeling of Large-Scale Rocky Coastline Evolution
Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.
2008-12-01
Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment
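At its simplest, the wave-driven smoothing described above behaves like diffusion of the shoreline position, dy/dt = K d2y/dx2: headlands (local maxima) erode and bays fill. A toy explicit finite-difference scheme, with all parameters illustrative rather than calibrated:

```python
import numpy as np

# Diffusive smoothing of an irregular shoreline y(x). The stability
# condition for the explicit scheme is K*dt/dx**2 <= 0.5 (here 0.1).
np.random.seed(0)
nx, K, dt, dx = 200, 1.0, 0.1, 1.0
y = np.cumsum(np.random.randn(nx))   # irregular initial shoreline (random walk)
y -= y.mean()

rough0 = np.std(np.diff(y))          # initial alongshore roughness
for _ in range(2000):
    y[1:-1] += K * dt / dx**2 * (y[2:] - 2.0 * y[1:-1] + y[:-2])

assert np.std(np.diff(y)) < rough0   # diffusion reduces shoreline roughness
print(rough0, np.std(np.diff(y)))
```

A submarine canyon enters such a model as a point sink in the sediment flux, which is what breaks the simple diffusive smoothing in the results described above.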
DEFF Research Database (Denmark)
Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Yilmaz, Ali Emre
2018-01-01
to match closely for all codes. The value of the Smagorinsky coefficient in the subgrid-scale turbulence model is shown to have a negligible effect on the time-averaged loads along the blades. Conversely, the breakdown location of the wake is strongly dependent on the Smagorinsky coefficient in uniform...... coefficient has a negligible effect on the wake profiles. It is concluded that for LES of wind turbines and wind farms using ALM, car