WorldWideScience

Sample records for scale models

  1. Genome-Scale Models

    DEFF Research Database (Denmark)

    Bergdahl, Basti; Sonnenschein, Nikolaus; Machado, Daniel

    2016-01-01

An introduction to genome-scale models, how to build and use them, will be given in this chapter. Genome-scale models have become an important part of systems biology and metabolic engineering, and are increasingly used in research, both in academia and in industry, both for modeling chemical...

  2. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical processes in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: Enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale Offers engineers and designers a new point of view, liberating creative and inno...

  3. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    , the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of  a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from  the micro-scale....

  4. Integrating Local Scale Drainage Measures in Meso Scale Catchment Modelling

    Directory of Open Access Journals (Sweden)

    Sandra Hellmers

    2017-01-01

This article presents a methodology to optimize the integration of local-scale drainage measures in catchment modelling. The methodology makes it possible to zoom into the processes (physically, spatially and temporally) where detailed physics-based computation is required, and to zoom out where lumped, conceptualized approaches are applied. It allows the definition of parameters and computation procedures on different spatial and temporal scales. Three methods are developed to integrate features of local-scale drainage measures in catchment modelling: (1) different types of local drainage measures are spatially integrated in catchment modelling by a data mapping; (2) interlinked drainage features between data objects are enabled on the meso, local and micro scales; (3) a method for modelling multiple interlinked layers on the micro scale is developed. For the computation of flow routing on the meso scale, the results of the local-scale measures are aggregated according to their contributing inlet in the network structure. The methods are implemented in a semi-distributed rainfall-runoff model. The implemented micro-scale approach is validated against a laboratory physical model to confirm the credibility of the model. A study of a river catchment of 88 km2 illustrates the applicability of the model on the regional scale.

  5. Brane World Models Need Low String Scale

    CERN Document Server

    Antoniadis, Ignatios; Calmet, Xavier

    2011-01-01

    Models with large extra dimensions offer the possibility of the Planck scale being of order the electroweak scale, thus alleviating the gauge hierarchy problem. We show that these models suffer from a breakdown of unitarity at around three quarters of the low effective Planck scale. An obvious candidate to fix the unitarity problem is string theory. We therefore argue that it is necessary for the string scale to appear below the effective Planck scale and that the first signature of such models would be string resonances. We further translate experimental bounds on the string scale into bounds on the effective Planck scale.

  6. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  7. Scaling limits of a model for selection at two scales

    Science.gov (United States)

    Luo, Shishi; Mattingly, Jonathan C.

    2017-04-01

    The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval [0,1] with dependence on a single parameter, λ. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on λ and the behavior of the initial data around 1. The second scaling leads to a measure-valued Fleming-Viot process, an infinite dimensional stochastic process that is frequently associated with a population genetics.

  8. Simple scaling model for exploding pusher targets

    Energy Technology Data Exchange (ETDEWEB)

    Storm, E.K.; Larsen, J.T.; Nuckolls, J.H.; Ahlstrom, H.G.; Manes, K.R.

    1977-11-04

A simple model has been developed which, when normalized by experiment or Lasnex calculations, can be used to scale neutron yields for variations in laser input power and pulse length and in target radius and wall thickness. The model also elucidates some of the physical processes occurring in this regime of laser fusion experiments. Within certain limitations on incident intensity and target geometry, the model scales with experiments and calculations to within a factor of two over six decades in neutron yield.

  9. Functional Scaling of Musculoskeletal Models

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    specific to the patient. This is accomplished using optimisation methods to determine patient-specific joint positions and orientations, which minimise the least-squares error between model markers and the recorded markers from a motion capture experiment. Functional joint positions and joint axis...

  10. Modeling interactome: scale-free or geometric?

    Science.gov (United States)

    Przulj, N; Corneil, D G; Jurisica, I

    2004-12-12

Networks have been used to model many real-world phenomena to better understand the phenomena and to guide experiments in order to predict their behavior. Since incorrect models lead to incorrect predictions, it is vital to have as accurate a model as possible. As a result, new techniques and models for analyzing and modeling real-world networks have recently been introduced. One example of large and complex networks involves protein-protein interaction (PPI) networks. We analyze PPI networks of the yeast Saccharomyces cerevisiae and the fruit fly Drosophila melanogaster using a newly introduced measure of local network structure as well as the standardly used measures of global network structure. We examine the fit of four different network models, including Erdős-Rényi, scale-free and geometric random network models, to these PPI networks with respect to the measures of local and global network structure. We demonstrate that the currently accepted scale-free model of PPI networks fails to fit the data in several respects and show that a random geometric model provides a much more accurate model of the PPI data. We hypothesize that only the noise in these networks is scale-free. We systematically evaluate how well different network models fit the PPI networks. We show that the structure of PPI networks is better modeled by a geometric random graph than by a scale-free model. Supplementary information is available at http://www.cs.utoronto.ca/~juris/data/data/ppiGRG04/

  11. Seamless cross-scale modeling with SCHISM

    Science.gov (United States)

    Zhang, Yinglong J.; Ye, Fei; Stanev, Emil V.; Grashorn, Sebastian

    2016-06-01

We present a new 3D unstructured-grid model (SCHISM), which is an upgrade from an existing model (SELFE). The new advection scheme for the momentum equation includes an iterative smoother to reduce excess mass produced by the higher-order kriging method, and a new viscosity formulation is shown to work robustly for generic unstructured grids and to effectively filter out spurious modes without introducing excessive dissipation. A new higher-order implicit advection scheme for transport (TVD2) is proposed to effectively handle the wide range of Courant numbers commonly found in typical cross-scale applications. The addition of quadrangular elements into the model, together with a recently proposed, highly flexible vertical grid system (Zhang et al., A new vertical coordinate system for a 3D unstructured-grid model. Ocean Model. 85, 2015), leads to model polymorphism that unifies 1D/2DH/2DV/3D cells in a single model grid. Results from several test cases demonstrate the model's good performance in the eddying regime, which presents greater challenges for unstructured-grid models and represents the last missing link for our cross-scale model. The model can thus be used to simulate cross-scale processes in a seamless fashion (i.e. from the deep ocean into shallow depths).

  12. Site-Scale Saturated Zone Flow Model

    Energy Technology Data Exchange (ETDEWEB)

    G. Zyvoloski

    2003-12-17

The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure (AP)-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'', MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County well lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca

  13. Sub-Grid Scale Plume Modeling

    Directory of Open Access Journals (Sweden)

    Greg Yarwood

    2011-08-01

Multi-pollutant chemical transport models (CTMs) are being routinely used to predict the impacts of emission controls on the concentrations and deposition of primary and secondary pollutants. While these models have a fairly comprehensive treatment of the governing atmospheric processes, they are unable to correctly represent processes that occur at very fine scales, such as the near-source transport and chemistry of emissions from elevated point sources, because of their relatively coarse horizontal resolution. Several different approaches have been used to address this limitation, such as using fine grids, adaptive grids, hybrid modeling, or an embedded sub-grid scale plume model, i.e., plume-in-grid (PinG) modeling. In this paper, we first discuss the relative merits of these various approaches used to resolve sub-grid scale effects in grid models, and then focus on PinG modeling, which has been very effective in addressing the problems listed above. We start with a history and review of PinG modeling from its initial applications for ozone modeling in the Urban Airshed Model (UAM) in the early 1980s using a relatively simple plume model, to more sophisticated and state-of-the-science plume models that include a full treatment of gas-phase, aerosol, and cloud chemistry, embedded in contemporary models such as CMAQ, CAMx, and WRF-Chem. We present examples of some typical results from PinG modeling for a variety of applications, discuss the implications of PinG on model predictions of source attribution, and discuss possible future developments and applications for PinG modeling.

  14. Phenomenology of Low Quantum Gravity Scale Models

    CERN Document Server

    Benakli, Karim

    1999-01-01

We study some phenomenological implications of models where the scale of quantum gravity effects lies much below the four-dimensional Planck scale. These models arise from M-theory vacua where either the internal space volume is large or the string coupling is very small. We provide a critical analysis of ways to unify electroweak, strong and gravitational interactions in M-theory. We discuss the relations between different scales in two M-vacua: Type I strings and Hořava-Witten supergravity models. The latter allows possibilities for an eleven-dimensional scale at TeV energies with one large dimension below, separating our four-dimensional world from a hidden one. Different mechanisms for breaking supersymmetry (gravity mediated, gauge mediated and Scherk-Schwarz mechanisms) are discussed in this framework. Some phenomenological issues such as dark matter (with masses that may vary in time), origin of neutrino masses and axion scale are discussed. We suggest that these are indications that the string scal...

  15. Scaling model for symmetric star polymers

    Science.gov (United States)

    Ramachandran, Ram; Rai, Durgesh K.; Beaucage, Gregory

    2010-03-01

Neutron scattering data from symmetric star polymers with six poly(urethane-ether) arms, chemically bonded to a C60 molecule, are fitted using a new scaling model and scattering function. The new scaling function can describe both good-solvent and theta-solvent conditions as well as resolve deviations in chain conformation due to steric interactions between star arms. The scaling model quantifies the distinction between invariant topological features for this star polymer and chain tortuosity, which changes with goodness of solvent and steric interaction. Beaucage G, Phys. Rev. E 70, 031401 (2004); Ramachandran R, et al., Macromolecules 41, 9802-9806 (2008); Ramachandran R, et al., Macromolecules 42, 4746-4750 (2009); Rai DK, et al., Europhys. Lett. (submitted 10/2009).

  16. Landscape modelling at Regional to Continental scales

    Science.gov (United States)

    Kirkby, M. J.

Most work on simulating landscape evolution has been focused at scales of about 1 ha, and there are still limitations, particularly in understanding the links between hillslope process rates and climate, soils and channel initiation. However, the need for integration with GCM outputs and with Continental Geosystems now imposes an urgent need for scaling up to regional and continental scales. This is reinforced by a need to incorporate estimates of soil erosion and desertification rates into national and supra-national policy. Relevant time-scales range from decadal to geological. Approaches at these regional to continental scales are critical to a fuller collaboration between geomorphologists and others interested in Continental Geosystems. Two approaches to the problem of scaling up are presented here for discussion. The first (MEDRUSH) is to embed representative hillslope flow strips into sub-catchments within a larger catchment of up to 5,000 km2. The second is to link one-dimensional models of SVAT type within DEMs at up to global scales (CSEP/SEDWEB). The MEDRUSH model is being developed as part of the EU Desertification Programme (MEDALUS project), primarily for semi-natural vegetation in southern Europe over time spans of up to 100 years. Catchments of up to 2,500 km2 are divided into 50-200 sub-catchments on the basis of flow paths derived from DEMs with a horizontal resolution of 50 m or better. Within each sub-catchment a representative flow strip is selected, and hydrology, sediment transport and vegetation change are simulated in detail for the flow strip using a 1-hour time step. Changes within each flow strip are transferred back to the appropriate sub-catchment, and flows of water and sediment are then routed through the channel network, generating changes in flood plain morphology.

  17. Towards dynamic genome-scale models.

    Science.gov (United States)

    Gilbert, David; Heiner, Monika; Jayaweera, Yasoda; Rohr, Christian

    2017-10-13

The analysis of the dynamic behaviour of genome-scale models of metabolism (GEMs) currently presents considerable challenges because of the difficulties of simulating such large and complex networks. Bacterial GEMs can comprise about 5000 reactions and metabolites, and encode a huge variety of growth conditions; such models cannot be used without sophisticated tool support. This article is intended to aid modellers, both specialist and non-specialist in computerized methods, to identify and apply a suitable combination of tools for the dynamic behaviour analysis of large-scale metabolic designs. We describe a methodology and related workflow based on publicly available tools to profile and analyse whole-genome-scale biochemical models. We use an efficient approximative stochastic simulation method to overcome problems associated with the dynamic simulation of GEMs. In addition, we apply simulative model checking using temporal logic property libraries, clustering and data analysis, over time series of reaction rates and metabolite concentrations. We extend this to consider the evolution of reaction-oriented properties of subnets over time, including dead subnets and functional subsystems. This enables the generation of abstract views of the behaviour of these models, which can be large (up to whole-genome in size) and therefore impractical to analyse informally by eye. We demonstrate our methodology by applying it to a reduced model of the whole-genome metabolism of Escherichia coli K-12 under different growth conditions. The overall context of our work is in the area of model-based design methods for metabolic engineering and synthetic biology. © The Author 2017. Published by Oxford University Press.

  18. Drift-Scale THC Seepage Model

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Bryan

    2005-02-17

The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPs not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPs in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral

  19. Scale Anchoring with the Rasch Model.

    Science.gov (United States)

    Wyse, Adam E

    Scale anchoring is a method to provide additional meaning to particular scores at different points along a score scale by identifying representative items associated with the particular scores. These items are then analyzed to write statements of what types of performance can be expected of a person with the particular scores to help test takers and other stakeholders better understand what it means to achieve the different scores. This article provides simple formulas that can be used to identify possible items to serve as scale anchors with the Rasch model. Specific attention is given to practical considerations and challenges that may be encountered when applying the formulas in different contexts. An illustrative example using data from a medical imaging certification program demonstrates how the formulas can be applied in practice.

  20. Genome scale metabolic modeling of cancer

    DEFF Research Database (Denmark)

    Nilsson, Avlant; Nielsen, Jens

    2017-01-01

    Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome scale metabolic model (GEM) is a mathematical formalization...... of metabolism which allows simulation and hypotheses testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome....... Cancer specific models of metabolism have also been generated by reducing the number of reactions in the generic model based on high throughput expression data, e.g. transcriptomics and proteomics. Targets for drugs and bio markers for diagnostics have been identified using these models. They have also...

  1. Pore-Scale Model for Microbial Growth

    Science.gov (United States)

    Tartakovsky, G.; Tartakovsky, A. M.; Scheibe, T. D.

    2011-12-01

A Lagrangian particle model based on smoothed particle hydrodynamics (SPH) is used to simulate pore-scale flow, reactive transport and biomass growth, which is controlled by the mixing of an electron donor and an electron acceptor, in a microfluidic porous cell. The experimental results described in Ch. Zhang et al., "Effects of pore-scale heterogeneity and transverse mixing on bacterial growth in porous media," were used for this study. The model represents the homogeneous pore structure of a uniform array of cylindrical posts with microbes uniformly distributed on the grain surfaces. Each of the two solutes (electron donor and electron acceptor) enters the domain unmixed through a separate inlet. In the model, pair-wise particle-particle interactions are used to simulate interactions within the biomass, and both biomass-fluid and biomass-soil grain interactions. The biomass growth rate is described by double Monod kinetics. For the set of parameters used in the simulations, the model predicts that: 1) biomass grows in the shape of bridges connecting soil grains and oriented in the direction of flow so as to minimize resistance to the fluid flow; and 2) the biomass growth occurs only in the mixing zone. Using parameters available in the literature, the biomass growth model agrees qualitatively with the experimental results. In order to achieve quantitative agreement, model calibration is required.

  2. Anomalous scalings in differential models of turbulence

    CERN Document Server

    Thalabard, Simon; Galtier, Sebastien; Sergey, Medvedev

    2015-01-01

Differential models for hydrodynamic, passive-scalar and wave turbulence, given by nonlinear first- and second-order evolution equations for the energy spectrum in $k$-space, were analysed. Both types of models predict the formation of anomalous transient power-law spectra. The second-order models were analysed in terms of self-similar solutions of the second kind, and a phenomenological formula for the anomalous spectrum exponent was constructed using numerics for a broad range of parameters covering all known physical examples. The first-order models were examined analytically, including finding an analytical prediction for the anomalous exponent of the transient spectrum and a description of the formation of the Kolmogorov-type spectrum as a wave reflected from the dissipative scale back into the inertial range. The latter behaviour was linked to pre-shock/shock singularities similar to the ones arising in the Burgers equation. Existence of the transient anomalous scaling and the reflection-wave scenario are argu...

  3. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    behavior and the trapped free energy in the material, in addition to the plastic behavior in terms of the anisotropic development of the yield surface. It is shown that a generalization of Hill’s anisotropic yield criterion can be used to model the Bauschinger effect, in addition to the pressure and size...... is analyzed using a Representative Volume Element (RVE), while the homogenized data are saved and used as an input to the macro scale. The dependence of fiber size is analyzed using a higher order plasticity theory, where the free energy is stored due to plastic strain gradients at the micron scale. Hill...... dependence. The development of the macroscopic yield surface upon deformation is investigated in terms of the anisotropic hardening (expansion of the yield surface) and kinematic hardening (translation of the yield surface). The kinematic hardening law is based on trapped free energy in the material due...

  4. Multi-scale modelling and dynamics

    Science.gov (United States)

    Müller-Plathe, Florian

Moving from a fine-grained particle model to one of lower resolution leads, with few exceptions, to an acceleration of molecular mobility: higher diffusion coefficients, lower viscosities and more. On top of that, the level of acceleration is often different for different dynamical processes as well as for different state points. While the reasons are often understood, the fact that coarse-graining almost necessarily introduces unpredictable acceleration of the molecular dynamics severely limits its usefulness as a predictive tool. There are several attempts under way to remedy these shortcomings of coarse-grained models. On the one hand, we follow bottom-up approaches. They attempt, already when the coarse-graining scheme is conceived, to estimate its impact on the dynamics. This is done by excess-entropy scaling. On the other hand, we also pursue a top-down development. Here we start with a very coarse-grained model (dissipative particle dynamics) which in its native form produces qualitatively wrong polymer dynamics, as its molecules cannot entangle. This model is modified by additional temporary bonds, so-called slip springs, to repair this defect. As a result, polymer melts and solutions described by the slip-spring DPD model show correct dynamical behaviour. Read more: "Excess entropy scaling for the segmental and global dynamics of polyethylene melts", E. Voyiatzis, F. Müller-Plathe, and M.C. Böhm, Phys. Chem. Chem. Phys. 16, 24301-24311 (2014). [DOI: 10.1039/C4CP03559C] "Recovering the Reptation Dynamics of Polymer Melts in Dissipative Particle Dynamics Simulations via Slip-Springs", M. Langeloth, Y. Masubuchi, M. C. Böhm, and F. Müller-Plathe, J. Chem. Phys. 138, 104907 (2013). [DOI: 10.1063/1.4794156].

  5. Modelling landscape evolution at the flume scale

    Science.gov (United States)

    Cheraghi, Mohsen; Rinaldo, Andrea; Sander, Graham C.; Barry, D. Andrew

    2017-04-01

The ability of a large-scale Landscape Evolution Model (LEM) to simulate the soil surface morphological evolution as observed in a laboratory flume (1-m × 2-m surface area) was investigated. The soil surface was initially smooth, and was subjected to heterogeneous rainfall in an experiment designed to avoid rill formation. Low-cohesive fine sand was placed in the flume, while the slope and relief height were 5% and 20 cm, respectively. Non-uniform rainfall with an average intensity of 85 mm/h and a standard deviation of 26% was applied to the sediment surface for 16 h. We hypothesized that the complex overland water flow can be represented by a drainage discharge network, which was calculated via the micro-morphology and the rainfall distribution. Measurements included high-resolution Digital Elevation Models that were captured at intervals during the experiment. The calibrated LEM captured the migration of the main flow path from the low-precipitation area into the high-precipitation area. Furthermore, both model and experiment showed a steep transition zone in soil elevation that moved upstream during the experiment. We conclude that the LEM is applicable under non-uniform rainfall and in the absence of surface incisions, thereby extending its applicability beyond that shown in previous applications. Keywords: Numerical simulation, Flume experiment, Particle Swarm Optimization, Sediment transport, River network evolution model.

  6. Scaling in a Multispecies Network Model Ecosystem

    CERN Document Server

Solé, Ricard V.; Alonso, David; McKane, Alan

    1999-01-01

    A new model ecosystem consisting of many interacting species is introduced. The species are connected through a random matrix with a given connectivity. It is shown that the system is organized close to a boundary of marginal stability in such a way that fluctuations follow power law distributions both in species abundance and their lifetimes for some slow-driving (immigration) regime. The connectivity and the number of species are linked through a scaling relation which is the one observed in real ecosystems. These results suggest that the basic macroscopic features of real, species-rich ecologies might be linked with a critical state. A natural link between lognormal and power law distributions of species abundances is suggested.

  7. Global-scale modeling of groundwater recharge

    Science.gov (United States)

    Döll, P.; Fiedler, K.

    2008-05-01

    Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which led to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km³/yr for the climate normal 1961-1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution. Average per-capita renewable groundwater resources of countries vary between 8 m³/(capita yr) for Egypt to more than 1 million m³
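The basin-wise tuning step described above (adjusting the partitioning of precipitation into evapotranspiration and total runoff until simulated long-term discharge matches observations) can be sketched as a one-parameter search. The linear toy water balance below is an assumption for illustration, not the WGHM formulation.

```python
def tune_basin(simulate, observed_mean, lo=0.0, hi=1.0, tol=1e-8):
    """Bisection on a single partitioning parameter, assuming simulated
    long-term runoff increases monotonically with it (more runoff, less ET)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulate(mid) < observed_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy basin: runoff = gamma * mean precipitation; ET takes the remainder
mean_precip = 900.0      # mm/yr
obs_runoff = 300.0       # mm/yr
gamma = tune_basin(lambda g: g * mean_precip, obs_runoff)
```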

  8. Global-scale modeling of groundwater recharge

    Science.gov (United States)

    Döll, P.; Fiedler, K.

    2007-11-01

    Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which led to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km³/yr for the climate normal 1961-1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution. Average per-capita renewable groundwater resources of countries vary between 8 m³/(capita yr) for Egypt to more than 1 million m³

  9. Global-scale modeling of groundwater recharge

    Directory of Open Access Journals (Sweden)

    P. Döll

    2008-05-01

    Full Text Available Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which led to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km³/yr for the climate normal 1961–1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution. Average per-capita renewable groundwater resources of countries vary between 8 m³

  10. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    Solid-liquid equilibrium of sulphate scaling minerals (SrSO4, BaSO4, CaSO4 and CaSO4•2H2O) at temperatures up to 300°C and pressures up to 1000 bar is described in chapter 4. Results for the binary systems (M2+, )-H2O; the ternary systems (Na+, M2+, )-H2O, and (Na+, M2+, Cl-)-H2O; and the quaternary systems (Na+, M2+)(Cl...... to 1000 bar. The solubility of CO2 in pure water, and the solubility of CO2 in solutions of different salts (NaCl and Na2SO4) have also been correlated. Results for the binary systems MCO3-H2O, and CO2-H2O; the ternary systems MCO3-CO2-H2O, CO2-NaCl-H2O, and CO2-Na2SO4-H2O; and the quaternary system CO2....... Chapter 2 is focused on thermodynamics of the systems studied and on the calculation of vapour-liquid, solid-liquid, and speciation equilibria. The effects of both temperature and pressure on the solubility are addressed, and explanation of the model calculations is also given. Chapter 3 presents...

  11. Multi-scale models for cell adhesion

    Science.gov (United States)

    Wu, Yinghao; Chen, Jiawen; Xie, Zhong-Ru

    2014-03-01

    The interactions of membrane receptors during cell adhesion play pivotal roles in tissue morphogenesis during development. Our lab focuses on developing multi-scale models to decompose the mechanical and chemical complexity in cell adhesion. Recent experimental evidence shows that clustering is a generic process for cell adhesive receptors. However, the physical basis of such receptor clustering is not understood. We introduced the effect of molecular flexibility to evaluate the dynamics of receptors. By developing new theory to quantify the changes of binding free energy in different cellular environments, we revealed that restriction of molecular flexibility upon binding of membrane receptors from apposing cell surfaces (trans) causes a large entropy loss, which dramatically increases their lateral interactions (cis). This provides a new molecular mechanism for the initiation of receptor clustering at the cell-cell interface. Using subcellular simulations, we further found that clustering is a cooperative process requiring both trans and cis interactions. The detailed binding constants during these processes are calculated and compared with experimental data from our collaborator's lab.

  12. Modeling cancer metabolism on a genome scale

    Science.gov (United States)

    Yizhak, Keren; Chaneton, Barbara; Gottlieb, Eyal; Ruppin, Eytan

    2015-01-01

    Cancer cells have fundamentally altered cellular metabolism that is associated with their tumorigenicity and malignancy. In addition to the widely studied Warburg effect, several new key metabolic alterations in cancer have been established over the last decade, leading to the recognition that altered tumor metabolism is one of the hallmarks of cancer. Deciphering the full scope and functional implications of the dysregulated metabolism in cancer requires both the advancement of a variety of omics measurements and the advancement of computational approaches for the analysis and contextualization of the accumulated data. Encouragingly, while the metabolic network is highly interconnected and complex, it is at the same time probably the best characterized cellular network. In the following, this review discusses the challenges that genome-scale modeling of cancer metabolism has been facing. We survey several recent studies demonstrating the first strides that have been made, testifying to the value of this approach in portraying a network-level view of cancer metabolism and in identifying novel drug targets and biomarkers. Finally, we outline a few new steps that may further advance this field. PMID:26130389
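Genome-scale metabolic models of the kind surveyed here are commonly interrogated with flux balance analysis (FBA). A minimal FBA on an invented three-reaction toy network, solved as a linear program, might look as follows; the network and bounds are illustrative assumptions, not a model from the review.

```python
import numpy as np
from scipy.optimize import linprog

# Invented toy network: R1: -> A (uptake), R2: A -> B, R3: B -> (biomass drain)
# Steady-state mass balance S v = 0; rows = metabolites, columns = reactions
S = np.array([[1.0, -1.0,  0.0],   # metabolite A
              [0.0,  1.0, -1.0]])  # metabolite B
bounds = [(0.0, 10.0)] * 3          # flux bounds; uptake capped at 10
c = np.array([0.0, 0.0, -1.0])      # maximize v3 (linprog minimizes, hence the sign)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
optimal_biomass = -res.fun          # limited by the uptake bound
```

In a genome-scale model the same linear program simply has thousands of reactions and metabolites instead of three.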

  13. Downscaling modelling system for multi-scale air quality forecasting

    Science.gov (United States)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside the modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales with nesting of higher-resolution models into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. The first is the Numerical Weather Prediction model (High Resolution Limited Area Model, HIRLAM) combined with the Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions, CAMx). Several levels of urban parameterisation are considered. They are chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. the k-ɛ linear eddy-viscosity model, the k-ɛ non-linear eddy-viscosity model and the Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding mass-conserving interpolation. For the boundaries a
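The mass-conserving interpolation used when nesting a fine grid inside a coarser one can be illustrated in one dimension. Piecewise-constant refinement on a uniform grid (an assumption made here for simplicity) preserves the domain integral exactly.

```python
import numpy as np

def conservative_refine(coarse, factor):
    """Split each coarse cell into `factor` equal fine cells, copying the
    cell-average value; since cell width shrinks by the same factor, the
    integral (mass) over the domain is unchanged."""
    return np.repeat(np.asarray(coarse, dtype=float), factor)

coarse = np.array([2.0, 4.0, 6.0])   # cell-average concentrations, parent model
dx_coarse = 3.0                      # parent cell width
fine = conservative_refine(coarse, 3)
dx_fine = dx_coarse / 3
mass_coarse = coarse.sum() * dx_coarse
mass_fine = fine.sum() * dx_fine     # identical: the interpolation conserves mass
```

Higher-order conservative schemes reconstruct a sloped profile inside each coarse cell instead of a constant, but the cell-integral constraint is the same.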

  14. Upscaling a catchment-scale ecohydrology model for regional-scale earth system modeling

    Science.gov (United States)

    Adam, J. C.; Tague, C.; Liu, M.; Garcia, E.; Choate, J.; Mullis, T.; Hull, R.; Vaughan, J. K.; Kalyanaraman, A.; Nguyen, T.

    2014-12-01

    With a focus on the U.S. Pacific Northwest (PNW), BioEarth is an Earth System Model (EaSM) currently in development that explores the interactions between coupled C:N:H2O dynamics and resource management actions at the regional scale. Capturing coupled biogeochemical processes within EaSMs like BioEarth is important for exploring the response of the land surface to changes in climate and resource management actions; information that is important for shaping decisions that promote sustainable use of our natural resources. However, many EaSM frameworks do not adequately represent landscape-scale (<10 km) heterogeneity, because coarse grid resolutions (>10 km) are necessitated by computational limitations. Spatial heterogeneity in a landscape arises due to spatial differences in underlying soil and vegetation properties that control moisture, energy and nutrient fluxes; as well as differences that arise due to spatially-organized connections that may drive an ecohydrologic response by the land surface. While many land surface models used in EaSM frameworks capture the first type of heterogeneity, few account for the influence of lateral connectivity on land surface processes. This type of connectivity can be important when considering soil moisture and nutrient redistribution. The RHESSys model is utilized by BioEarth to enable a "bottom-up" approach that preserves fine spatial-scale sensitivities and lateral connectivity that may be important for coupled C:N:H2O dynamics over larger scales. RHESSys is a distributed eco-hydrologic model that was originally developed to run at relatively fine but computationally intensive spatial resolutions over small catchments. The objective of this presentation is to describe two developments to enable implementation of RHESSys over the PNW. 1) RHESSys is being adapted for BioEarth to allow for moderately coarser resolutions and the flexibility to capture both types of heterogeneity at biome-specific spatial scales. 2) A Kepler workflow is utilized to enable RHESSys implementation over

  15. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering...... that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...... are the limitations of different types of models? This paper will provide examples of models that have been published in the literature for use across bioreactor scales, including computational fluid dynamics (CFD) and population balance models. Furthermore, the importance of good modeling practice...

  16. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    Science.gov (United States)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale model's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span (pre-twist or wash-out) is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.
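The scaling relation underlying static aeroelastic similarity follows from matching the nondimensional deflection δ/L, which varies as qL⁴/(EI). The sketch below applies this standard similarity argument with invented numbers; it is not the paper's actual procedure or stiffness values.

```python
def scaled_bending_stiffness(EI_full, length_ratio, q_ratio):
    """Static aeroelastic similarity: matching the nondimensional deflection
    delta/L, which scales as q * L^4 / (EI), requires
    (EI)_model = (EI)_full * q_ratio * length_ratio**4."""
    return EI_full * q_ratio * length_ratio ** 4

# Hypothetical numbers: a 10%-scale model at 80% of full-scale dynamic pressure
EI_model = scaled_bending_stiffness(EI_full=5.0e7, length_ratio=0.10, q_ratio=0.80)
# -> 4000.0 N m^2
```

The same argument applied to torsion gives the analogous requirement on GJ.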

  17. A New Method of Building Scale-Model Houses

    Science.gov (United States)

    Richard N. Malcolm

    1978-01-01

    Scale-model houses are used to display new architectural and construction designs.Some scale-model houses will not withstand the abuse of shipping and handling.This report describes how to build a solid-core model house which is rigid, lightweight, and sturdy.

  18. Gauge coupling unification in a classically scale invariant model

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki; Ishida, Hiroyuki [Graduate School of Science and Engineering, Shimane University,Matsue 690-8504 (Japan); Takahashi, Ryo [Graduate School of Science, Tohoku University,Sendai, 980-8578 (Japan); Yamaguchi, Yuya [Graduate School of Science and Engineering, Shimane University,Matsue 690-8504 (Japan); Department of Physics, Faculty of Science, Hokkaido University,Sapporo 060-0810 (Japan)

    2016-02-08

    There are many works within the class of classically scale invariant models, which are motivated by solving the gauge hierarchy problem. In this context, the Higgs mass vanishes at the UV scale due to the classical scale invariance, and is generated via the Coleman-Weinberg mechanism. Since the mass generation should occur not so far from the electroweak scale, we extend the standard model only around the TeV scale. We construct a model which can achieve the gauge coupling unification at the UV scale. In the same way, the model can realize the vacuum stability, smallness of active neutrino masses, baryon asymmetry of the universe, and dark matter relic abundance. The model predicts the existence of vector-like fermions charged under SU(3)_C with masses lower than 1 TeV, and the SM singlet Majorana dark matter with mass lower than 2.6 TeV.

  19. Holography for chiral scale-invariant models

    NARCIS (Netherlands)

    Caldeira Costa, R.N.; Taylor, M.

    2011-01-01

    Deformation of any d-dimensional conformal field theory by a constant null source for a vector operator of dimension (d + z -1) is exactly marginal with respect to anisotropic scale invariance, of dynamical exponent z. The holographic duals to such deformations are AdS plane waves, with z=2 being

  20. Holography for chiral scale-invariant models

    NARCIS (Netherlands)

    Caldeira Costa, R.N.; Taylor, M.

    2010-01-01

    Deformation of any d-dimensional conformal field theory by a constant null source for a vector operator of dimension (d + z -1) is exactly marginal with respect to anisotropic scale invariance, of dynamical exponent z. The holographic duals to such deformations are AdS plane waves, with z=2 being

  1. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    Science.gov (United States)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale to apply depends on the overall question under study. Therefore, it is not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth having a look at the advantages and the shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large-scale / global-scale approaches and models are increasingly in operation, and therefore the question arises how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales: Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research, based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data of state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on

  2. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Full Text Available Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  4. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer and land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  5. Simple subgrid scale stresses models for homogeneous isotropic turbulence

    Science.gov (United States)

    Aupoix, B.; Cousteix, J.

    Large eddy simulations employing the filtering of Navier-Stokes equations highlight stresses, related to the interaction between large scales below the cut and small scales above it, which have been designated 'subgrid scale stresses'. Their effects include both the energy flux through the cut and a component of viscous diffusion. The eddy viscosity introduced in the subgrid scale models which give the correct energy flux through the cut by comparison with spectral closures is shown to depend only on the small scales. The Smagorinsky (1963) model can only be obtained if the cut lies in the middle of the inertial range. A novel model which takes the small scales into account statistically, and includes the effects of viscosity, is proposed and compared with classical models for the Comte-Bellot and Corrsin (1971) experiment.
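The Smagorinsky model referred to above closes the subgrid-scale stresses with an eddy viscosity computed from the resolved strain rate. A 2-D sketch, using the commonly quoted constant Cs ≈ 0.17, is:

```python
import numpy as np

def smagorinsky_viscosity(dudx, dudy, dvdx, dvdy, delta, Cs=0.17):
    """2-D Smagorinsky eddy viscosity: nu_t = (Cs*delta)^2 * |S|,
    with |S| = sqrt(2 S_ij S_ij) and S_ij the resolved strain-rate tensor."""
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (S11 ** 2 + S22 ** 2 + 2.0 * S12 ** 2))
    return (Cs * delta) ** 2 * S_mag

# Uniform shear du/dy = 1 gives |S| = 1, so nu_t = (Cs * delta)^2
nu_t = smagorinsky_viscosity(0.0, 1.0, 0.0, 0.0, delta=0.1)
```

The critique in the abstract is precisely that this closure ties nu_t to the resolved (large-scale) strain rate, whereas spectral arguments suggest the eddy viscosity should depend only on the small scales near the cut.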

  6. On nano-scale hydrodynamic lubrication models

    Science.gov (United States)

    Buscaglia, Gustavo; Ciuperca, Ionel S.; Jai, Mohammed

    2005-06-01

    Current magnetic head sliders and other micromechanisms involve gas lubrication flows with gap thicknesses in the nanometer range and stepped shapes fabricated by lithographic methods. In mechanical simulations, rarefaction effects are accounted for by models that propose Poiseuille flow factors which exhibit singularities as the pressure tends to zero or +∞. In this Note we show that these models are indeed mathematically well-posed, even in the case of discontinuous gap thickness functions. Our results cover popular models that were not previously analyzed in the literature, such as the Fukui-Kaneko model and the second-order model, among others. To cite this article: G. Buscaglia et al., C. R. Mecanique 333 (2005).

  7. Optimal Scaling of Interaction Effects in Generalized Linear Models

    Science.gov (United States)

    van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.

    2009-01-01

    Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…

  8. Multiple-scale turbulence model in confined swirling jet predictions

    Science.gov (United States)

    Chen, C. P.

    1986-01-01

    A recently developed multiple-scale turbulence model which attempts to circumvent the deficiencies of earlier models by taking nonequilibrium spectral energy transfer into account is presented. The model's validity is tested by predicting the confined swirling coaxial jet flow in a sudden expansion. It is noted that, in order to account for anisotropic turbulence, a full Reynolds stress model is required.

  9. Continental scale modelling of geomagnetically induced currents

    OpenAIRE

    Sakharov Yaroslav; Prácser Ernö; Ádám Antal; Wik Magnus; Pirjola Risto; Viljanen Ari; Katkalov Juri

    2012-01-01

    The EURISGIC project (European Risk from Geomagnetically Induced Currents) aims at deriving statistics of geomagnetically induced currents (GIC) in the European high-voltage power grids. Such a continent-wide system of more than 1500 substations and transmission lines requires updates of the previous modelling, which has dealt with national grids in fairly small geographic areas. We present here how GIC modelling can be conveniently performed on a spherical surface with minor changes in the p...

  10. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Scaling of musculoskeletal models from static and dynamic trials

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    2015-01-01

    Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale ‘generic’ musculoskeletal models to match segment lengths and joint...... parameters to a specific subject and compare the results to a simpler approach based on linear, segment-wise scaling. By incorporating data from functional and standing reference trials, the new scaling approaches reduce the model sensitivity to assumed model marker positions. For validation, we applied all....... The presented methods solve part of this problem and rely less on manual identification of anatomical landmarks in the model. The work represents a step towards a more consistent methodology in musculoskeletal modelling....
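The simpler baseline mentioned here, linear segment-wise scaling, amounts to multiplying each segment's local coordinates by the ratio of the subject's segment length to the generic model's. The segment name and numbers below are hypothetical, chosen only to illustrate the operation.

```python
import numpy as np

def scale_segment(local_points, generic_length, subject_length):
    """Linear segment-wise scaling: multiply every local coordinate of a
    segment (e.g. muscle attachment sites) by the subject-to-generic
    segment length ratio."""
    return np.asarray(local_points, dtype=float) * (subject_length / generic_length)

# Hypothetical thigh segment: generic model 0.40 m long, subject 0.46 m
generic_sites = [[0.02, -0.15, 0.01],
                 [0.00, -0.38, 0.03]]
subject_sites = scale_segment(generic_sites, 0.40, 0.46)
```

The procedures in the abstract go beyond this by estimating joint parameters from functional and standing reference trials, which reduces sensitivity to assumed marker positions that a purely length-ratio approach inherits from the generic model.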

  12. Exploring nonlinear subgrid-scale models and new characteristic length scales for large-eddy simulation

    NARCIS (Netherlands)

    Silvis, Maurits H.; Trias, F. Xavier; Abkar, M.; Bae, H.J.; Lozano-Duran, A.; Verstappen, R.W.C.P.; Moin, Parviz; Urzay, Javier

    2016-01-01

    We study subgrid-scale modeling for large-eddy simulation of anisotropic turbulent flows on anisotropic grids. In particular, we show how the addition of a velocity-gradient-based nonlinear model term to an eddy viscosity model provides a better representation of energy transfer. This is shown to

  13. Multi-scale modeling for sustainable chemical production

    DEFF Research Database (Denmark)

    Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus

    2013-01-01

    associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow......, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations...... and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production....

  14. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects...... of experimental task (i.e., real-time vs. annotated segmentation), nor of musicianship on boundary perception are clear. Our study assesses musicianship effects and differences between segmentation tasks. We conducted a real-time experiment to collect segmentations by musicians and nonmusicians from nine musical...... indication density, although this might be contingent on stimuli and other factors. In line with other studies, no musicianship effects were found: our results showed high agreement between groups and similar inter-subject correlations. Also consistent with previous work, time scales between one and two...

  15. Continental scale modelling of geomagnetically induced currents

    Directory of Open Access Journals (Sweden)

    Sakharov Yaroslav

    2012-09-01

    The EURISGIC project (European Risk from Geomagnetically Induced Currents) aims at deriving statistics of geomagnetically induced currents (GIC) in the European high-voltage power grids. Such a continent-wide system of more than 1500 substations and transmission lines requires updates of the previous modelling, which has dealt with national grids in fairly small geographic areas. We present here how GIC modelling can be conveniently performed on a spherical surface with minor changes in the previous technique. We derive the exact formulation to calculate geovoltages on the surface of a sphere and show its practical approximation in a fast vectorised form. Using the model of the old Finnish power grid and a much larger prototype model of European high-voltage power grids, we validate the new technique by comparing it to the old one. We also compare model results to measured data in the following cases: geoelectric field at the Nagycenk observatory, Hungary; GIC at a Russian transformer; GIC along the Finnish natural gas pipeline. In all cases, the new method works reasonably well.
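The geovoltage calculation referred to in the abstract is, at its core, a path integral of the horizontal geoelectric field along the transmission line, V = ∫ E · dl. Below is a minimal flat-Earth sketch with a hypothetical uniform field; the paper's contribution is doing this calculation properly on a spherical surface, which this toy version does not attempt.

```python
def geovoltage(E_east, E_north, segments):
    """Approximate V = integral of E . dl along a polyline, where each segment
    is (d_east_km, d_north_km) and the field components (V/km) are taken as
    constant over each segment. Flat-Earth toy version; values hypothetical."""
    return sum(E_east * de + E_north * dn for de, dn in segments)

# A uniform 1 V/km eastward field along a 100 km eastward line:
V = geovoltage(1.0, 0.0, [(50.0, 0.0), (50.0, 0.0)])
print(V)  # -> 100.0
```

In a real GIC model the field varies along the path and the geometry is spherical, so each segment would carry its own field value and the segment vectors would come from great-circle geometry.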

  16. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
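The quoted length scale can be reproduced directly from the numbers in the abstract. The Froude-similitude relations below are the conventional choice for mooring-line dynamics and are stated here as an assumption, not taken from the paper itself.

```python
import math

L_full, L_model = 1240.0, 33.0   # prototype and model cable lengths (m), from the abstract
lam = L_full / L_model           # geometric length scale
print(round(lam, 1))             # -> 37.6, i.e. the quoted scale 1:37.6

# Under Froude similitude, velocities (and elastic wave celerities) scale as
# sqrt(lam), time as sqrt(lam), and forces as lam**3 (same fluid density assumed).
celerity_scale = math.sqrt(lam)
time_scale = math.sqrt(lam)
force_scale = lam ** 3
```

The celerity relation is the point of the experiment: a chain whose longitudinal wave speed obeys the sqrt(lam) rule reproduces snatch-load shock waves at model scale.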

  17. Flavor Gauge Models Below the Fermi Scale

    Energy Technology Data Exchange (ETDEWEB)

    Babu, K. S. [Oklahoma State U.; Friedland, A. [SLAC; Machado, P. A.N. [Madrid, IFT; Mocioiu, I. [Penn State U.

    2017-05-04

    The mass and weak interaction eigenstates for the quarks of the third generation are very well aligned, an empirical fact for which the Standard Model offers no explanation. We explore the possibility that this alignment is due to an additional gauge symmetry in the third generation. Specifically, we construct and analyze an explicit, renormalizable model with a gauge boson, $X$, corresponding to the $B-L$ symmetry of the third family. Having a relatively light (in the MeV to multi-GeV range), flavor-nonuniversal gauge boson results in a variety of constraints from different sources. By systematically analyzing 20 different constraints, we identify the most sensitive probes: kaon, $D^+$ and $\Upsilon$ decays, $D^0-\bar{D}^0$ mixing, atomic parity violation, and neutrino scattering and oscillations. For the new gauge coupling $g_X$ in the range $(10^{-2} - 10^{-4})$ the model is shown to be consistent with the data. Possible ways of testing the model in $b$ physics, top and $Z$ decays, direct collider production and neutrino oscillation experiments, where one can observe nonstandard matter effects, are outlined. The choice of leptons to carry the new force is ambiguous, resulting in additional phenomenological implications, such as non-universality in semileptonic bottom decays. The proposed framework provides interesting connections between neutrino oscillations, flavor and collider physics.

  18. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  19. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased relative to the background methane chemistry by 26±9 Tg(O3), from 273 to an average over the sensitivity runs of 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and points towards specific processes in need of focused future work.

  20. Modeling Human Behavior at a Large Scale

    Science.gov (United States)

    2012-01-01

    impacts its recognition performance for both activities. The example we just gave illustrates one type of freeing false positives. The hallucinated freeings ... vision have worked on the problem of recognizing events in videos of sporting events, such as impressive recent work on learning models of baseball plays ... data can only be disambiguated by considering arbitrarily long temporal sequences. In general, however, both our work and that in machine vision

  1. Anomalous scaling in an age-dependent branching model

    OpenAIRE

    Keller-Schmidt, Stephanie; Tugrul, Murat; Eguíluz, Víctor M.; Hernández-García, Emilio; Klemm, Konstantin

    2010-01-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age $\\tau$ as $\\tau^{-\\alpha}$. Depending on the exponent $\\alpha$, the scaling of tree depth with tree size $n$ displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition ($\\alpha=1$) tree depth grows as $(\\log n)^2$. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
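A discrete-time reading of this growth model can be sketched in a few lines. The attachment rule below (a new node joins an existing node with probability proportional to that node's age raised to the power -α) is our interpretation of the abstract, and depth statistics at small n are only indicative of the asymptotic scaling.

```python
import random

def grow_tree(n, alpha, seed=0):
    """Grow a tree of n nodes; each new node attaches to an existing node i
    chosen with probability proportional to age**(-alpha), where the age of
    node i at step t is t - i (hypothetical discrete-time reading of the model).
    Returns the tree depth (maximum node depth)."""
    rng = random.Random(seed)
    depth = [0]                       # depth of each node; node 0 is the root
    for t in range(1, n):
        weights = [(t - i) ** (-alpha) for i in range(t)]
        total = sum(weights)
        r, acc, parent = rng.random() * total, 0.0, 0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                parent = i
                break
        depth.append(depth[parent] + 1)
    return max(depth)

# alpha = 0 reduces to the uniform random recursive tree (logarithmic depth);
# large alpha favors the newest branches, producing much deeper trees.
```

This toy version uses linear-time sampling per step, which is fine for small n; studying the (log n)^2 transition at α = 1 would require larger trees and many realizations.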

  2. Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications

    DEFF Research Database (Denmark)

    Liu, Yubao; Warner, Tom; Liu, Yuewei

    2011-01-01

    This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center...... grids and seamlessly providing realistic mesoscale weather forcing to drive a large eddy simulation (LES) model within the WRF framework. The WRF based RTFDDA LES modeling capability is referred to as WRF–RTFDDA–LES. In this study, WRF–RTFDDA–LES is employed to simulate real weather in a major wind farm...... located in northern Colorado with six nested domains. The grid sizes of the nested domains are 30, 10, 3.3, 1.1, 0.370 and 0.123 km, respectively. The model results are compared with wind–farm anemometer measurements and are found to capture many intra-farm wind features and microscale flows. Additional...

  3. Fractal Modeling and Scaling in Natural Systems - Editorial

    Science.gov (United States)

    The special issue of Ecological complexity journal on Fractal Modeling and Scaling in Natural Systems contains representative examples of the status and evolution of data-driven research into fractals and scaling in complex natural systems. The editorial discusses contributions to understanding rela...

  4. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    ... time scales involved in determining macroscopic properties has been attempted by several workers with varying degrees of success. This paper will review the recently developed quasicontinuum method which is an attempt to bridge the length scales in a single seamless model with the aid of the finite element method.

  5. Scaling Properties of a Hybrid Fermi-Ulam-Bouncer Model

    Directory of Open Access Journals (Sweden)

    Diego F. M. Oliveira

    2009-01-01

    under the framework of scaling description. The model is described by using a two-dimensional nonlinear area preserving mapping. Our results show that the chaotic regime below the lowest energy invariant spanning curve is scaling invariant and the obtained critical exponents are used to find a universal plot for the second momenta of the average velocity.
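For readers unfamiliar with this class of systems, the simplified Fermi-Ulam map below is a standard example of a two-dimensional nonlinear area-preserving mapping of the kind analyzed in such studies. It is not the hybrid Fermi-Ulam-bouncer map of the paper itself, and the parameter values are illustrative.

```python
import math

def fermi_ulam(v0, phi0, eps, n):
    """Iterate the simplified Fermi-Ulam map (static-wall approximation): a
    particle bounces between a fixed wall and a wall oscillating with
    amplitude eps. State is (velocity v, collision phase phi)."""
    v, phi = v0, phi0
    for _ in range(n):
        v = abs(v + eps * math.sin(phi))          # velocity change at collision
        phi = (phi + 2.0 / v) % (2.0 * math.pi)   # phase advance between collisions
    return v, phi

v, phi = fermi_ulam(1.0, 0.5, 0.001, 500)
```

For small eps the velocity performs a bounded, quasi-random walk in the chaotic sea below the first invariant spanning curve, which is the regime whose scaling invariance the paper characterizes.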

  6. Ares I Scale Model Acoustic Test Lift-Off Acoustics

    Science.gov (United States)

    Counter, Douglas D.; Houston, Janie D.

    2011-01-01

    The lift-off acoustic (LOA) environment is an important design factor for any launch vehicle. For the Ares I vehicle, the LOA environments were derived by scaling flight data from other launch vehicles. The Ares I LOA predicted environments are compared to the Ares I Scale Model Acoustic Test (ASMAT) preliminary results.

  7. Advances in Modelling of Large Scale Coastal Evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.

    1995-01-01

    The attention for climate change impact on the world's coastlines has established large scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large scale coastal evolution, are discussed.

  8. Visualization and modeling of smoke transport over landscape scales

    Science.gov (United States)

    Glenn P. Forney; William Mell

    2007-01-01

    Computational tools have been developed at the National Institute of Standards and Technology (NIST) for modeling fire spread and smoke transport. These tools have been adapted to address fire scenarios that occur in the wildland urban interface (WUI) over kilometer-scale distances. These models include the smoke plume transport model ALOFT (A Large Open Fire plume...

  9. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  10. Meso-scale modeling of a forested landscape

    DEFF Research Database (Denmark)

    Dellwik, Ebba; Arnqvist, Johan; Bergström, Hans

    2014-01-01

    Meso-scale models are increasingly used for estimating wind resources for wind turbine siting. In this study, we investigate how the Weather Research and Forecasting (WRF) model performs using standard model settings in two different planetary boundary layer schemes for a forested landscape and how...

  11. Genome-scale modeling for metabolic engineering.

    Science.gov (United States)

    Simeonidis, Evangelos; Price, Nathan D

    2015-03-01

    We focus on the application of constraint-based methodologies and, more specifically, flux balance analysis in the field of metabolic engineering, and enumerate recent developments and successes of the field. We also review computational frameworks that have been developed with the express purpose of automatically selecting optimal gene deletions for achieving improved production of a chemical of interest. The application of flux balance analysis methods in rational metabolic engineering requires a metabolic network reconstruction and a corresponding in silico metabolic model for the microorganism in question. For this reason, we additionally present a brief overview of automated reconstruction techniques. Finally, we emphasize the importance of integrating metabolic networks with regulatory information-an area which we expect will become increasingly important for metabolic engineering-and present recent developments in the field of metabolic and regulatory integration.
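The core idea of flux balance analysis can be shown on a pathway small enough to solve by inspection. Real genome-scale models solve a linear program over the full stoichiometric matrix; the linear chain and capacity numbers below are hypothetical.

```python
# Toy flux balance analysis on a linear pathway: substrate uptake -> v1 -> biomass.
# At steady state every internal metabolite has zero net production, so all
# fluxes along the chain are equal and the maximal biomass flux is set by the
# tightest capacity bound. (Genome-scale models generalize this to a full LP.)

upper_bounds = {"uptake": 10.0, "v1": 8.0, "v_biomass": 100.0}  # hypothetical capacities

def max_biomass_linear_chain(bounds):
    # Steady state forces uptake = v1 = v_biomass, so the optimum is the minimum bound.
    return min(bounds.values())

print(max_biomass_linear_chain(upper_bounds))  # -> 8.0
```

The gene-deletion frameworks mentioned in the abstract work by modifying such bounds (a knocked-out reaction gets an upper bound of zero) and re-solving the optimization.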

  12. Modelling of evapotranspiration at field and landscape scales. Abstract

    DEFF Research Database (Denmark)

    Overgaard, Jesper; Butts, M.B.; Rosbjerg, Dan

    2002-01-01

    The overall aim of this project is to couple a non-hydrostatic atmospheric model (ARPS) to an integrated hydrological model (MIKE SHE) to investigate atmospheric and hydrological feedbacks at different scales. To ensure a consistent coupling, a new land-surface component based on a modified...... Shuttleworth-Wallace scheme was implemented in MIKE SHE. To validate the new land-surface component at different scales, the hydrological model was applied to an intensively monitored 10 km2 agricultural area in Denmark with a resolution of 40 meters. The model is forced with half-hourly meteorological...... observations from a nearby weather station. Detailed land-use and soil maps were used to set up the model. Leaf area index was derived from NDVI (Normalized Difference Vegetation Index) images. To validate the model at field scale, the simulated evapotranspiration rates were compared to eddy...

  13. Multi-Scale Computational Models for Electrical Brain Stimulation

    Science.gov (United States)

    Seo, Hyeon; Jun, Sung C.

    2017-01-01

    Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have proposed computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we overview here recent multi-scale modeling studies; we focused on approaches that coupled a simplified or high-resolution volume conductor head model and multi-compartmental models of cortical neurons, and constructed realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476

  14. Predictions of a model of weak scale from dynamical breaking of scale invariance

    Directory of Open Access Journals (Sweden)

    Giulio Maria Pelaggi

    2015-04-01

    We consider a model where the weak and the DM scale arise at one loop from the Coleman–Weinberg mechanism. We perform a precision computation of the model predictions for the production cross section of a new Higgs-like scalar and for the direct-detection cross section of the DM particle candidate.

  15. Measurement of returns to scale in radial DEA models

    Science.gov (United States)

    Krivonozhko, V. E.; Lychev, A. V.; Førsund, F. R.

    2017-01-01

    A general approach is proposed in order to measure returns to scale and scale elasticity at projection points in radial data envelopment analysis (DEA) models. In the first stage, a relative interior point belonging to the optimal face is found using a specially elaborated method. In previous work it was proved that any relative interior point of a face has the same returns to scale as any other interior point of that face. In the second stage, the returns to scale are determined at the relative interior point found in the first stage.
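As background for the radial models discussed here: in the special case of one input and one output, the CCR (constant-returns) radial efficiency score reduces to a simple ratio comparison, which a short script can illustrate. The data are hypothetical; the general multi-input, multi-output case requires solving one linear program per decision-making unit (DMU).

```python
# Single-input, single-output CCR DEA: a DMU's radial efficiency is its
# output/input ratio divided by the best ratio observed in the sample.

dmus = {"A": (2.0, 4.0), "B": (3.0, 9.0), "C": (5.0, 10.0)}  # (input x, output y)

best = max(y / x for x, y in dmus.values())
eff = {name: (y / x) / best for name, (x, y) in dmus.items()}
print(eff["B"])  # -> 1.0  (B attains the best ratio, 3.0)
```

Returns-to-scale analysis, the subject of the paper, goes beyond this score: it examines how the efficient frontier behaves locally around each DMU's projection point, which is why the interior-point construction above matters.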

  16. Phenomenological Aspects of No-Scale Inflation Models

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V; Olive, Keith A

    2015-01-01

    We discuss phenomenological aspects of no-scale supergravity inflationary models motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$ that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type $m_0 = B_0 = A_0 = 0$, of the CMSSM type with universal $A_0$ and $m_0$ ...

  17. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    The scale effect is one of the important scientific problems of remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-dependent behavior of retrievals across an entire series of scales; moreover, they face serious parameter-correction issues (e.g., geometrical and spectral correction) because imaging parameters vary between sensors. Using a single-sensor image, a fractal methodology was applied to address these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists and can be described by a fractal model of continuous scaling; and (2) the fractal method is suitable for the validation of NDVI. These results show that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.
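The scale effect exploited in such studies is easy to demonstrate: because NDVI is a nonlinear function of the band values, aggregating reflectances to a coarser pixel and then computing NDVI differs from aggregating the fine-scale NDVI values. A toy 2x2 example with hypothetical reflectances:

```python
# NDVI = (NIR - Red) / (NIR + Red); band values below are hypothetical.
nir = [0.5, 0.8, 0.6, 0.3]
red = [0.1, 0.2, 0.4, 0.1]

ndvi = [(n - r) / (n + r) for n, r in zip(nir, red)]
mean_ndvi = sum(ndvi) / len(ndvi)                          # aggregate the fine-scale retrieval

nir_bar, red_bar = sum(nir) / 4, sum(red) / 4
ndvi_of_means = (nir_bar - red_bar) / (nir_bar + red_bar)  # retrieve at the coarse scale

print(mean_ndvi, ndvi_of_means)  # the two differ: NDVI is scale-dependent
```

A fractal scaling model, as in the paper, aims to describe how such retrievals change continuously as the aggregation scale grows, rather than comparing just two levels.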

  18. Ecohydrological modeling for large-scale environmental impact assessment.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. FINAL REPORT: Mechanistically-Based Field Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Brian D. [Oregon State Univ., Corvallis, OR (United States)

    2013-11-04

    Biogeochemical reactive transport processes in the subsurface environment are important to many contemporary environmental issues of significance to DOE. Quantification of risks and impacts associated with environmental management options, and design of remediation systems where needed, require that we have at our disposal reliable predictive tools (usually in the form of numerical simulation models). However, it is well known that even the most sophisticated reactive transport models available today have poor predictive power, particularly when applied at the field scale. Although the lack of predictive ability is associated in part with our inability to characterize the subsurface and limitations in computational power, significant advances have been made in both of these areas in recent decades and can be expected to continue. In this research, we examined the upscaling (pore to Darcy, and Darcy to field) of the problem of bioremediation via biofilms in porous media. The principal idea was to start with a conceptual description of the bioremediation process at the pore scale, and apply upscaling methods to formally develop the appropriate upscaled model at the so-called Darcy scale. The purpose was to determine (1) what forms the upscaled models would take, and (2) how one might parameterize such upscaled models for applications to bioremediation in the field. We were able to effectively upscale the bioremediation process to explain how the pore-scale phenomena were linked to the field scale. The end product of this research was a set of upscaled models that could be used to help predict field-scale bioremediation. These models were mechanistic, in the sense that they directly incorporated pore-scale information, but upscaled so that only the essential features of the process were needed to predict the effective parameters that appear in the model. In this way, a direct link between the microscale and the field scale was made, but the upscaling process

  20. Cancer systems biology and modeling: microscopic scale and multiscale approaches.

    Science.gov (United States)

    Masoudi-Nejad, Ali; Bidkhori, Gholamreza; Hosseini Ashtiani, Saman; Najafi, Ali; Bozorgmehr, Joseph H; Wang, Edwin

    2015-02-01

    Cancer has become known as a complex and systematic disease on macroscopic, mesoscopic and microscopic scales. Systems biology employs state-of-the-art computational theories and high-throughput experimental data to model and simulate complex biological procedures such as cancer, which involves genetic and epigenetic, in addition to intracellular and extracellular, complex interaction networks. In this paper, different systems biology modeling techniques such as systems of differential equations, stochastic methods, Boolean networks, Petri nets, cellular automata methods and agent-based systems are concisely discussed. We have compared the mentioned formalisms and tried to address the span of applicability they can bear on emerging cancer modeling and simulation approaches. The different scales of cancer modeling, namely microscopic, mesoscopic and macroscopic, are explained, followed by an illustration of angiogenesis at the microscopic scale of cancer modeling. Then, the modeling of cancer cell proliferation and survival is examined on the microscopic scale, and the modeling of multiscale tumor growth is explained along with its advantages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Nucleon electric dipole moments in high-scale supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),Nagoya University,Nagoya 464-8602 (Japan); Department of Physics, Nagoya University,Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University,Nagoya 464-8602 (Japan)

    2015-11-12

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in the high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in the generic high-scale SUSY models, the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron EDM in order to discriminate among the high-scale SUSY models.

  2. Standard model with spontaneously broken quantum scale invariance

    Science.gov (United States)

    Ghilencea, D. M.; Lalak, Z.; Olszewski, P.

    2017-09-01

    We explore the possibility that scale symmetry is a quantum symmetry that is broken only spontaneously and apply this idea to the standard model. We compute the quantum corrections to the potential of the Higgs field (ϕ) in the classically scale-invariant version of the standard model (m_ϕ = 0 at tree level) extended by the dilaton (σ). The tree-level potential of ϕ and σ, dictated by scale invariance, may contain nonpolynomial effective operators, e.g., ϕ^6/σ^2, ϕ^8/σ^4, ϕ^10/σ^6, etc. The one-loop scalar potential is scale invariant, since the loop calculations manifestly preserve the scale symmetry, with the dimensional regularization subtraction scale μ generated spontaneously by the dilaton vacuum expectation value, μ ∼ ⟨σ⟩. The Callan-Symanzik equation of the potential is verified in the presence of the gauge, Yukawa, and the nonpolynomial operators. The couplings of the nonpolynomial operators have nonzero beta functions that we can actually compute from the quantum potential. At the quantum level, the Higgs mass is protected by spontaneously broken scale symmetry, even though the theory is nonrenormalizable. We compare the one-loop potential to its counterpart computed in the "traditional" dimensional regularization scheme that breaks scale symmetry explicitly (μ = constant) in the presence at the tree level of the nonpolynomial operators.

  3. Description of Muzzle Blast by Modified Ideal Scaling Models

    Directory of Open Access Journals (Sweden)

    Kevin S. Fansler

    1998-01-01

    Full Text Available Gun blast data from a large variety of weapons are scaled and presented for both the instantaneous energy release and the constant energy deposition rate models. For both ideal explosion models, similar amounts of data scatter occur for the peak overpressure, but the instantaneous energy release model correlated the impulse data significantly better, particularly for the region in front of the gun. Two parameters that characterize gun blast are used in conjunction with the ideal scaling models to improve the data correlation. The gun-emptying parameter works particularly well with the instantaneous energy release model: the impulse, especially in the forward direction of the gun, is correlated significantly better when the two are combined. The Mach disc location parameter improves the correlation only marginally. A predictive model is obtained from the modified instantaneous energy release correlation.

  4. On Scaling Modes and Balancing Stochastic, Discretization, and Modeling Error

    Science.gov (United States)

    Brown, J.

    2015-12-01

    We consider accuracy-cost tradeoffs and the problem of finding Pareto-optimal configurations for stochastic forward and inverse problems. As the target accuracy is changed, we should use different physical models, stochastic models, discretizations, and solution algorithms. Across this spectrum we encounter different scientifically relevant scaling modes, and thus different opportunities and limitations on parallel computers and emerging architectures.

  5. A Scale Model of Cation Exchange for Classroom Demonstration.

    Science.gov (United States)

    Guertal, E. A.; Hattey, J. A.

    1996-01-01

    Describes a project that developed a scale model of cation exchange that can be used for a classroom demonstration. The model uses kaolinite clay, nails, plywood, and foam balls to enable students to gain a better understanding of the exchange complex of soil clays. (DDR)

  6. Modeling nano-scale grain growth of intermetallics

    Indian Academy of Sciences (India)

    Administrator

    The Monte Carlo simulation is utilized to model the nano-scale grain growth of two nanocrystalline materials, Pd81Zr19 and RuAl. In this regard, the relationship between the real time and the time unit of simulation, i.e. the Monte Carlo step (MCS), is determined. The results of modeling show that with increasing time ...
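
    As an illustration of the kind of simulation this record describes, here is a minimal, hypothetical 2-D Potts-model sketch of grain growth (the lattice size, number of orientations q, and the T = 0 acceptance rule are my assumptions, not taken from the paper); one Monte Carlo step (MCS) is defined as n×n attempted re-orientations:

```python
import random

def potts_grain_growth(n=32, q=8, steps=20, seed=0):
    """Minimal 2-D Potts-model sketch of grain growth.

    Each lattice site holds a grain orientation (0..q-1); one Monte Carlo
    step (MCS) is n*n attempted re-orientations. A flip to a neighbour's
    orientation is accepted when it does not raise the boundary energy
    (T = 0 limit). Returns the final grid and its total boundary length.
    """
    rng = random.Random(seed)
    grid = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]

    def unlike(i, j, s):
        # boundary-energy contribution: neighbours whose orientation differs from s
        nbrs = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
        return sum(1 for t in nbrs if t != s)

    for _ in range(steps):              # each outer iteration = 1 MCS
        for _ in range(n * n):
            i, j = rng.randrange(n), rng.randrange(n)
            # propose adopting the orientation of a random neighbour
            new = grid[(i - 1) % n][j] if rng.random() < 0.5 else grid[i][(j + 1) % n]
            if unlike(i, j, new) <= unlike(i, j, grid[i][j]):
                grid[i][j] = new

    boundary = sum(unlike(i, j, grid[i][j]) for i in range(n) for j in range(n))
    return grid, boundary
```

    Tracking the total boundary length (a coarse inverse measure of grain size) per MCS is one simple way to relate simulated coarsening to real time once an MCS-to-seconds calibration of the kind the abstract mentions is known.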

  7. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    Unknown

    modeling strategies that bridge the length-scales. The quasicontinuum method pivots on a strategy which attempts to take advantage of both conventional atomistic simulations and continuum mechanics to develop a seamless methodology for the modeling of defects such as dislocations, grain boundaries and cracks, and ...

  8. Role of scaling in the statistical modelling of finance

    Indian Academy of Sciences (India)

    Modelling the evolution of a financial index as a stochastic process is a problem awaiting a full, satisfactory solution since it was first formulated by Bachelier in 1900. Here it is shown that the scaling with time of the return probability density function sampled from the historical series suggests a successful model.

  10. A first large-scale flood inundation forecasting model

    Science.gov (United States)

    Schumann, G. J.-P.; Neal, J. C.; Voisin, N.; Andreadis, K. M.; Pappenberger, F.; Phanthuwongpakdee, N.; Hall, A. C.; Bates, P. D.

    2013-10-01

    At present, continental- to global-scale flood forecasting predicts discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation variables are of interest, and all flood impacts are inherently local in nature. This paper proposes a large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas. The model was built for the Lower Zambezi River to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. ECMWF ensemble forecast (ENS) data were used to force the VIC (Variable Infiltration Capacity) hydrologic model, which simulated and routed daily flows to the input boundary locations of a 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of channels that play a key role in flood wave propagation. We therefore employed a novel subgrid channel scheme to describe the river network in detail while representing the floodplain at an appropriate scale. The modeling system was calibrated using channel water levels from satellite laser altimetry and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of between one and two model resolutions of an observed flood edge, and inundation area agreement was on average 86%. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.

  11. Transdisciplinary application of the cross-scale resilience model

    Science.gov (United States)

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  12. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC, a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. The orbital hybrid IC model scale, IC, and the IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strengths, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanically built-up dual parameters, the model forms a dual method of multiple-functional prediction, which has far more versatile applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with the data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  13. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    focuses on large-scale applications and contributes with methods to actualise the true potential of disaggregate models. To achieve this target, contributions are given to several components of traffic assignment modelling, by (i) enabling the utilisation of the increasingly available data sources...... on individual behaviour in the model specification, (ii) proposing a method to use disaggregate Revealed Preference (RP) data to estimate utility functions and provide evidence on the value of congestion and the value of reliability, (iii) providing a method to account for individual mis...... is essential in the development and validation of realistic models for large-scale applications. Nowadays, modern technology facilitates easy access to RP data and allows large-scale surveys. The resulting datasets are, however, usually very large and hence data processing is necessary to extract the pieces...

  14. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of an observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  15. Observed Scaling in Clouds and Precipitation and Scale Incognizance in Regional to Global Atmospheric Models

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Travis A.; Li, Fuyu; Collins, William D.; Rauscher, Sara; Ringler, Todd; Taylor, Mark; Hagos, Samson M.; Leung, Lai-Yung R.

    2013-12-01

    We use observations of robust scaling behavior in clouds and precipitation to derive constraints on how the partitioning of precipitation should change with model resolution. Our analysis indicates that 90-99% of stratiform precipitation should occur in clouds that are resolvable by contemporary climate models (e.g., with 200 km or finer grid spacing). Furthermore, this resolved fraction of stratiform precipitation should increase sharply with resolution, such that effectively all stratiform precipitation should be resolvable above scales of ~50 km. We show that the Community Atmosphere Model (CAM) and the Weather Research and Forecasting (WRF) model also exhibit the robust cloud and precipitation scaling behavior that is present in observations, yet the resolved fraction of stratiform precipitation actually decreases with increasing model resolution. A suite of experiments with multiple dynamical cores provides strong evidence that this 'scale-incognizant' behavior originates in one of the CAM4 parameterizations. An additional set of sensitivity experiments rules out both convection parameterizations, and by a process of elimination these results implicate the stratiform cloud and precipitation parameterization. Tests with the CAM5 physics package show improvements in the resolution dependence of resolved cloud fraction and resolved stratiform precipitation fraction.

  16. Multi-scale atmospheric composition modelling for the Balkan region

    Science.gov (United States)

    Ganev, Kostadin; Syrakov, Dimiter; Todorova, Angelina; Prodanova, Maria; Atanasov, Emanouil; Gurov, Todor; Karaivanova, Aneta; Miloshev, Nikolai; Gadzhev, Georgi; Jordanov, Georgi

    2010-05-01

    Overview: The present work describes the progress in developing an integrated, multi-scale, Balkan-region-oriented modeling system. The main activities and achievements at this stage of the work are: creating, enriching and updating the necessary physiographic, emission and meteorological databases; installation of the models for GRID application, model tuning and validation; extensive numerical simulations on regional (Balkan Peninsula) and local (Bulgaria) scales. Objectives: The present work describes the progress of an application developed by the Environmental VO of the FP7 project SEE-GRID eInfrastructure for regional eScience. The application aims at developing an integrated, multi-scale, Balkan-region-oriented modelling system, which would be able to: - study the atmospheric pollution transport and transformation processes (accounting also for heterogeneous chemistry and the importance of aerosols for air quality and climate) from urban to local to regional (Balkan) scales; - track and characterize the main pathways and processes that lead to atmospheric composition formation at different scales; - account for the biosphere-atmosphere exchange as a source and receptor of atmospheric chemical species; - provide high-quality, scientifically robust assessments of the air quality and its origin, thus facilitating the formulation of pollution mitigation strategies at national and Balkan level. The application is based on the US EPA Models-3 system. Description of work: The main activities and achievements at this still preparatory stage of the work are: 1.) Creating, enriching and updating the necessary physiographic, emission and meteorological databases; 2.) Installation of the models for GRID application, model tuning and validation, numerical experiments and interpretation of the results: the US EPA Models-3 system is installed; software for emission speciation and for introducing emission temporal profiles is created, a procedure for calculating biogenic VOC

  17. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-04-05

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the "Technical Work Plan for: Performance Assessment Unsaturated Zone" (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, "Coupled Effects on Flow and Seepage". The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, "Models". This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC Model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The

  18. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small-scale neighbourhood effects in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, and the omitted variables are both spatially autocorrelated and correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application to new house price data from Helsinki, Finland, we find the motivation for a spatial Durbin model, estimate the model, and interpret the estimates for the summary measures of impacts. The analysis shows that the model structure makes it possible to capture small-scale neighbourhood effects when we know they exist but lack proper variables to measure them.
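
    For reference, the spatial Durbin model referred to above has the standard specification (W the spatial weight matrix, ρ the spatial autoregressive parameter):

        y = ρ W y + X β + W X θ + ε,   ε ~ N(0, σ² I)

    The spatially lagged regressors W X pick up neighbourhood spillovers, which is why omitted variables that are spatially autocorrelated and correlated with X motivate this form.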

  19. Model Scaling of Hydrokinetic Ocean Renewable Energy Systems

    Science.gov (United States)

    von Ellenrieder, Karl; Valentine, William

    2013-11-01

    Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
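
    The dynamic-scaling idea in this record can be illustrated with classical Froude similitude (a hedged sketch: the exponents below are the textbook constant-Froude-number relations for the same fluid in model and prototype, not the authors' full non-dimensional procedure; the 2 m/s current is an invented example value, while the 400 m and 10 m depths come from the abstract):

```python
import math

# Froude similitude: keep Fr = V / sqrt(g * L) equal in model and prototype.
# With geometric ratio lam = L_prototype / L_model, quantities scale as
# length ~ lam, velocity ~ sqrt(lam), time ~ sqrt(lam), force ~ lam**3.
FROUDE_EXPONENTS = {"length": 1.0, "velocity": 0.5, "time": 0.5, "force": 3.0}

def to_model_scale(prototype_value, quantity, lam):
    """Convert a prototype quantity to model scale under Froude similitude."""
    return prototype_value / lam ** FROUDE_EXPONENTS[quantity]

# 400 m prototype depth reproduced at 10 m model depth -> lam = 40
depth_m = to_model_scale(400.0, "length", 40.0)     # 10.0 m
current_m = to_model_scale(2.0, "velocity", 40.0)   # a 2 m/s prototype current
```

    Scaling all forcing (waves, currents) and response quantities with one consistent length ratio is what allows the scaled model response to be compared directly against the full-sized prototype, as the abstract describes.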

  20. Intermediate time scaling in classical continuous-spin models

    CERN Document Server

    Oh, S K; Chung, J S

    1999-01-01

    The time-dependent total spin correlation functions of the two- and three-dimensional classical XY models seem to have a very narrow first dynamic scaling interval; after this interval, a much broader, anomalous second dynamic scaling interval appears. In this paper, this intriguing feature found in our previous work is re-examined. By introducing a phenomenological characteristic time for this intermediate time interval, the second dynamic scaling behavior can be explained. Moreover, the dynamic critical exponent found from this novel characteristic time is identical to that found from the usual dynamic scaling theory developed in the wave vector and frequency domain. For continuous-spin models, in which the spin variable related to a long-range order parameter is not a constant of motion, our method yields the dynamic critical exponent with less computational effort.

  1. Scale genesis and gravitational wave in a classically scale invariant extension of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Jisuke [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Yamada, Masatoshi [Department of Physics, Kyoto University,Kyoto 606-8502 (Japan); Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, 69120 Heidelberg (Germany)

    2016-12-01

    We assume that the origin of the electroweak (EW) scale is a gauge-invariant scalar-bilinear condensation in a strongly interacting non-abelian gauge sector, which is connected to the standard model via a Higgs portal coupling. The dynamical scale genesis appears as a phase transition at finite temperature, and it can produce a gravitational wave (GW) background in the early Universe. We find that the critical temperature of the scale phase transition lies above that of the EW phase transition and below a few hundred GeV, and that the transition is strongly first order. We calculate the spectrum of the GW background and find that the scale phase transition is strong enough for the GW background to be observable by DECIGO.

  2. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  3. Deconfined Quantum Criticality, Scaling Violations, and Classical Loop Models

    Science.gov (United States)

    Nahum, Adam; Chalker, J. T.; Serna, P.; Ortuño, M.; Somoza, A. M.

    2015-10-01

    Numerical studies of the transition between Néel and valence bond solid phases in two-dimensional quantum antiferromagnets give strong evidence for the remarkable scenario of deconfined criticality, but display strong violations of finite-size scaling that are not yet understood. We show how to realize the universal physics of the Néel-valence-bond-solid (VBS) transition in a three-dimensional classical loop model (this model includes the subtle interference effect that suppresses hedgehog defects in the Néel order parameter). We use the loop model for simulations of unprecedentedly large systems (up to linear size L = 512). Our results are compatible with a continuous transition at which both Néel and VBS order parameters are critical, and we do not see conventional signs of first-order behavior. However, we show that the scaling violations are stronger than previously realized and are incompatible with conventional finite-size scaling, even if allowance is made for a weakly or marginally irrelevant scaling variable. In particular, different approaches to determining the anomalous dimensions η_VBS and η_Néel yield very different results. The assumption of conventional finite-size scaling leads to estimates that drift to negative values at large sizes, in violation of the unitarity bounds. In contrast, the decay with distance of critical correlators on scales much smaller than system size is consistent with large positive anomalous dimensions. Barring an unexpected reversal in behavior at still larger sizes, this implies that the transition, if continuous, must show unconventional finite-size scaling, for example, from an additional dangerously irrelevant scaling variable. Another possibility is an anomalously weak first-order transition. By analyzing the renormalization group flows for the noncompact CP^(n-1) field theory (the n-component Abelian Higgs model) between two and four dimensions, we give the simplest scenario by which an anomalously weak first

  4. Anomalous scaling in an age-dependent branching model.

    Science.gov (United States)

    Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin

    2015-02-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(-α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and algebraic growth. At the transition (α = 1) tree depth grows as (log n)^2. This anomalous scaling is in good agreement with the trend observed in the evolution of biological species, thus providing theoretical support for age-dependent speciation and associating it with the occurrence of a critical point.
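
    A minimal simulation sketch of such a model (the exact update rule below is my assumption; the record only fixes the τ^(-α) branching weight) might look like:

```python
import random

def grow_tree(n_leaves, alpha, seed=0):
    """Grow a binary tree: at each step a leaf of age tau is chosen to
    branch with probability proportional to tau**(-alpha), after which
    every leaf ages by one tick. Returns the depths of all leaves."""
    rng = random.Random(seed)
    leaves = [[0, 1]]                                 # each leaf is [depth, age]
    while len(leaves) < n_leaves:
        weights = [age ** (-alpha) for _, age in leaves]
        k = rng.choices(range(len(leaves)), weights=weights)[0]
        depth, _ = leaves.pop(k)
        leaves += [[depth + 1, 1], [depth + 1, 1]]    # two fresh children
        for leaf in leaves:
            leaf[1] += 1
    return [depth for depth, _ in leaves]

depths = grow_tree(256, alpha=0.0)        # alpha = 0: uniform random growth
mean_depth = sum(depths) / len(depths)    # expected to scale ~ log(n) here
```

    Sweeping alpha and fitting mean depth against n would be one way to probe the logarithmic-to-algebraic transition described in the abstract.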

  5. Criticality in the scale invariant standard model (squared)

    Directory of Open Access Journals (Sweden)

    Robert Foot

    2015-07-01

    Full Text Available We consider first the standard model Lagrangian with the μ_h^2 Higgs potential term set to zero. We point out that this classically scale-invariant theory potentially exhibits radiative electroweak/scale symmetry breaking with a very high vacuum expectation value (VEV) for the Higgs field, ⟨ϕ⟩ ≈ 10^17-10^18 GeV. Furthermore, if such a vacuum were realized then cancellation of vacuum energy automatically implies that this nontrivial vacuum is degenerate with the trivial unbroken vacuum. Such a theory would therefore be critical, with the Higgs self-coupling and its beta function nearly vanishing at the symmetry-breaking minimum, λ(μ = ⟨ϕ⟩) ≈ β_λ(μ = ⟨ϕ⟩) ≈ 0. A phenomenologically viable model that predicts this criticality property arises if we consider two copies of the standard model Lagrangian, with an exact Z2 symmetry swapping each ordinary particle with a partner. The spontaneously broken vacuum can then arise where one sector gains the high-scale VEV, while the other gains the electroweak-scale VEV. The low-scale VEV is perturbed away from zero due to a Higgs portal coupling, or via the usual small Higgs mass terms μ_h^2, which softly break the scale invariance. In either case, the cancellation of vacuum energy requires M_t = (171.53 ± 0.42) GeV, which is close to its measured value of (173.34 ± 0.76) GeV.

  6. Computational Modelling of Cancer Development and Growth: Modelling at Multiple Scales and Multiscale Modelling.

    Science.gov (United States)

    Szymańska, Zuzanna; Cytowski, Maciej; Mitchell, Elaine; Macnamara, Cicely K; Chaplain, Mark A J

    2017-06-20

    In this paper, we present two mathematical models related to different aspects and scales of cancer growth. The first model is a stochastic spatiotemporal model of both a synthetic gene regulatory network (the example of a three-gene repressilator is given) and an actual gene regulatory network, the NF-κB pathway. The second model is a force-based individual-based model of the development of a solid avascular tumour, with specific application to tumour cords, i.e. a mass of cancer cells growing around a central blood vessel. In each case, we compare our computational simulation results with experimental data. In the final discussion section, we outline how to take the work forward through the development of a multiscale model focussed at the cell level. This would incorporate key intracellular signalling pathways associated with cancer within each cell (e.g. p53-Mdm2, NF-κB) and, through the use of high-performance computing, be capable of simulating up to [Formula: see text] cells, i.e. the tissue scale. In this way, mathematical models at multiple scales would be combined to formulate a multiscale computational model.

  7. Automation on the generation of genome-scale metabolic models.

    Science.gov (United States)

    Reyes, R; Gamermann, D; Montagud, A; Fuente, D; Triana, J; Urchueguía, J F; de Córdoba, P Fernández

    2012-12-01

    Nowadays, the reconstruction of genome-scale metabolic models is a non-automated, interactive process based on decision making. This lengthy process usually requires a full year of one person's work in order to satisfactorily collect, analyze, and validate the list of all metabolic reactions present in a specific organism. In order to write this list, one has to go through a huge amount of genomic, metabolomic, and physiological information manually. Currently, there is no optimal algorithm that allows one to automatically go through all this information and generate the models, taking into account the probabilistic criteria of unicity and completeness that a biologist would consider. This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automated version of the steps implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, Synechocystis sp. PCC6803. The steps for the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, such as connectivity and average shortest mean path, have been compared and analyzed.

  8. ScaleNet: A literature-based model of scale insect biology and systematics

    Science.gov (United States)

    Scale insects (Hemiptera: Coccoidea) are small herbivorous insects found in all continents except Antarctica. They are extremely invasive, and many species are serious agricultural pests. They are also emerging models for studies of the evolution of genetic systems, endosymbiosis, and plant-insect i...

  9. From Field- to Landscape-Scale Vadose Zone Processes: Scale Issues, Modeling, and Monitoring

    NARCIS (Netherlands)

    Corwin, D.L.; Hopmans, J.; Rooij, de G.H.

    2006-01-01

    Modeling and monitoring vadose zone processes across multiple scales is a fundamental component of many environmental and natural resource issues including nonpoint source (NPS) pollution, watershed management, and nutrient management, to mention just a few. In this special section in Vadose Zone

  10. Scaling of Core Material in Rubble Mound Breakwater Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Z.; Troch, P.

    1999-01-01

    The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity are varying in space and time. This makes it impossible to arrive at a fully...... correct scaling. The paper presents an empirical formula for the estimation of the wave induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation can be used for the estimation of pore velocities in cores. The paper proposes...... that the diameter of the core material in models is chosen in such a way that the Froude scale law holds for a characteristic pore velocity. The characteristic pore velocity is chosen as the average velocity of a most critical area in the core with respect to porous flow. Finally the method is demonstrated...
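The two-step estimate described above — pore velocity from the Forchheimer equation, then Froude scaling of a characteristic velocity — can be sketched as follows. The gradient and Forchheimer coefficients are hypothetical placeholders, not values from the paper:

```python
import math

def pore_velocity(grad, a, b):
    """Solve the Forchheimer equation I = a*u + b*u**2 for the pore
    velocity u (positive root of the quadratic)."""
    return (-a + math.sqrt(a * a + 4.0 * b * grad)) / (2.0 * b)

def froude_scaled_velocity(u_prototype, length_scale):
    """Froude similarity: velocities scale with the square root of the
    geometric length scale."""
    return u_prototype / math.sqrt(length_scale)

# Hypothetical numbers: gradient I = 0.5, a = 2.0 s/m, b = 20.0 s^2/m^2
u_proto = pore_velocity(0.5, 2.0, 20.0)
u_model = froude_scaled_velocity(u_proto, 30.0)  # geometric scale 1:30
```

The model core diameter would then be chosen so that the critical-area pore velocity in the model matches `u_model`.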

  11. Multi Scale Models for Flexure Deformation in Sheet Metal Forming

    Directory of Open Access Journals (Sweden)

    Di Pasquale Edmondo

    2016-01-01

    Full Text Available This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modelling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.

  12. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate......Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...

  13. Evaluation of Icing Scaling on Swept NACA 0012 Airfoil Models

    Science.gov (United States)

    Tsao, Jen-Ching; Lee, Sam

    2012-01-01

    Icing scaling tests in the NASA Glenn Icing Research Tunnel (IRT) were performed on swept wing models using existing recommended scaling methods that were originally developed for straight wings. Some needed modifications of the stagnation-point local collection efficiency (i.e., beta(sub 0)) calculation and the corresponding convective heat transfer coefficient for swept NACA 0012 airfoil models were studied and reported in 2009, and those correlations are used in the current study. The reference tests used a 91.4-cm chord, 152.4-cm span, adjustable-sweep airfoil model of NACA 0012 profile at velocities of 100 and 150 knot and MVDs of 44 and 93 μm. The scale-to-reference model size ratio was 1:2.4. All tests were conducted at 0deg angle of attack (AoA) and 45deg sweep angle. Ice shape comparison results are presented for stagnation-point freezing fractions in the range of 0.4 to 1.0. Preliminary results showed that good scaling was achieved for the conditions tested by using the modified scaling methods developed for swept wings.

  14. Multiple time scales in multi-state models.

    Science.gov (United States)

    Iacobelli, Simona; Carstensen, Bendix

    2013-12-30

    In multi-state models, it has been the tradition to model all transition intensities on one time scale, usually the time since entry into the study (the 'clock-forward' approach). The effect of time since an intermediate event has been accommodated either by changing the time scale to time since entry to the new state (the 'clock-back' approach) or by including the time at entry to the new state as a covariate. In this paper, we argue that the choice of time scale for the various transitions in a multi-state model should be treated as an empirical question, as should the question of whether a single time scale is sufficient. We illustrate that these questions are best addressed by using parametric models for the transition rates, as opposed to the traditional Cox-model-based approaches. Specific advantages are that the dependence of failure rates on multiple time scales can be made explicit and described in informative graphical displays. Using a single common time scale for all transitions greatly facilitates computation of the probability of being in a particular state at a given time, because the machinery from the theory of Markov chains can be applied. However, a realistic model for transition rates is preferable, especially when the focus is not on prediction of final outcomes from the start but on the analysis of instantaneous risk or on dynamic prediction. We illustrate the various approaches using a data set from stem cell transplantation in leukemia and provide supplementary online material in R. Copyright © 2013 John Wiley & Sons, Ltd.
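The Markov-chain machinery mentioned for computing state-occupation probabilities on a single common time scale can be sketched in Python (the paper's supplementary material is in R). The three-state discrete-time transition matrix below is an invented illness-death example, not data from the paper:

```python
import numpy as np

# Hypothetical illness-death model: states 0=healthy, 1=ill, 2=dead,
# with a discrete-time transition matrix P on one common time scale.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
])

def state_probs(p0, P, steps):
    """Probability of occupying each state after `steps` time units,
    obtained by repeated application of the transition matrix."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p @ P
    return p

p10 = state_probs([1.0, 0.0, 0.0], P, 10)  # start everyone healthy
```

In continuous time the same computation uses the matrix exponential of the intensity matrix; the discrete form above shows the idea.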

  15. A catchment scale water balance model for FIFE

    Science.gov (United States)

    Famiglietti, J. S.; Wood, E. F.; Sivapalan, M.; Thongs, D. J.

    1992-01-01

    A catchment scale water balance model is presented and used to predict evaporation from the King's Creek catchment at the First ISLSCP Field Experiment site on the Konza Prairie, Kansas. The model incorporates spatial variability in topography, soils, and precipitation to compute the land surface hydrologic fluxes. A network of 20 rain gages was employed to measure rainfall across the catchment in the summer of 1987. These data were spatially interpolated and used to drive the model during storm periods. During interstorm periods the model was driven by the estimated potential evaporation, which was calculated using net radiation data collected at site 2. Model-computed evaporation is compared to that observed, both at site 2 (grid location 1916-BRS) and the catchment scale, for the simulation period from June 1 to October 9, 1987.
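Spatial interpolation of rain gauge readings, as used to drive the model during storm periods, can be sketched with inverse-distance weighting. This is one common choice — the abstract does not specify the interpolation scheme — and the gauge coordinates below are invented:

```python
def idw(x, y, gauges, power=2.0):
    """Inverse-distance-weighted rainfall estimate at point (x, y).

    gauges: list of (gauge_x, gauge_y, rain_mm) tuples."""
    num = den = 0.0
    for gx, gy, rain in gauges:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        if d2 == 0.0:
            return rain  # query point sits exactly on a gauge
        w = d2 ** (-power / 2.0)
        num += w * rain
        den += w
    return num / den

# Two hypothetical gauges one unit apart
gauges = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
```

Midway between two equal-weight gauges the estimate is simply their mean.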

  16. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous yet sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  17. Wind Farm Wake Models From Full Scale Data

    DEFF Research Database (Denmark)

    Knudsen, Torben; Bak, Thomas

    2012-01-01

    This investigation is part of the EU FP7 project “Distributed Control of Large-Scale Offshore Wind Farms”. The overall goal in this project is to develop wind farm controllers giving power set points to individual turbines in the farm in order to minimise mechanical loads and optimise power. One...... on real full scale data. The modelling is based on so called effective wind speed. It is shown that there is a wake for a wind direction range of up to 20 degrees. Further, when accounting for the wind direction it is shown that the two model structures considered can both fit the experimental data...
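A single-turbine effective wind speed in a wake can be sketched with the classical Jensen (Park) wake model. This is a standard textbook model, not necessarily either of the two model structures fitted in the paper, and the turbine parameters below are invented:

```python
import math

def jensen_deficit(ct, k, x, d):
    """Fractional wind-speed deficit a distance x downstream of a turbine
    with thrust coefficient ct, rotor diameter d, and wake-decay constant k."""
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x / d) ** 2

def effective_wind_speed(u_free, ct=0.8, k=0.05, x=400.0, d=80.0):
    """Effective wind speed at a downstream turbine fully inside the wake."""
    return u_free * (1.0 - jensen_deficit(ct, k, x, d))
```

Accounting for wind direction, as in the paper, amounts to applying the deficit only when the downstream rotor overlaps the expanding wake cone.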

  18. Scalar dark matter in scale invariant standard model

    Energy Technology Data Exchange (ETDEWEB)

    Ghorbani, Karim [Physics Department, Faculty of Sciences,Arak University, Arak 38156-8-8349 (Iran, Islamic Republic of); Ghorbani, Hossein [Institute for Research in Fundamental Sciences (IPM),School of Particles and Accelerators, P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2016-04-05

    We investigate single- and two-component scalar dark matter scenarios in the classically scale-invariant standard model, which is free of the hierarchy problem in the Higgs sector. We show that despite the very restricted parameter space imposed by the scale invariance symmetry, both single- and two-component scalar dark matter models overcome the direct and indirect constraints provided by the Planck/WMAP observational data and the LUX/Xenon100 experiments. We also comment on the radiative mass corrections of the classically massless scalon, which plays a crucial role in our study.

  19. Use of genome-scale microbial models for metabolic engineering

    DEFF Research Database (Denmark)

    Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens

    2004-01-01

    network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction.......Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role for phenotypic analysis, but can also be used for the design of optimal metabolic...
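Genome-scale stoichiometric models are typically exploited through flux balance analysis: maximize a biomass flux subject to the steady-state mass balance S·v = 0 and capacity bounds. A minimal sketch on an invented three-reaction toy network (uptake → A, A → B, B → biomass), using scipy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix for a hypothetical toy network:
# EX_A: -> A,   R_AB: A -> B,   BIO: B -> (biomass drain)
S = np.array([
    [1.0, -1.0,  0.0],   # metabolite A balance
    [0.0,  1.0, -1.0],   # metabolite B balance
])
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10 flux units
c = [0.0, 0.0, -1.0]                      # maximize v_BIO = minimize -v_BIO

res = linprog(c, A_eq=S, b_eq=[0.0, 0.0], bounds=bounds)
```

With the uptake bound at 10, the whole pathway carries flux 10 at the optimum; real genome-scale models pose the same LP over thousands of reactions.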

  20. Low-scale inflation and supersymmetry breaking in racetrack models

    Science.gov (United States)

    Allahverdi, Rouzbeh; Dutta, Bhaskar; Sinha, Kuver

    2010-04-01

    In many moduli stabilization schemes in string theory, the scale of inflation appears to be of the same order as the scale of supersymmetry breaking. For low-scale supersymmetry breaking, therefore, the scale of inflation should also be low, unless this correlation is avoided in specific models. We explore such a low-scale inflationary scenario in a racetrack model with a single modulus in type IIB string theory. Inflation occurs near a point of inflection in the Kähler modulus potential. Obtaining acceptable cosmological density perturbations leads to the introduction of magnetized D7-branes sourcing nonperturbative superpotentials. The gravitino mass, m3/2, is chosen to be around 30 TeV, so that gravitinos that are produced in the inflaton decay do not affect big-bang nucleosynthesis. Supersymmetry is communicated to the visible sector by a mixture of anomaly and modulus mediation. We find that the two sources contribute equally to the gaugino masses, while scalar masses are decided mainly by anomaly contribution. This happens as a result of the low scale of inflation and can be probed at the LHC.

  1. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements at the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
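Kronecker product modeling, mentioned above, grows a graph by repeatedly taking the Kronecker power of a small initiator adjacency matrix. A deterministic sketch (the stochastic variant replaces the 0/1 entries with edge probabilities):

```python
import numpy as np

def kronecker_adjacency(initiator, k):
    """k-fold Kronecker power of an initiator adjacency matrix, the
    deterministic core of R-MAT/Kronecker graph generation."""
    A = initiator.copy()
    for _ in range(k - 1):
        A = np.kron(A, initiator)
    return A

# 2x2 initiator with 3 edges; the k-th power has 2**k nodes and 3**k edges
init = np.array([[1, 1],
                 [1, 0]])
A3 = kronecker_adjacency(init, 3)
```

The edge count multiplies at every level (nnz of a Kronecker product is the product of the factors' nnz), which is what produces the heavy-tailed degree structure.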

  2. Ares I Scale Model Acoustic Test Overpressure Results

    Science.gov (United States)

    Casiano, M. J.; Alvord, D. A.; McDaniels, D. M.

    2011-01-01

    A summary of the overpressure environment from the 5% Ares I Scale Model Acoustic Test (ASMAT) and the implications to the full-scale Ares I are presented in this Technical Memorandum. These include the scaled environment that would be used for assessing the full-scale Ares I configuration, observations, and team recommendations. The ignition transient is first characterized and described, the overpressure suppression system configuration is then examined, and the final environment characteristics are detailed. The recommendation for Ares I is to keep the space shuttle heritage ignition overpressure (IOP) suppression system (below-deck IOP water in the launch mount and mobile launcher and also the crest water on the main flame deflector) and the water bags.

  3. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    plant [3]. The goal of the project is to utilize realtime data extracted from the large scale facility to formulate and validate first principle dynamic models of the plant. These models are then further exploited to derive model-based tools for process optimization, advanced control and real...... with building a plantwide model-based optimization layer, which searches for optimal values regarding the pretreatment temperature, enzyme dosage in liquefaction, and yeast seed in fermentation such that profit is maximized [7]. When biomass is pretreated, by-products are also created that affect the downstream...

  4. Design and Modelling of Small Scale Low Temperature Power Cycles

    DEFF Research Database (Denmark)

    Wronski, Jorrit

    The work presented in this report contributes to the state of the art within design and modelling of small scale low temperature power cycles. The study is divided into three main parts: (i) fluid property evaluation, (ii) expansion device investigations and (iii) heat exchanger performance. The t...... scale plate heat exchanger. Working towards a validation of heat transfer correlations for ORC conditions, a new test rig was designed and built. The test facility can be used to study heat transfer in both ORC and high temperature heat pump systems.

  5. A scale-free neural network for modelling neurogenesis

    Science.gov (United States)

    Perotti, Juan I.; Tamarit, Francisco A.; Cannas, Sergio A.

    2006-11-01

    In this work we introduce a neural network model for associative memory based on a diluted Hopfield model, which grows through a neurogenesis algorithm that guarantees that the final network is a small-world and scale-free one. We also analyze the storage capacity of the network and prove that its performance is larger than that measured in a randomly dilute network with the same connectivity.
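The growth rule behind a scale-free topology can be sketched with degree-preferential attachment. This is a generic Barabási–Albert-style sketch, not the paper's specific neurogenesis algorithm (which additionally guarantees the small-world property):

```python
import random

def grow_scale_free(n, m=2, seed=42):
    """Grow a network to n nodes; each new node attaches to m existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small complete core of m+1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    targets = [v for e in edges for v in e]  # degree-weighted sampling pool
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

net = grow_scale_free(100)
```

Because every endpoint is appended to the pool, well-connected nodes keep attracting links, yielding the power-law degree distribution characteristic of scale-free networks.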

  6. Multi-scale Modeling of Plasticity in Tantalum.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)

    2015-12-01

    In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, and compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformations at the grain scale and to engineering-scale applications. Furthermore, direct

  7. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  8. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics,King’s College London,WC2R 2LS London (United Kingdom); Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Garcia, Marcos A.G. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics andAstronomy, Texas A& M University,College Station, 77843 Texas (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), Mitchell Campus, Woodlands, 77381 Texas (United States); Academy of Athens, Division of Natural Sciences, 28 Panepistimiou Avenue, 10679 Athens (Greece); Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n{sub s} and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m{sub 0}=B{sub 0}=A{sub 0}=0, of the CMSSM type with universal A{sub 0} and m{sub 0}≠0 at a high scale, and of the mSUGRA type with A{sub 0}=B{sub 0}+m{sub 0} boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m{sub 1/2}≠0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.

  9. Large-Scale Modeling of Wordform Learning and Representation

    Science.gov (United States)

    Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.

    2008-01-01

    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the "sequence encoder" is used to learn…

  10. Small-Scale Helicopter Automatic Autorotation : Modeling, Guidance, and Control

    NARCIS (Netherlands)

    Taamallah, S.

    2015-01-01

    Our research objective consists in developing a, model-based, automatic safety recovery system, for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e. an engine OFF flight condition, that safely flies and lands the helicopter to a pre-specified ground location. In pursuit

  11. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  12. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DEFF Research Database (Denmark)

    King, Zachary A.; Lu, Justin; Dräger, Andreas

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized...... redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases....... Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource...

  13. Disappearing scales in carps: Re-visiting Kirpichnikov's model on the genetics of scale pattern formation

    KAUST Repository

    Casas, Laura

    2013-12-30

    The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids was predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene, called 'N', has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude x nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation. © 2013 Casas et al.

  14. Disappearing scales in carps: re-visiting Kirpichnikov's model on the genetics of scale pattern formation.

    Directory of Open Access Journals (Sweden)

    Laura Casas

    Full Text Available The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation.
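The predicted 25% early lethality follows from simple Mendelian segregation at the 'N' locus: a nude × nude (Nn × Nn) cross yields the lethal NN genotype in one quarter of offspring. A quick sketch of the expected genotype ratios:

```python
from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """Expected genotype ratios of offspring from a single-locus cross,
    e.g. cross('Nn', 'Nn') for a nude x nude mating."""
    offspring = Counter(
        "".join(sorted(pair)) for pair in product(parent1, parent2)
    )
    total = sum(offspring.values())
    return {genotype: n / total for genotype, n in offspring.items()}

ratios = cross("Nn", "Nn")  # 1/4 NN (lethal), 1/2 Nn, 1/4 nn
```

The Hungarian crosses described above are interesting precisely because they depart from this 25% expectation.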

  15. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  16. MODELLING FINE SCALE MOVEMENT CORRIDORS FOR THE TRICARINATE HILL TURTLE

    Directory of Open Access Journals (Sweden)

    I. Mondal

    2016-06-01

    Full Text Available Habitat loss and the destruction of habitat connectivity can lead to species extinction by isolation of population. Identifying important habitat corridors to enhance habitat connectivity is imperative for species conservation by preserving dispersal pattern to maintain genetic diversity. Circuit theory is a novel tool to model habitat connectivity as it considers habitat as an electronic circuit board and species movement as a certain amount of current moving around through different resistors in the circuit. Most studies involving circuit theory have been carried out at small scales on large ranging animals like wolves or pumas, and more recently on tigers. This calls for a study that tests circuit theory at a large scale to model micro-scale habitat connectivity. The present study on a small South-Asian geoemydid, the Tricarinate Hill-turtle (Melanochelys tricarinata, focuses on habitat connectivity at a very fine scale. The Tricarinate has a small body size (carapace length: 127–175 mm and home range (8000–15000 m2, with very specific habitat requirements and movement patterns. We used very high resolution Worldview satellite data and extensive field observations to derive a model of landscape permeability at 1 : 2,000 scale to suit the target species. Circuit theory was applied to model potential corridors between core habitat patches for the Tricarinate Hill-turtle. The modelled corridors were validated by extensive ground tracking data collected using thread spool technique and found to be functional. Therefore, circuit theory is a promising tool for accurately identifying corridors, to aid in habitat studies of small species.

  17. Modelling Fine Scale Movement Corridors for the Tricarinate Hill Turtle

    Science.gov (United States)

    Mondal, I.; Kumar, R. S.; Habib, B.; Talukdar, G.

    2016-06-01

    Habitat loss and the destruction of habitat connectivity can lead to species extinction by isolation of population. Identifying important habitat corridors to enhance habitat connectivity is imperative for species conservation by preserving dispersal pattern to maintain genetic diversity. Circuit theory is a novel tool to model habitat connectivity as it considers habitat as an electronic circuit board and species movement as a certain amount of current moving around through different resistors in the circuit. Most studies involving circuit theory have been carried out at small scales on large ranging animals like wolves or pumas, and more recently on tigers. This calls for a study that tests circuit theory at a large scale to model micro-scale habitat connectivity. The present study on a small South-Asian geoemydid, the Tricarinate Hill-turtle (Melanochelys tricarinata), focuses on habitat connectivity at a very fine scale. The Tricarinate has a small body size (carapace length: 127-175 mm) and home range (8000-15000 m2), with very specific habitat requirements and movement patterns. We used very high resolution Worldview satellite data and extensive field observations to derive a model of landscape permeability at 1 : 2,000 scale to suit the target species. Circuit theory was applied to model potential corridors between core habitat patches for the Tricarinate Hill-turtle. The modelled corridors were validated by extensive ground tracking data collected using thread spool technique and found to be functional. Therefore, circuit theory is a promising tool for accurately identifying corridors, to aid in habitat studies of small species.
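The resistor-network analogy underlying circuit-theory connectivity can be sketched in a few lines. This is an illustrative toy, not the authors' workflow (dedicated tools such as Circuitscape operate on full rasters): each cell of a landscape-resistance grid becomes a node, neighbouring cells are linked by conductances, and solving Kirchhoff's laws gives the effective resistance — a measure of isolation — between two habitat patches.

```python
import numpy as np

def effective_resistance(resistance, src, dst):
    """Effective resistance between two cells of a landscape-resistance grid,
    treating the landscape as a resistor network (circuit theory)."""
    rows, cols = resistance.shape
    n = rows * cols
    idx = lambda r, c: r * cols + c
    L = np.zeros((n, n))                      # graph Laplacian
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):   # 4-neighbour links
                r2, c2 = r + dr, c + dc
                if r2 < rows and c2 < cols:
                    g = 2.0 / (resistance[r, c] + resistance[r2, c2])
                    i, j = idx(r, c), idx(r2, c2)
                    L[i, i] += g; L[j, j] += g
                    L[i, j] -= g; L[j, i] -= g
    # Inject +1 A at src, -1 A at dst, ground dst, and solve for node voltages.
    b = np.zeros(n); b[idx(*src)] = 1.0; b[idx(*dst)] = -1.0
    keep = [k for k in range(n) if k != idx(*dst)]
    v = np.zeros(n)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
    return v[idx(*src)] - v[idx(*dst)]
```

Low effective resistance between two patches marks a strong corridor; mapping the per-link currents obtained from the same solve yields corridor maps of the kind used in such studies.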

  18. Genome-scale constraint-based modeling of Geobacter metallireducens

    Directory of Open Access Journals (Sweden)

    Famili Iman

    2009-01-01

    Full Text Available Abstract Background Geobacter metallireducens was the first organism that can be grown in pure culture to completely oxidize organic compounds with Fe(III) oxide serving as electron acceptor. Geobacter species, including G. sulfurreducens and G. metallireducens, are used for bioremediation and electricity generation from waste organic matter and renewable biomass. The constraint-based modeling approach enables the development of genome-scale in silico models that can predict the behavior of complex biological systems and their responses to the environment. Such a modeling approach was applied to provide physiological and ecological insights on the metabolism of G. metallireducens. Results The genome-scale metabolic model of G. metallireducens was constructed to include 747 genes and 697 reactions. Compared to the G. sulfurreducens model, the G. metallireducens metabolic model contains 118 unique reactions that reflect many of G. metallireducens' specific metabolic capabilities. Detailed examination of the G. metallireducens model suggests that its central metabolism contains several energy-inefficient reactions that are not present in the G. sulfurreducens model. The experimental biomass yield of G. metallireducens growing on pyruvate was lower than the predicted optimal biomass yield. Microarray data of G. metallireducens growing with benzoate and acetate indicated that genes encoding these energy-inefficient reactions were up-regulated by benzoate. These results suggested that the energy-inefficient reactions were likely turned off during G. metallireducens growth with acetate for optimal biomass yield, but were up-regulated during growth with complex electron donors such as benzoate for rapid energy generation. Furthermore, several computational modeling approaches were applied to accelerate G. metallireducens research. For example, growth of G. metallireducens with different electron donors and electron acceptors was studied using the genome-scale
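The constraint-based (flux balance analysis) calculation behind such genome-scale predictions reduces to a linear program: maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. A three-reaction toy network — nothing like the 697-reaction G. metallireducens reconstruction — shows the mechanics, with SciPy's `linprog` standing in for a dedicated solver:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (-> A), conversion (A -> B), biomass drain (B ->).
# Rows of S are metabolites A and B; columns are the three reactions.
S = np.array([[1, -1,  0],
              [0,  1, -1]])
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 mmol/gDW/h
c = [0, 0, -1]                             # linprog minimizes, so negate biomass
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)                               # optimal steady-state flux distribution
```

At steady state all three fluxes must match, so the optimum saturates the uptake bound; in a real reconstruction, gene knockouts and alternative electron acceptors are modeled by editing the bounds of the corresponding reactions.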

  19. Comparing the Hydrologic and Watershed Processes between a Full Scale Stochastic Model Versus a Scaled Physical Model of Bell Canyon

    Science.gov (United States)

    Hernandez, K. F.; Shah-Fairbank, S.

    2016-12-01

    The San Dimas Experimental Forest has been designated as a research area by the United States Forest Service for use as a hydrologic testing facility since 1933 to investigate watershed hydrology of the 27-square-mile area. Incorporation of a computer model lends validity to the testing of the physical model. This study focuses on San Dimas Experimental Forest's Bell Canyon, one of the triad of watersheds contained within the Big Dalton watershed of the San Dimas Experimental Forest. A scaled physical model of Bell Canyon was constructed to highlight watershed characteristics and their individual effects on runoff. The physical model offers a comprehensive visualization of a natural watershed and can vary rainfall intensity, slope, and roughness through interchangeable parts and adjustments to the system. The scaled physical model is validated and calibrated against a HEC-HMS model to assure similitude of the system. Preliminary results of the physical model suggest that a 50-year storm event can be represented by a peak discharge of 2.2 × 10^-3 cfs. When comparing the results to HEC-HMS, this equates to a flow relationship of approximately 1:160,000, which can be used to model other return periods. The completed Bell Canyon physical model can be used for educational instruction in the classroom, outreach in the community, and further research using the model as an accurate representation of the watershed present in the San Dimas Experimental Forest.
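The quoted 1:160,000 flow relationship is a multiplicative similitude ratio, so converting between model and prototype discharges is a one-line scaling. The ratio comes from the abstract; treating it as constant across return periods is the stated assumption:

```python
MODEL_TO_PROTOTYPE = 160_000   # discharge ratio from the HEC-HMS comparison

def prototype_discharge(model_cfs):
    """Scale a discharge measured on the physical model up to the prototype."""
    return model_cfs * MODEL_TO_PROTOTYPE

# 50-year model peak of 2.2e-3 cfs scales to roughly 350 cfs at the prototype.
print(prototype_discharge(2.2e-3))
```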

  20. Large scale modelling of catastrophic floods in Italy

    Science.gov (United States)

    Azemar, Frédéric; Nicótina, Ludovico; Sassi, Maximiliano; Savina, Maurizio; Hilberts, Arno

    2017-04-01

    The RMS European Flood HD model® is a suite of country-scale flood catastrophe models covering 13 countries throughout continental Europe and the UK. The models are developed with the goal of supporting risk assessment analyses for the insurance industry. Within this framework RMS is developing a hydrologic and inundation model for Italy. The model aims at reproducing the hydrologic and hydraulic properties across the domain through a modeling chain. A semi-distributed hydrologic model that allows capturing the spatial variability of the runoff formation processes is coupled with a one-dimensional river routing algorithm and a two-dimensional (depth-averaged) inundation model. This model setup allows capturing the flood risk from both pluvial (overland flow) and fluvial flooding. Here we describe the calibration and validation methodologies for this modelling suite applied to the Italian river basins. The variability that characterizes the domain (in terms of meteorology, topography and hydrologic regimes) requires a modeling approach able to represent a broad range of meteo-hydrologic regimes. The calibration of the rainfall-runoff and river routing models is performed by means of a genetic algorithm that identifies the set of best-performing parameters within the search space over the last 50 years. We first establish the quality of the calibrated parameters on the full hydrologic balance and on individual discharge peaks by comparing extreme statistics to observations over the calibration period at several stations. The model is then used to analyze the major floods in the country; we discuss the different meteorological setups leading to the historical events and the physical mechanisms that induced these floods. We can thus assess the performance of RMS' hydrological model in view of the physical mechanisms leading to flood, and highlight the main controls on flood risk modelling throughout the country. The model's ability to accurately simulate antecedent
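A genetic-algorithm calibration of the kind described can be illustrated with a deliberately small stand-in: a one-parameter linear-reservoir runoff model whose parameter is recovered by maximizing Nash-Sutcliffe efficiency against synthetic observations. Everything here — the toy model, the operators, the settings — is a sketch, not the RMS implementation:

```python
import random

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def toy_runoff(rain, k):
    """One-parameter linear reservoir standing in for the rainfall-runoff model."""
    store, out = 0.0, []
    for p in rain:
        store += p
        q = k * store          # outflow proportional to storage
        store -= q
        out.append(q)
    return out

def calibrate(rain, obs, pop_size=30, generations=40, seed=1):
    """Tiny elitist GA searching the one-dimensional parameter space."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.01, 0.99) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda k: -nse(toy_runoff(rain, k), obs))
        elite = pop[: pop_size // 3]       # survivors
        # Restock with averaged ("crossover") and perturbed ("mutation") children.
        pop = elite + [
            min(0.99, max(0.01,
                (rng.choice(elite) + rng.choice(elite)) / 2 + rng.gauss(0, 0.05)))
            for _ in range(pop_size - len(elite))
        ]
    return pop[0]              # best individual found

rain = [5, 0, 10, 3, 0, 0, 8, 1, 0, 0]
obs = toy_runoff(rain, 0.3)    # synthetic "observations" with known k = 0.3
best = calibrate(rain, obs)
```

With elitism the best candidate is never lost, so the search converges toward the known value k = 0.3; a real calibration evaluates the same kind of fitness on 50 years of gauged discharge instead of a synthetic series.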

  1. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    of nodes with a shared connectivity pattern. Modelling the brain in great detail on a whole-brain scale is essential to fully understand the underlying organization of the brain and reveal the relations between structure and function, that allows sophisticated cognitive behaviour to emerge from ensembles...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...

  2. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  3. Classical scale invariance in the inert doublet model

    Energy Technology Data Exchange (ETDEWEB)

    Plascencia, Alexis D. [Institute for Particle Physics Phenomenology, Department of Physics,Durham University, Durham DH1 3LE (United Kingdom)

    2015-09-04

    The inert doublet model (IDM) is a minimal extension of the Standard Model (SM) that can account for the dark matter in the universe. Naturalness arguments motivate us to study whether the model can be embedded into a theory with dynamically generated scales. In this work we study a classically scale invariant version of the IDM with a minimal hidden sector, which has a U(1){sub CW} gauge symmetry and a complex scalar Φ. The mass scale is generated in the hidden sector via the Coleman-Weinberg (CW) mechanism and communicated to the two Higgs doublets via portal couplings. Since the CW scalar remains light, acquires a vacuum expectation value and mixes with the SM Higgs boson, the phenomenology of this construction can be modified with respect to the traditional IDM. We analyze the impact of adding this CW scalar and the Z{sup ′} gauge boson on the calculation of the dark matter relic density and on the spin-independent nucleon cross section for direct detection experiments. Finally, by studying the RG equations we find regions in parameter space which remain valid all the way up to the Planck scale.

  4. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only the classic Boolean visibility, that is usually determined within GIS, but also on so called extended viewsheds that aims to provide more information about visibility. The case study with examples of visibility analyses was performed on river Opava, near the Ostrava city (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing angle difference above the local horizon, which describes angular height of the target area above the barrier, is shown. The case study proved that large scale models are appropriate data source for visibility analyses on local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
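The Boolean viewshed at the core of these analyses is a running-horizon test along sight lines. A one-dimensional profile version — a simplification of the 2-D raster case — makes the principle concrete; the observer height and elevation values below are invented for illustration:

```python
def viewshed_profile(elev, obs_index, obs_height=1.7):
    """Boolean visibility along a terrain profile seen from one observer cell.

    A target cell is visible when its elevation angle from the observer's eye
    exceeds the running horizon (the steepest angle of the cells in between)."""
    eye = elev[obs_index] + obs_height
    visible = [False] * len(elev)
    visible[obs_index] = True
    for direction in (1, -1):
        horizon = float("-inf")
        i = obs_index + direction
        while 0 <= i < len(elev):
            angle = (elev[i] - eye) / abs(i - obs_index)  # tangent of elevation angle
            if angle > horizon:
                visible[i] = True
                horizon = angle
            i += direction
    return visible

# A 5 m knoll at cell 1 hides everything behind it, including the 10 m hill.
print(viewshed_profile([0, 5, 0, 0, 10, 0], 0))
```

The extended viewsheds mentioned in the abstract keep the quantity `angle - horizon` instead of the Boolean, which gives the angular height of a target above the blocking barrier.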

  5. Deconfined Quantum Criticality, Scaling Violations, and Classical Loop Models

    Directory of Open Access Journals (Sweden)

    Adam Nahum

    2015-12-01

    Full Text Available Numerical studies of the transition between Néel and valence bond solid phases in two-dimensional quantum antiferromagnets give strong evidence for the remarkable scenario of deconfined criticality, but display strong violations of finite-size scaling that are not yet understood. We show how to realize the universal physics of the Néel–valence-bond-solid (VBS) transition in a three-dimensional classical loop model (this model includes the subtle interference effect that suppresses hedgehog defects in the Néel order parameter). We use the loop model for simulations of unprecedentedly large systems (up to linear size L=512). Our results are compatible with a continuous transition at which both Néel and VBS order parameters are critical, and we do not see conventional signs of first-order behavior. However, we show that the scaling violations are stronger than previously realized and are incompatible with conventional finite-size scaling, even if allowance is made for a weakly or marginally irrelevant scaling variable. In particular, different approaches to determining the anomalous dimensions η_{VBS} and η_{Néel} yield very different results. The assumption of conventional finite-size scaling leads to estimates that drift to negative values at large sizes, in violation of the unitarity bounds. In contrast, the decay with distance of critical correlators on scales much smaller than system size is consistent with large positive anomalous dimensions. Barring an unexpected reversal in behavior at still larger sizes, this implies that the transition, if continuous, must show unconventional finite-size scaling, for example, from an additional dangerously irrelevant scaling variable. Another possibility is an anomalously weak first-order transition. By analyzing the renormalization group flows for the noncompact CP^{n-1} field theory (the n-component Abelian Higgs model) between two and four dimensions, we give the simplest scenario by which an

  6. Modelling galaxy merger time-scales and tidal destruction

    Science.gov (United States)

    Simha, Vimal; Cole, Shaun

    2017-12-01

    We present a model for the dynamical evolution of subhaloes based on an approach combining numerical and analytical methods. Our method is based on tracking subhaloes in an N-body simulation up to the latest epoch that it can be resolved, and applying an analytic prescription for its merger time-scale that takes dynamical friction and tidal disruption into account. When applied to cosmological N-body simulations with mass resolutions that differ by two orders of magnitude, the technique produces halo occupation distributions that agree to within 3 per cent. This model has now been implemented in the GALFORM semi-analytic model of galaxy formation.
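The analytic merger time-scale prescription is, at heart, a dynamical-friction estimate. A minimal Chandrasekhar-style sketch captures the leading mass-ratio scaling; published prescriptions of the kind used in GALFORM multiply this by orbit-dependent (energy and circularity) factors, omitted here:

```python
import math

def dynamical_friction_time(t_dyn, host_mass, sat_mass):
    """Order-of-magnitude merger time from Chandrasekhar dynamical friction,
    t_merge ~ t_dyn * (M_host / M_sat) / ln(1 + M_host / M_sat).

    t_dyn is the halo dynamical time; the Coulomb logarithm is approximated
    by ln(1 + mass ratio), a common convention."""
    ratio = host_mass / sat_mass
    return t_dyn * ratio / math.log(1.0 + ratio)
```

The key qualitative behavior — light satellites survive many dynamical times while near-equal-mass pairs merge quickly — follows directly from the mass-ratio dependence.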

  7. Toward multi-scale computational modeling in developmental disability research.

    Science.gov (United States)

    Dammann, O; Follett, P

    2011-06-01

    The field of theoretical neuroscience is gaining increasing recognition. Virtually all areas of neuroscience offer potential linkage points for computational work. In developmental neuroscience, main areas of research are neural development and connectivity, and connectionist modeling of cognitive development. In this paper, we suggest that computational models can be helpful tools for understanding the pathogenesis and consequences of perinatal brain damage and subsequent developmental disability. In particular, designing multi-scale computational models should be considered by developmental neuroscientists interested in helping reduce the risk for developmental disabilities. Georg Thieme Verlag Stuttgart · New York.

  8. Atmospheric CO2 modeling at the regional scale: an intercomparison of 5 meso-scale atmospheric models

    Directory of Open Access Journals (Sweden)

    G. Pérez-Landa

    2007-12-01

    Full Text Available Atmospheric CO2 modeling, in interaction with the surface fluxes at the regional scale, is developed within the frame of the European project CarboEurope-IP and its Regional Experiment component. In this context, five meso-scale meteorological models at 2 km resolution participate in an intercomparison exercise. Using a common experimental protocol that imposes a large number of rules, two days of the CarboEurope Regional Experiment Strategy (CERES) campaign are simulated. A systematic evaluation of the models is done against the observations, using statistical tools and direct comparisons. Thus, temperature and relative humidity at 2 m, wind direction, surface energy and CO2 fluxes, vertical profiles of potential temperature, as well as in-situ CO2 concentrations are compared between observations and simulations. These comparisons reveal a cold bias in the simulated temperature at 2 m, and the latent heat flux is often underestimated. Nevertheless, the CO2 concentration heterogeneities are well captured by most of the models. This intercomparison exercise also shows the models' ability to represent the meteorology and carbon cycling at the synoptic and regional scale in the boundary layer, but also points out some of the major shortcomings of the models.
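The systematic evaluation described — detecting, for instance, a cold bias at 2 m — rests on simple summary statistics computed station by station. A minimal sketch with invented numbers:

```python
import math

def bias(sim, obs):
    """Mean error; a negative value indicates a cold bias."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def rmse(sim, obs):
    """Root-mean-square error."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

sim = [14.1, 15.0, 16.2, 17.8]   # simulated 2 m temperature (degC), invented
obs = [15.0, 15.8, 17.1, 18.5]   # observed values at the same station, invented
print(bias(sim, obs), rmse(sim, obs))
```

A consistently negative bias across stations and models is exactly the kind of systematic signal (here, a cold bias) that a direct comparison of raw time series can obscure.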

  9. Increasing process integrity in global scale water balance models

    Science.gov (United States)

    Plöger, Lisa; Mewes, Benjamin; Oppel, Henning; Schumann, Andreas

    2017-04-01

    Hydrological models on a global or continental scale are often used to model human impact on the water balance in data-scarce regions. Therefore, they are not validated against time series of runoff measured at gauges but against long-term estimates. The simplistic GlobWat model was introduced by the FAO to predict irrigation water demand based on open source data for continental catchments. Originally, the model was not designed to process time series, but to estimate water demand from long-term averages of precipitation and evapotranspiration. The emphasis of GlobWat was therefore focused on crop evapotranspiration and water availability in agricultural regions. In our study we wanted to enhance the modelling detail for forest evapotranspiration on the one hand and for time-series simulation on the other hand. Meanwhile, we tried to keep the amount of input data as small as possible, or at least limit it to open source data. Our objectives derive from case studies in the forest-dominated catchments of the Danube and Mississippi. With the Penman-Monteith equation as the fundamental equation within the original GlobWat model, evapotranspiration losses in these regions could not be simulated adequately. As a consequence, the water availability of downstream regions dominated by agriculture might be overestimated and hence the estimation of irrigation demands biased. Therefore, we implemented a Shuttleworth & Calder as well as a Priestley-Taylor approach for the evapotranspiration calculation of forested areas. Both models are compared and evaluated based on monthly time-series validation of the model with runoff series provided by the GRDC (Global Runoff Data Center). For an additional extension of the model we added a simple one-parameter snow routine. In our presentation we compare the different stages of modelling to demonstrate the options to extend and validate these models with observed data at an appropriate scale.
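The Priestley-Taylor alternative tested for forested areas replaces the aerodynamic term of Penman-Monteith with a fixed coefficient on the equilibrium (radiation-driven) evaporation. A sketch of the standard formula; the sample input values are illustrative, not taken from the study:

```python
def priestley_taylor(rn, g, delta, gamma=0.066, alpha=1.26):
    """Priestley-Taylor potential evapotranspiration, in energy units (W/m2).

    rn: net radiation (W/m2), g: ground heat flux (W/m2),
    delta: slope of the saturation vapour pressure curve (kPa/degC),
    gamma: psychrometric constant (kPa/degC),
    alpha: Priestley-Taylor coefficient (~1.26 over wet surfaces).
    """
    return alpha * delta / (delta + gamma) * (rn - g)

# Midday example: delta is roughly 0.145 kPa/degC near 20 degC.
print(priestley_taylor(rn=150.0, g=15.0, delta=0.145))
```

Because it needs only radiation and temperature-dependent constants, this form suits global-scale models where the wind and humidity data required by Penman-Monteith are scarce — the input-data constraint the abstract emphasizes.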

  10. Comparing turbulent mixing of biogenic VOC across model scale

    Science.gov (United States)

    Li, Y.; Barth, M. C.; Steiner, A. L.

    2016-12-01

    Vertical mixing of biogenic volatile organic compounds (BVOC) in the planetary boundary layer (PBL) is very important in simulating the formation of ozone, secondary organic aerosols (SOA), and climate feedbacks. To assess the representation of vertical mixing in the atmosphere for the Baltimore-Washington DISCOVER-AQ 2011 campaign, we use two models of different scale and turbulence representation: (1) the National Center for Atmospheric Research's Large Eddy Simulation (LES) model, and (2) the Weather Research and Forecasting-Chemistry (WRF-Chem) model to simulate regional meteorology and chemistry. For WRF-Chem, we evaluate the boundary layer schemes in the model at convection-permitting scales (4 km). WRF-Chem simulated vertical profiles are compared with the results from the turbulence-resolving LES model under similar meteorological and chemical conditions. The influence of clouds on gas and aqueous species and the impact of cloud processing at both scales are evaluated. Temporal evolutions of a surface-to-cloud concentration ratio are calculated to assess how well WRF-Chem captures BVOC vertical mixing.

  11. Large-scale model of mammalian thalamocortical systems.

    Science.gov (United States)

    Izhikevich, Eugene M; Edelman, Gerald M

    2008-03-04

    The understanding of the structural and dynamic complexity of mammalian brains is greatly facilitated by computer simulations. We present here a detailed large-scale thalamocortical model based on experimental measures in several mammalian species. The model spans three anatomical scales. (i) It is based on global (white-matter) thalamocortical anatomy obtained by means of diffusion tensor imaging (DTI) of a human brain. (ii) It includes multiple thalamic nuclei and six-layered cortical microcircuitry based on in vitro labeling and three-dimensional reconstruction of single neurons of cat visual cortex. (iii) It has 22 basic types of neurons with appropriate laminar distribution of their branching dendritic trees. The model simulates one million multicompartmental spiking neurons calibrated to reproduce known types of responses recorded in vitro in rats. It has almost half a billion synapses with appropriate receptor kinetics, short-term plasticity, and long-term dendritic spike-timing-dependent synaptic plasticity (dendritic STDP). The model exhibits behavioral regimes of normal brain activity that were not explicitly built-in but emerged spontaneously as the result of interactions among anatomical and dynamic processes. We describe spontaneous activity, sensitivity to changes in individual neurons, emergence of waves and rhythms, and functional connectivity on different scales.
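The neuron unit in this simulation is Izhikevich's own two-variable spiking model, simple enough to reproduce directly. The parameters below are the standard regular-spiking set from the model's literature; the constant input current and simulation length are arbitrary choices:

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, t_ms=200.0, dt=0.5):
    """Count spikes of an Izhikevich neuron under constant input current I.

    v is the membrane potential (mV), u the recovery variable; the default
    a, b, c, d values give the regular-spiking cortical cell type."""
    v, u, spikes = c, b * c, 0
    for _ in range(int(t_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: reset membrane potential and recovery
            v = c
            u += d
            spikes += 1
    return spikes

print(izhikevich(10.0))        # spikes fired in 200 ms of constant drive
```

The model's appeal for million-neuron simulations is exactly this cheapness: two state variables per neuron reproduce a wide range of firing patterns just by changing a, b, c, d.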

  12. Bed form dynamics in distorted lightweight scale models

    Science.gov (United States)

    Aberle, Jochen; Henning, Martin; Ettmer, Bernd

    2016-04-01

    The adequate prediction of flow and sediment transport over bed forms presents a major obstacle for the solution of sedimentation problems in alluvial channels because bed forms affect hydraulic resistance, sediment transport, and channel morphodynamics. Moreover, bed forms can affect hydraulic habitat for biota, may introduce severe restrictions to navigation, and present a major problem for engineering structures such as water intakes and groynes. The main body of knowledge on the geometry and dynamics of bed forms such as dunes originates from laboratory and field investigations focusing on bed forms in sand bed rivers. Such investigations enable insight into the physics of the transport processes, but do not allow for the long term simulation of morphodynamic development as required to assess, for example, the effects of climate change on river morphology. On the other hand, this can be achieved through studies with distorted lightweight scale models allowing for the modification of the time scale. However, our understanding of how well bed form geometry and dynamics, and hence sediment transport mechanics, are reproduced in such models is limited. Within this contribution we explore this issue using data from investigations carried out at the Federal Waterways and Research Institute in Karlsruhe, Germany in a distorted lightweight scale model of the river Oder. The model had a vertical scale of 1:40 and a horizontal scale of 1:100, the bed material consisted of polystyrene particles, and the resulting dune geometry and dynamics were measured with a high spatial and temporal resolution using photogrammetric methods. Parameters describing both the directly measured and up-scaled dune geometry were determined using the random field approach. These parameters (e.g., standard deviation, skewness, kurtosis) will be compared to prototype observations as well as to results from the literature. 
Similarly, parameters describing the lightweight bed form dynamics, which

  13. Compare pilot-scale and industry-scale models of pulverized coal combustion in an ironmaking blast furnace

    Science.gov (United States)

    Shen, Yansong; Yu, Aibing; Zulli, Paul

    2013-07-01

    In order to understand the complex phenomena of the pulverized coal injection (PCI) process in the blast furnace (BF), mathematical models have been developed at different scales: a pilot-scale model of coal combustion and an industry-scale (in-furnace) model of coal/coke combustion in a real BF. This paper compares these PCI models in terms of model development and model capability. The model development is discussed in terms of model formulation, new features and the geometry/regions considered. The model capability is then discussed in terms of main findings, followed by an evaluation of the models' advantages and limitations. It is indicated that these PCI models are all able to describe PCI operation qualitatively. The in-furnace model is more reliable for simulating in-furnace phenomena of PCI operation, both qualitatively and quantitatively. These models are useful for understanding the flow-thermo-chemical behaviors and for optimizing PCI operation in practice.

  14. Multi-scale modeling of the CD8 immune response

    Science.gov (United States)

    Barbarroux, Loic; Michel, Philippe; Adimy, Mostafa; Crauste, Fabien

    2016-06-01

    During the primary CD8 T-Cell immune response to an intracellular pathogen, CD8 T-Cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to address the infection and memorize the corresponding antigen. After clearing the organism, the only CD8 T-Cells left are antigen-specific memory cells whose role is to respond stronger and faster in case the very same antigen is presented again. That is how vaccines work: a small quantity of a weakened pathogen is introduced in the organism to trigger the primary response, generating the corresponding memory cells in the process and giving the organism a way to defend itself in case it encounters the same pathogen again. To investigate this process, we propose a nonlinear, multi-scale mathematical model of the CD8 T-Cell immune response to vaccination using a maturity-structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a delay differential equation system, which gives the speed of maturation for each cell. The population of cells is modeled by a maturity-structured equation whose speeds are given by the intracellular model. We focus here on building the model, as well as on its asymptotic study. Finally, we display numerical simulations showing that the model can reproduce the biological dynamics of the cell population for both the primary and secondary responses.

  15. Multi-scale modeling of the CD8 immune response

    Energy Technology Data Exchange (ETDEWEB)

    Barbarroux, Loic, E-mail: loic.barbarroux@doctorant.ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Michel, Philippe, E-mail: philippe.michel@ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Adimy, Mostafa, E-mail: mostafa.adimy@inria.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France); Crauste, Fabien, E-mail: crauste@math.univ-lyon1.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France)

    2016-06-08

    During the primary CD8 T-Cell immune response to an intracellular pathogen, CD8 T-Cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to address the infection and memorize the corresponding antigen. After clearing the organism, the only CD8 T-Cells left are antigen-specific memory cells whose role is to respond stronger and faster in case the very same antigen is presented again. That is how vaccines work: a small quantity of a weakened pathogen is introduced in the organism to trigger the primary response, generating the corresponding memory cells in the process and giving the organism a way to defend itself in case it encounters the same pathogen again. To investigate this process, we propose a nonlinear, multi-scale mathematical model of the CD8 T-Cell immune response to vaccination using a maturity-structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a delay differential equation system, which gives the speed of maturation for each cell. The population of cells is modeled by a maturity-structured equation whose speeds are given by the intracellular model. We focus here on building the model, as well as on its asymptotic study. Finally, we display numerical simulations showing that the model can reproduce the biological dynamics of the cell population for both the primary and secondary responses.
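The intracellular layer described — protein expression governed by delay differential equations — can be miniaturized to a single delayed negative-feedback equation integrated by Euler's method. This generic example is not the authors' system; the rates, the feedback form, and the zero history are all assumptions made for illustration:

```python
def solve_dde(beta=5.0, delta=1.0, tau=2.0, t_end=20.0, dt=0.01):
    """Euler integration of a single delayed negative-feedback equation,
    p'(t) = beta / (1 + p(t - tau)) - delta * p(t),
    with zero history for t <= 0 (p = protein expression level)."""
    steps = int(t_end / dt)
    lag = int(tau / dt)
    p = [0.0] * (steps + 1)
    for k in range(steps):
        delayed = p[k - lag] if k >= lag else 0.0   # look up p(t - tau)
        p[k + 1] = p[k] + dt * (beta / (1.0 + delayed) - delta * p[k])
    return p

trajectory = solve_dde()
print(trajectory[-1])          # settles near the positive equilibrium
```

The delay means the integrator must carry the trajectory's history, not just the current state — the feature that distinguishes this layer from an ordinary ODE system and that the maturity-structured population equation then consumes as a maturation speed.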

  16. Multi-scale Modeling of the Evolution of a Large-Scale Nourishment

    Science.gov (United States)

    Luijendijk, A.; Hoonhout, B.

    2016-12-01

    Morphological predictions are often computed using a single morphological model, commonly forced with schematized boundary conditions representing the time scale of the prediction. Recent model developments now allow us to think and act differently. This study presents some recent developments in coastal morphological modeling focusing on flexible meshes, flexible coupling between models operating at different time scales, and a recently developed morphodynamic model for the intertidal and dry beach. This integrated modeling approach is applied to the Sand Engine mega-nourishment in The Netherlands to illustrate the added value of this integrated approach in both accuracy and computational efficiency. The state-of-the-art Delft3D Flexible Mesh (FM) model is applied at the study site under moderate wave conditions. One of the advantages is that the flexibility of the mesh structure allows a better representation of the water exchange with the lagoon and the corresponding morphological behavior than the curvilinear grid used in the previous version of Delft3D. The XBeach model is applied to compute the morphodynamic response to storm events in detail, incorporating the long-wave effects on bed level changes. The recently developed aeolian transport and bed change model AeoLiS is used to compute the bed changes in the intertidal and dry beach area. In order to enable flexible couplings between the three abovementioned models, a component-based environment has been developed using the BMI method. This allows a serial coupling of Delft3D FM and XBeach steered by a control module that uses a hydrodynamic time series as input (see figure). In addition, a parallel online coupling, with information exchange at each timestep, will be made with the AeoLiS model, which predicts the bed level changes in the intertidal and dry beach area. This study presents the first years of evolution of the Sand Engine computed with the integrated modelling approach. Detailed comparisons

  17. Modelling hydrological processes at different scales across Russian permafrost domain

    Science.gov (United States)

    Makarieva, Olga; Lebedeva, Lyudmila; Nesterova, Natalia; Vinogradova, Tatyana

    2017-04-01

    The project aims to study the interactions between permafrost and runoff generation processes across the Russian Arctic domain based on hydrological modelling. The uniqueness of the approach is a unified modelling framework which allows for coupled simulations of upper permafrost dynamics and streamflow generation at different scales (from soil column to large watersheds). The basis of the project is the hydrological model Hydrograph (Vinogradov et al. 2011, Semenova et al. 2013, 2015; Lebedeva et al., 2015). The model algorithms combine physically-based and conceptual approaches to the description of land hydrological cycle processes, which allows for maintaining a balance between the complexity of model design and the use of limited input information. The method for modeling heat dynamics in soil is integrated into the model. The main parameters of the model are the physical properties of landscapes that can be measured (observed) in nature and are classified according to the types of soil, vegetation and other characteristics. A set of parameters specified in the studied catchments (analog basins) can be transferred to ungauged basins with similar types of underlying surface without calibration. The results of modelling from small research watersheds to large, poorly gauged river basins in different climate and landscape settings of the Russian Arctic (within the Yenisey, Lena, Yana, Indigirka and Kolyma river basins) will be presented. Based on the experience gained, methodological aspects of hydrological modelling in permafrost environments will be discussed. The study is partially supported by the Russian Foundation for Basic Research, projects 16-35-50151 and 17-05-01138.

  18. Site-scale groundwater flow modelling of Aberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States)]; Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)]

    1998-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the
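
    The Monte Carlo step described above can be sketched in a few lines: sample a lognormal hydraulic conductivity for each realization, convert it to an advective velocity, and collect travel-time statistics. All parameter values below (path length, gradient, porosity, log-K moments) are illustrative assumptions, not HYDRASTAR inputs or SR 97 values.

```python
import random
import statistics

def travel_times(n_real, L=500.0, gradient=0.003, porosity=0.05,
                 logK_mean=-7.0, logK_sigma=1.0, seed=42):
    """Monte Carlo advective travel times along a path of length L [m].

    log10(K) is sampled from a normal distribution; all parameter
    values are illustrative, not taken from the study above.
    """
    rng = random.Random(seed)
    times = []
    for _ in range(n_real):
        K = 10.0 ** rng.gauss(logK_mean, logK_sigma)  # conductivity [m/s]
        v = K * gradient / porosity                   # advective velocity [m/s]
        times.append(L / v / 3.15576e7)               # seconds -> years
    return times

times = travel_times(2000)
print(f"median travel time: {statistics.median(times):.0f} years")
```

Reporting percentiles over many realizations, rather than a single deterministic value, is what lets the performance measures carry the variability of the conductivity field.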

  19. Experimental exploration of diffusion panel labyrinth in scale model

    Science.gov (United States)

    Vance, Mandi M.

    Small rehearsal and performance venues often lack the rich reverberation found in larger spaces. Higini Arau-Puchades has designed and implemented a system of diffusion panels in the Orchestra Rehearsal Room at the Great Theatre Liceu and the Tonhalle St. Gallen that lengthen the reverberation time. These panels defy traditional room acoustics theory which holds that adding material to a room will shorten the reverberation time. This work explores several versions of Arau-Puchades' panels and room characteristics in scale model. Reverberation times are taken from room impulse response measurements in order to better understand the unusual phenomenon. Scale modeling enables many tests but has limitations in its accuracy due to the higher frequency range involved. Further investigations are necessary to establish how the sound energy interacts with the diffusion panels and confirm their validity in a range of applications.
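
    Reverberation times from impulse responses, as used above, are conventionally estimated by Schroeder backward integration. The sketch below recovers RT60 from a synthetic exponentially decaying noise impulse response; the sample rate, target decay and fit limits (a T20 fit extrapolated to 60 dB) are illustrative choices, not the measurement settings of this study.

```python
import math
import random

def schroeder_rt60(h, fs, db_start=-5.0, db_end=-25.0):
    """Estimate RT60 from an impulse response h via Schroeder backward
    integration: fit the energy decay between db_start and db_end
    (a T20 fit) and extrapolate to -60 dB."""
    # Backward-integrated energy decay curve (EDC)
    edc, total = [], 0.0
    for x in reversed(h):
        total += x * x
        edc.append(total)
    edc.reverse()
    edc_db = [10.0 * math.log10(e / edc[0]) for e in edc]
    # Times at which the decay crosses the fit limits
    t1 = next(i for i, d in enumerate(edc_db) if d <= db_start) / fs
    t2 = next(i for i, d in enumerate(edc_db) if d <= db_end) / fs
    slope = (db_end - db_start) / (t2 - t1)   # dB per second
    return -60.0 / slope

# Synthetic decaying-noise impulse response with RT60 = 1.2 s
fs, rt_true = 8000, 1.2
rng = random.Random(0)
decay = math.log(10.0 ** (-60.0 / 20.0)) / rt_true   # amplitude rate [1/s]
h = [rng.gauss(0.0, 1.0) * math.exp(decay * n / fs) for n in range(2 * fs)]
print(f"estimated RT60 = {schroeder_rt60(h, fs):.2f} s")
```

In a 1:n scale model the same estimator applies, but measured frequencies map to full scale divided by n, which is why the higher frequency range limits accuracy.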

  20. A Rasch Model Analysis of the Mindful Attention Awareness Scale.

    Science.gov (United States)

    Goh, Hong Eng; Marais, Ida; Ireland, Michael James

    2017-04-01

    The Mindful Attention Awareness Scale was developed to measure individual differences in the tendency to be mindful. The current study examined the psychometric properties of the Mindful Attention Awareness Scale in a heterogeneous sample of 565 nonmeditators and 612 meditators using the polytomous Rasch model. The results showed that some items did not function the same way for these two groups. Overall, meditators had higher mean estimates than nonmeditators. The analysis identified a group of items as highly discriminating. Using a different model, Van Dam, Earleywine, and Borders in 2010 identified the same group of items as highly discriminating, and concluded that they were the items with the most information. Multiple pieces of evidence from the Rasch analysis showed that these items discriminate highly because of local dependence, hence do not supply independent information. We discussed how these different conclusions, based on similar findings, result from two very different paradigms in measurement.

  1. Next-generation genome-scale models for metabolic engineering

    DEFF Research Database (Denmark)

    King, Zachary A.; Lloyd, Colton J.; Feist, Adam M.

    2015-01-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict...... examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering....
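
    At its core, a COBRA prediction is a linear program: maximize an objective flux subject to steady-state stoichiometry and flux bounds. The toy network below (hypothetical reactions and bounds, solved by brute force instead of an LP solver) illustrates the idea at the smallest possible scale.

```python
def toy_fba(n_grid=600):
    """Toy flux balance analysis on a 3-reaction network:
        uptake:  -> A          (v_up <= 10)
        r1:      A -> Biomass  (v1 <= 6)
        r2:      A -> Byprod   (v2 <= 8)
    Steady state on A forces v_up = v1 + v2; maximize v1 + 0.5*v2.
    A genome-scale tool solves this as a linear program; a scan over
    the free flux v1 is enough here because the objective increases
    with v2, so v2 is always set to its largest feasible value.
    """
    best = (0.0, 0.0, 0.0)
    for i in range(n_grid + 1):
        v1 = 6.0 * i / n_grid
        v2 = min(8.0, 10.0 - v1)          # tightest remaining bound
        obj = v1 + 0.5 * v2
        if obj > best[0]:
            best = (obj, v1, v2)
    return best

obj, v1, v2 = toy_fba()
print(f"optimum {obj:.2f} at v1={v1:.2f}, v2={v2:.2f}")
```

Real genome-scale models pose the same problem with thousands of reactions, which is why dedicated LP solvers are used in practice.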

  2. Vegetable parenting practices scale. Item response modeling analyses.

    Science.gov (United States)

    Chen, Tzu-An; O'Connor, Teresia M; Hughes, Sheryl O; Beltran, Alicia; Baranowski, Janice; Diep, Cassandra; Baranowski, Tom

    2015-08-01

    To evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling, which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We also tested for differences in the way items function (called differential item functioning) across child's gender, ethnicity, age, and household income groups. Parents of 3-5 year old children completed a self-reported vegetable parenting practices scale online. Vegetable parenting practices consisted of 14 effective vegetable parenting practices and 12 ineffective vegetable parenting practices items, each with three subscales (responsiveness, structure, and control). Multidimensional polytomous item response modeling was conducted separately on effective vegetable parenting practices and ineffective vegetable parenting practices. One effective vegetable parenting practice item did not fit the model well in the full sample or across demographic groups, and another was a misfit in differential item functioning analyses across child's gender. Significant differential item functioning was detected across children's age and ethnicity groups, and more among effective vegetable parenting practices than ineffective vegetable parenting practices items. Wright maps showed items only covered parts of the latent trait distribution. The harder- and easier-to-respond ends of the construct were not covered by items for effective vegetable parenting practices and ineffective vegetable parenting practices, respectively. Several effective vegetable parenting practices and ineffective vegetable parenting practices scale items functioned differently on the basis of child's demographic characteristics; therefore, researchers should use these vegetable parenting practices scales with caution. Item response modeling should be incorporated in analyses of parenting practice questionnaires to better assess

  3. Modeling basin- and plume-scale processes of CO2 storage for full-scale deployment

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Q.; Birkholzer, J.T.; Mehnert, E.; Lin, Y.-F.; Zhang, K.

    2009-08-15

    Integrated modeling of basin- and plume-scale processes induced by full-scale deployment of CO{sub 2} storage was applied to the Mt. Simon Aquifer in the Illinois Basin. A three-dimensional mesh was generated with local refinement around 20 injection sites, with approximately 30 km spacing. A total annual injection rate of 100 Mt CO{sub 2} over 50 years was used. The CO{sub 2}-brine flow at the plume scale and the single-phase flow at the basin scale were simulated. Simulation results show the overall shape of a CO{sub 2} plume consisting of a typical gravity-override subplume in the bottom injection zone of high injectivity and a pyramid-shaped subplume in the overlying multilayered Mt. Simon, indicating the important role of a secondary seal with relatively low-permeability and high-entry capillary pressure. The secondary-seal effect is manifested by retarded upward CO{sub 2} migration as a result of multiple secondary seals, coupled with lateral preferential CO{sub 2} viscous fingering through high-permeability layers. The plume width varies from 9.0 to 13.5 km at 200 years, indicating the slow CO{sub 2} migration and no plume interference between storage sites. On the basin scale, pressure perturbations propagate quickly away from injection centers, interfere after less than 1 year, and eventually reach basin margins. The simulated pressure buildup of 35 bar in the injection area is not expected to affect caprock geomechanical integrity. Moderate pressure buildup is observed in Mt. Simon in northern Illinois. However, its impact on groundwater resources is less than the hydraulic drawdown induced by long-term extensive pumping from overlying freshwater aquifers.
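
    The basin-scale pressure interference described above can be caricatured with the classical Theis solution: single-phase buildup from each injection well, superposed linearly. All numbers below (transmissivity, storativity, rates, the 30 km grid spacing) are illustrative stand-ins, not Illinois Basin properties, and a single-phase analytical model of course ignores the two-phase plume physics.

```python
import math

def well_function(u, terms=60):
    """Theis well function W(u) = -gamma - ln(u) + sum (-1)^(n+1) u^n/(n*n!)."""
    if u >= 15.0:            # W(u) < 1e-7 here; the series also loses precision
        return 0.0
    s = -0.5772156649 - math.log(u)
    term = 1.0
    for n in range(1, terms):
        term *= -u / n       # now (-1)^n u^n / n!
        s -= term / n        # adds (-1)^(n+1) u^n / (n * n!)
    return s

def buildup(x, y, wells, Q, T, S, t):
    """Superposed head buildup [m] at (x, y) from several injection wells."""
    total = 0.0
    for xw, yw in wells:
        r2 = (x - xw) ** 2 + (y - yw) ** 2
        u = r2 * S / (4.0 * T * t)
        total += Q / (4.0 * math.pi * T) * well_function(u)
    return total

# 20 hypothetical wells on a 4 x 5 grid with 30 km spacing
wells = [(i * 30e3, j * 30e3) for i in range(4) for j in range(5)]
year = 3.15576e7
h50 = buildup(45e3, 60e3, wells, Q=0.1, T=1e-4, S=1e-4, t=50 * year)
print(f"superposed buildup after 50 years: {h50:.0f} m")
```

The superposition makes the abstract's observation concrete: individual pressure mounds interfere within the first year and keep growing long after the plumes themselves have stopped spreading.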

  4. Ecohydrologic Modeling of Hillslope Scale Processes in Dryland Ecosystems

    Science.gov (United States)

    Franz, T. E.; King, E. G.; Lester, A.; Caylor, K. K.; Nordbotten, J.; Celia, M. A.; Rodriguez-Iturbe, I.

    2008-12-01

    Dryland ecosystem processes are governed by complex interactions between the atmosphere, soil, and vegetation that are tightly coupled through the mass balance of water. At the scale of individual hillslopes, the mass balance of water is dominated by mechanisms of water redistribution which require spatially explicit representation. Fully-resolved physical models of surface and subsurface processes require numerical routines that are not trivial to solve for the spatial (hillslope) and temporal (many plant generations) scales of ecohydrologic interest. In order to reduce model complexity, we have used small-scale field data to derive empirical surface flux terms for representative patches (bare soil, grass, and tree) in a dryland ecosystem of central Kenya. The model is coupled spatially in the subsurface by an analytical solution to the Boussinesq equation for a sloping slab. The semi-analytical model is spatially explicit and driven by pulses of precipitation over a simulation period that represents many plant generations. By examining long-term model dynamics, we are able to investigate the principles of self-organization and optimization (maximization of plant water use and minimization of water lost to the system) of dryland ecosystems for various initial conditions and climatic variability. Precipitation records in central Kenya reveal a shift to more intense infrequent rain events with a constant annual total. The range of stable solutions of initial conditions and climatic variability are important to land management agencies for addressing current grazing practices and future policies. The model is a quantitative tool for addressing perturbations to the system and the overall sustainability of pastoralist activities in dryland ecosystems.

  5. Disaggregation, aggregation and spatial scaling in hydrological modelling

    Science.gov (United States)

    Becker, Alfred; Braun, Peter

    1999-04-01

    A typical feature of the land surface is its heterogeneity in terms of the spatial variability of land surface characteristics and parameters controlling physical/hydrological, biological, and other related processes. Different forms and degrees of heterogeneity need to be taken into account in hydrological modelling. The first part of the article concerns the conditions under which a disaggregation of the land surface into subareas of uniform or "quasihomogeneous" behaviour (hydrotopes or hydrological response units - HRUs) is indispensable. In a case study in northern Germany, it is shown that forests in contrast to arable land, areas with shallow groundwater in contrast to those with deep, water surfaces and sealed areas should generally be distinguished (disaggregated) in modelling, whereas internal heterogeneities within these hydrotopes can be assessed statistically, e.g., by areal distribution functions (soil water holding capacity, hydraulic conductivity, etc.). Models with hydrotope-specific parameters can be applied to calculate the "vertical" processes (fluxes, storages, etc.), and this, moreover, for hydrotopes of different area, and even for groups of distributed hydrotopes in a reference area (hydrotope classes), provided that the meteorological conditions are similar. Thus, a scaling problem does not really exist in this process domain. The primary domain for the application of scaling laws is that of lateral flows in landscapes and river basins. This is illustrated in the second part of the article, where results of a case study in Bavaria/Germany are presented and discussed. It is shown that scaling laws can be applied efficiently for the determination of the Instantaneous Unit Hydrograph (IUH) of the surface runoff system in river basins: simple scaling for basins larger than 43 km², and multiple scaling for smaller basins. Surprisingly, only two parameters were identified as important in the derived relations: the drainage area and, in some

  6. Regional scale hydrology with a new land surface processes model

    Science.gov (United States)

    Laymon, Charles; Crosson, William

    1995-01-01

    Through the CaPE Hydrometeorology Project, we have developed an understanding of some of the unique data quality issues involved in assimilating data of disparate types for regional-scale hydrologic modeling within a GIS framework. Among others, the issues addressed here include the development of adequate validation of the surface water budget, implementation of the STATSGO soil data set, and implementation of a remote sensing-derived landcover data set to account for surface heterogeneity. A model of land surface processes has been developed and used in studies of the sensitivity of surface fluxes and runoff to soil and landcover characterization. Results of these experiments have raised many questions about how to treat the scale-dependence of land surface-atmosphere interactions on spatial and temporal variability. In light of these questions, additional modifications are being considered for the Marshall Land Surface Processes Model. It is anticipated that these techniques can be tested and applied in conjunction with GCIP activities over regional scales.

  7. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  8. Modeling Biology Spanning Different Scales: An Open Challenge

    Directory of Open Access Journals (Sweden)

    Filippo Castiglione

    2014-01-01

    It is becoming increasingly clear that, in order to obtain a unified description of the different mechanisms governing the behavior and causality relations among the various parts of a living system, the development of comprehensive computational and mathematical models at different space and time scales is required. This is one of the most formidable challenges of modern biology, characterized by the availability of huge amounts of high-throughput measurements. In this paper we draw attention to the importance of multiscale modeling in the framework of studies of biological systems in general, and of the immune system in particular.

  9. Model Predictive Control for a Small Scale Unmanned Helicopter

    Directory of Open Access Journals (Sweden)

    Jianfu Du

    2008-11-01

    Kinematical and dynamical equations of a small-scale unmanned helicopter are presented in the paper. Based on these equations, a model predictive control (MPC) method is proposed for controlling the helicopter. This novel method allows direct accounting for the existing time delays, which are used to model the dynamics of the actuators and the aerodynamics of the main rotor. The limits of the actuators are also taken into consideration during the controller design. The proposed control algorithm was verified in real flight experiments, where good performance was shown in position control mode.
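
    The essential idea, delay-aware receding-horizon control, can be sketched for a scalar toy plant: keep the inputs already sent (but not yet acting) in a buffer, roll them into the prediction, and optimize only over future inputs within actuator limits. The plant parameters, horizon and grid-search optimizer below are illustrative simplifications; the paper's controller works on the full helicopter model with a proper constrained optimization.

```python
def simulate_mpc(a=0.95, b=0.2, delay=3, horizon=15, setpoint=1.0,
                 u_min=-1.0, u_max=1.0, steps=60):
    """Receding-horizon control of x[k+1] = a*x[k] + b*u[k-delay].

    The actuator delay is handled by buffering the last `delay` inputs
    and simulating them out before each candidate input takes effect.
    Toy version: the candidate future input is held constant and found
    by grid search; a real MPC solves a constrained QP instead.
    """
    candidates = [u_min + i * (u_max - u_min) / 40 for i in range(41)]
    x, pending = 0.0, [0.0] * delay   # inputs sent but not yet acting
    traj = []
    for _ in range(steps):
        best_u, best_cost = 0.0, float("inf")
        for u in candidates:
            xp, cost = x, 0.0
            # pending inputs act first, then the candidate input
            for uk in (pending + [u] * (horizon - delay)):
                xp = a * xp + b * uk
                cost += (xp - setpoint) ** 2 + 0.01 * uk ** 2
            if cost < best_cost:
                best_cost, best_u = cost, u
        x = a * x + b * pending[0]            # oldest input acts now
        pending = pending[1:] + [best_u]      # shift the delay line
        traj.append(x)
    return traj

traj = simulate_mpc()
print(f"final state: {traj[-1]:.3f} (setpoint 1.0)")
```

Ignoring the buffer (predicting as if the new input acted immediately) is what makes naive controllers oscillate under actuator delay; the buffered prediction removes that mismatch.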

  10. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    Science.gov (United States)

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456

  12. Radar altimetry assimilation in catchment-scale hydrological models

    Science.gov (United States)

    Bauer-Gottwein, P.; Michailovsky, C. I. B.

    2012-04-01

    Satellite-borne radar altimeters provide time series of river and lake levels with global coverage and moderate temporal resolution. Current missions can detect rivers down to a minimum width of about 100m, depending on local conditions around the virtual station. Water level time series from space-borne radar altimeters are an important source of information in ungauged or poorly gauged basins. However, many water resources management applications require information on river discharge. Water levels can be converted into river discharge by means of a rating curve, if sufficient and accurate information on channel geometry, slope and roughness is available. Alternatively, altimetric river levels can be assimilated into catchment-scale hydrological models. The updated models can subsequently be used to produce improved discharge estimates. In this study, a Muskingum routing model for a river network is updated using multiple radar altimetry time series. The routing model is forced with runoff produced by lumped-parameter rainfall-runoff models in each subcatchment. Runoff is uncertain because of errors in the precipitation forcing, structural errors in the rainfall-runoff model as well as uncertain rainfall-runoff model parameters. Altimetric measurements are translated into river reach storage based on river geometry. The Muskingum routing model is forced with a runoff ensemble and storages in the river reaches are updated using a Kalman filter approach. The approach is applied to the Zambezi and Brahmaputra river basins. Assimilation of radar altimetry significantly improves the capability of the models to simulate river discharge.
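
    A minimal version of the scheme, Muskingum routing for a single reach with a scalar Kalman update whenever an altimetry-derived discharge is available, can be sketched as follows. The routing parameters, error variances and the conversion of water level to discharge through a hypothetical rating curve are illustrative; the study works with a full river network and ensemble forcing.

```python
def muskingum_coeffs(K, X, dt):
    """Coefficients for O_t = C0*I_t + C1*I_(t-1) + C2*O_(t-1)."""
    denom = 2.0 * K * (1.0 - X) + dt
    C0 = (dt - 2.0 * K * X) / denom
    C1 = (dt + 2.0 * K * X) / denom
    C2 = (2.0 * K * (1.0 - X) - dt) / denom
    return C0, C1, C2

def route_with_updates(inflow, obs, K=12.0, X=0.2, dt=6.0,
                       q_var=25.0, r_var=9.0):
    """Muskingum routing with scalar Kalman updates of the outflow.

    obs[t] is a discharge estimate derived from an altimetric level
    via a hypothetical rating curve, or None when there is no
    overpass; q_var and r_var are illustrative error variances.
    """
    C0, C1, C2 = muskingum_coeffs(K, X, dt)
    O, P = inflow[0], q_var
    outflow = [O]
    for t in range(1, len(inflow)):
        O = C0 * inflow[t] + C1 * inflow[t - 1] + C2 * O   # forecast
        P = C2 * C2 * P + q_var
        if obs[t] is not None:                             # update
            gain = P / (P + r_var)
            O += gain * (obs[t] - O)
            P *= (1.0 - gain)
        outflow.append(O)
    return outflow

inflow = [10.0] * 5 + [50.0, 90.0, 70.0, 40.0, 20.0] + [10.0] * 10
obs = [None] * len(inflow)
obs[8] = 80.0   # one altimetry overpass, converted to discharge
out = route_with_updates(inflow, obs)
print(f"peak outflow: {max(out):.1f} m3/s")
```

Between overpasses the forecast error variance P grows, so sparse altimetry still carries weight when it arrives, which is the core appeal for poorly gauged basins.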

  13. Using Scaling to Understand, Model and Predict Global Scale Anthropogenic and Natural Climate Change

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.

    2014-12-01

    The atmosphere is variable over twenty orders of magnitude in time (≈10^-3 to 10^17 s) and almost all of the variance is in the spectral "background", which we show can be divided into five scaling regimes: weather, macroweather, climate, macroclimate and megaclimate. We illustrate this with instrumental and paleo data. Based on the signs of the fluctuation exponent H, we argue that while the weather is "what you get" (H>0: fluctuations increasing with scale), it is macroweather (H<0: fluctuations decreasing with scale) that is "what you expect". The conventional framework, which treats the background as white noise and focuses on quasi-periodic variability, assumes a spectrum that is in error by a factor of a quadrillion (≈10^15). Using this scaling framework, we can quantify the natural variability, distinguish it from anthropogenic variability, test various statistical hypotheses and make stochastic climate forecasts. For example, we estimate the probability that the warming is simply a giant century-long natural fluctuation is less than 1%, most likely less than 0.1%, and estimate return periods for natural warming events of different strengths and durations, including the slowdown ("pause") in the warming since 1998. The return period for the pause was found to be 20-50 years, i.e. not very unusual; however, it immediately follows a 6-year "pre-pause" warming event of almost the same magnitude with a similar return period (30-40 years). To improve on these unconditional estimates, we can use scaling models to exploit the long-range memory of the climate process to make accurate stochastic forecasts of the climate, including the pause. We illustrate stochastic forecasts on monthly and annual scale series of global and northern hemisphere surface temperatures. We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow: for example, using hindcasts we find that at 10-year forecast horizons we can still explain ≈15% of the anomaly variance. These scaling hindcasts have comparable or smaller RMS errors than existing GCMs. We discuss how these
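
    The fluctuation exponent H in this framework can be estimated from data with a first-order structure function (Lovejoy's analyses use Haar fluctuations; the simpler difference-based version below only behaves well for 0 < H < 1). As a sanity check, a Brownian random walk should give H ≈ 0.5, i.e. fluctuations growing with scale, as in the weather regime.

```python
import math
import random

def fluctuation_exponent(series, lags):
    """Estimate H from the first-order structure function
    S(dt) = <|X(t+dt) - X(t)|> ~ dt^H  via log-log regression."""
    xs, ys = [], []
    for lag in lags:
        diffs = [abs(series[i + lag] - series[i])
                 for i in range(len(series) - lag)]
        xs.append(math.log(lag))
        ys.append(math.log(sum(diffs) / len(diffs)))
    # least-squares slope of log S against log dt
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Sanity check on a Brownian walk (expected H ~ 0.5)
rng = random.Random(1)
walk, x = [], 0.0
for _ in range(20000):
    x += rng.gauss(0.0, 1.0)
    walk.append(x)
print(f"H = {fluctuation_exponent(walk, [1, 2, 4, 8, 16, 32, 64]):.2f}")
```

Applied to temperature series, a sign change of H across averaging scales is what separates the weather regime from macroweather in this framework.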

  14. Reconstructing genome-scale metabolic models with merlin.

    Science.gov (United States)

    Dias, Oscar; Rocha, Miguel; Ferreira, Eugénio C; Rocha, Isabel

    2015-04-30

    The Metabolic Models Reconstruction Using Genome-Scale Information (merlin) tool is a user-friendly Java application that aids the reconstruction of genome-scale metabolic models for any organism that has its genome sequenced. It performs the major steps of the reconstruction process, including the functional genomic annotation of the whole genome and subsequent construction of the portfolio of reactions. Moreover, merlin includes tools for the identification and annotation of genes encoding transport proteins, generating the transport reactions for those carriers. It also performs the compartmentalisation of the model, predicting the organelle localisation of the proteins encoded in the genome and thus the localisation of the metabolites involved in the reactions promoted by such enzymes. The gene-proteins-reactions (GPR) associations are automatically generated and included in the model. Finally, merlin expedites the transition from genomic data to draft metabolic model reconstructions exported in the SBML standard format, allowing the user to have a preliminary view of the biochemical network, which can be manually curated within the environment provided by merlin. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Macro Scale Independently Homogenized Subcells for Modeling Braided Composites

    Science.gov (United States)

    Blinzler, Brina J.; Goldberg, Robert K.; Binienda, Wieslaw K.

    2012-01-01

    An analytical method has been developed to analyze the impact response of triaxially braided carbon fiber composites, including the penetration velocity and impact damage patterns. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. Currently, each shell element is considered to be a smeared homogeneous material. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. To determine the stiffness and strength properties required for the constitutive model, a top-down approach for determining the strength properties is merged with a bottom-up approach for determining the stiffness properties. The top-down portion uses global strengths obtained from macro-scale coupon level testing to characterize the material strengths for each subcell. The bottom-up portion uses micro-scale fiber and matrix stiffness properties to characterize the material stiffness for each subcell. Simulations of quasi-static coupon level tests for several representative composites are conducted along with impact simulations.
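
    The bottom-up half of the approach, obtaining ply-level stiffness from fiber and matrix properties, can be illustrated with the simplest micromechanics estimates. The Voigt/Reuss rules and the carbon/epoxy numbers below are textbook illustrations, not the study's material data or its actual micromechanics model.

```python
def longitudinal_modulus(Vf, Ef, Em):
    """Voigt rule of mixtures for a unidirectional ply (fiber direction)."""
    return Vf * Ef + (1.0 - Vf) * Em

def transverse_modulus(Vf, Ef, Em):
    """Reuss (inverse) rule of mixtures, a lower-bound estimate."""
    return 1.0 / (Vf / Ef + (1.0 - Vf) / Em)

# Illustrative carbon/epoxy values in GPa, 60% fiber volume fraction
E1 = longitudinal_modulus(0.6, 230.0, 3.5)
E2 = transverse_modulus(0.6, 230.0, 3.5)
print(f"E1 = {E1:.0f} GPa, E2 = {E2:.1f} GPa")
```

The strong E1/E2 anisotropy these estimates produce is exactly why each braid direction needs its own homogenized subcell rather than one isotropic smear.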

  16. Research of Model Scale Seawater Intrusion using Geoelectric Method

    Directory of Open Access Journals (Sweden)

    Supriyadi Supriyadi

    2011-08-01

    In-depth experience and knowledge are needed to analyze and predict seawater intrusion. We report here a physical model for monitoring seawater intrusion at laboratory scale. The model used in this research is a glass basin consisting of two parts: soil and seawater. The intrusion of seawater into the soil in the glass basin is modelled. The results of 2-D inversion using the software Res2DInv32 showed that seawater intrusion in the soil model can be detected by using the Schlumberger-configuration resistivity method. The watering process of freshwater into the soil caused the electric resistivity value to decrease. This phenomenon can be seen from the transition of the resistivity pseudosections before and after the watering process, using different cumulative volumes of freshwater in different soils. After being intruded by the seawater, the measured soil resistivity is 2.22 Ωm – 5.69 Ωm, which indicates that the soil had been intruded.
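
    The quantity mapped by the method above is the apparent resistivity, obtained from the measured voltage and current through the array's geometric factor. For a Schlumberger array with current-electrode half-spacing AB/2 and potential-electrode spacing MN, a common form is K = π[(AB/2)² − (MN/2)²]/MN; the electrode spacings below are illustrative, not the survey geometry of this study.

```python
import math

def schlumberger_k(ab_half, mn):
    """Geometric factor K [m] for a Schlumberger array."""
    return math.pi * (ab_half ** 2 - (mn / 2.0) ** 2) / mn

def apparent_resistivity(ab_half, mn, delta_v, current):
    """rho_a = K * dV / I  [ohm*m]."""
    return schlumberger_k(ab_half, mn) * delta_v / current

# Self-consistency check over a homogeneous half-space of 5 ohm*m:
# forward-compute the voltage at I = 1 A, then invert it back.
rho = 5.0
dv = rho * 1.0 / schlumberger_k(10.0, 1.0)
print(f"recovered rho_a = {apparent_resistivity(10.0, 1.0, dv, 1.0):.2f} ohm*m")
```

Over a layered (non-homogeneous) subsurface, rho_a varies with AB/2, and it is that sounding curve which inversion codes such as Res2DInv fit.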

  17. Current state of genome-scale modeling in filamentous fungi.

    Science.gov (United States)

    Brandl, Julian; Andersen, Mikael R

    2015-06-01

    The group of filamentous fungi contains important species used in industrial biotechnology for acid, antibiotics and enzyme production. Their unique lifestyle turns these organisms into a valuable genetic reservoir of new natural products and biomass degrading enzymes that has not been used to full capacity. One of the major bottlenecks in the development of new strains into viable industrial hosts is the alteration of the metabolism towards optimal production. Genome-scale models promise a reduction in the time needed for metabolic engineering by predicting the most potent targets in silico before testing them in vivo. The increasing availability of high quality models and molecular biological tools for manipulating filamentous fungi renders the model-guided engineering of these fungal factories possible with comprehensive metabolic networks. A typical fungal model contains on average 1138 unique metabolic reactions and 1050 ORFs, making them a vast knowledge-base of fungal metabolism. In the present review we focus on the current state as well as potential future applications of genome-scale models in filamentous fungi.

  18. Pore-scale modeling of wettability alteration during primary drainage

    Science.gov (United States)

    Kallel, W.; van Dijke, M. I. J.; Sorbie, K. S.; Wood, R.

    2017-03-01

    While carbonate reservoirs are recognized to be weakly-to-moderately oil-wet at the core scale, pore-scale wettability distributions remain poorly understood; in particular, the wetting state of micropores is unclear. We model the transfer of polar non-hydrocarbon compounds from the oil phase into the water phase, implementing a diffusion/adsorption model for these compounds that triggers a wettability alteration from initially water-wet to intermediate-wet conditions. This mechanism is incorporated in a quasi-static pore-network model to which we add a notional time-dependency of the quasi-static invasion percolation mechanism. The model qualitatively reproduces experimental observations in which an early, rapid wettability alteration involving these small polar species occurred during primary drainage. Interestingly, we could invoke clear differences in the primary drainage patterns by varying both the extent of wettability alteration and the balance between the processes of oil invasion and wetting change. Combined, these parameters dictate the initial water saturation for waterflooding. Indeed, under conditions where oil invasion is slow compared to a fast and relatively strong wetting change, the model results in significant non-zero water saturations. However, for relatively fast oil invasion or small wetting changes, the model allows higher oil saturations at fixed maximum capillary pressures, and invasion of micropores at moderate capillary pressures.

  19. Modeling and Simulation of a lab-scale Fluidised Bed

    Directory of Open Access Journals (Sweden)

    Britt Halvorsen

    2002-04-01

    Full Text Available The flow behaviour of a lab-scale fluidised bed with a central jet has been simulated. The study has been performed with an in-house computational fluid dynamics (CFD) model named FLOTRACS-MP-3D. The CFD model is based on a multi-fluid Eulerian description of the phases, where the kinetic theory of granular flow forms the basis for turbulence modelling of the solid phases. A two-dimensional Cartesian co-ordinate system is used to describe the geometry. This paper discusses whether bubble formation and bed height are influenced by the coefficient of restitution, the drag model and the number of solid phases. Measurements of the same fluidised bed are performed with a digital video camera. The computational results are compared with the experimental results, and the discrepancies are discussed.

  20. Censored rainfall modelling for estimation of fine-scale extremes

    Directory of Open Access Journals (Sweden)

    D. Cross

    2018-01-01

    Full Text Available Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett–Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have tended to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett–Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
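
    The rectangular-pulse idea can be sketched in a few lines: storms arrive as a Poisson process, each spawns cells for an exponential storm lifetime, and each cell deposits a rectangular pulse of exponential duration and intensity. This is a deliberately simplified sketch (parameter values are hypothetical; a real BLRP calibration fits the rates to observed rainfall moments):

```python
import random

def simulate_blrp(t_end, lam, beta, gamma, eta, mu_x, dt=5 / 60, seed=0):
    """Simplified Bartlett-Lewis rectangular-pulse rainfall simulator.

    Storms arrive as a Poisson process (rate lam, 1/h); each storm is active
    for an exponential duration (rate gamma) and spawns cells at rate beta;
    each cell is a rectangular pulse with exponential duration (rate eta) and
    exponentially distributed intensity of mean mu_x (mm/h). Returns rainfall
    depth (mm) accumulated in bins of width dt hours.
    """
    rng = random.Random(seed)
    series = [0.0] * int(t_end / dt)
    t = rng.expovariate(lam)
    while t < t_end:
        storm_dur = rng.expovariate(gamma)
        cell_starts = [t]                      # first cell at the storm origin
        s = t + rng.expovariate(beta)
        while s < t + storm_dur:
            cell_starts.append(s)
            s += rng.expovariate(beta)
        for cs in cell_starts:
            dur = rng.expovariate(eta)
            inten = rng.expovariate(1.0 / mu_x)
            i0 = int(cs / dt)
            i1 = min(len(series), int((cs + dur) / dt) + 1)
            for i in range(i0, i1):
                # overlap of the pulse [cs, cs+dur] with bin [i*dt, (i+1)*dt]
                ov = max(0.0, min(cs + dur, (i + 1) * dt) - max(cs, i * dt))
                series[i] += inten * ov        # pulses beyond t_end are truncated
        t += rng.expovariate(lam)
    return series
```

    Aggregating the same simulated pulses to 5, 15 and 60 min bins is then just a choice of dt.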

  1. Current state of genome-scale modeling in filamentous fungi

    DEFF Research Database (Denmark)

    Brandl, Julian; Andersen, Mikael Rørdam

    2015-01-01

    The group of filamentous fungi contains important species used in industrial biotechnology for acid, antibiotics and enzyme production. Their unique lifestyle turns these organisms into a valuable genetic reservoir of new natural products and biomass degrading enzymes that has not been used to full...... testing them in vivo. The increasing availability of high quality models and molecular biological tools for manipulating filamentous fungi renders the model-guided engineering of these fungal factories possible with comprehensive metabolic networks. A typical fungal model contains on average 1138 unique...... metabolic reactions and 1050 ORFs, making them a vast knowledge-base of fungal metabolism. In the present review we focus on the current state as well as potential future applications of genome-scale models in filamentous fungi....

  2. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is generally low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and the choice of model parameters, is key for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ±0.2°C, ±2% and ±3% respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7% in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields at the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields at the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input
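
    The error propagation quoted from Fodor & Kovacs can be illustrated by Monte Carlo sampling of the stated measurement errors through a yield response. The response function below is a hypothetical stand-in, not APSIM or LPJmL, and the baseline inputs are invented:

```python
import random
import statistics

def toy_yield(temp_c, radiation, precip):
    """Illustrative yield response (t/ha) -- a stand-in, not a real crop model."""
    return (0.01 * radiation
            * min(1.0, precip / 600.0)
            * max(0.0, 1.0 - abs(temp_c - 28.0) / 15.0))

def propagate(n=2000, seed=1):
    """Monte Carlo propagation of the measurement errors quoted above."""
    rng = random.Random(seed)
    yields = []
    for _ in range(n):
        t = 28.0 + rng.uniform(-0.2, 0.2)             # air temperature +/- 0.2 C
        r = 500.0 * (1.0 + rng.uniform(-0.02, 0.02))  # radiation +/- 2 %
        p = 550.0 * (1.0 + rng.uniform(-0.03, 0.03))  # precipitation +/- 3 %
        yields.append(toy_yield(t, r, p))
    mean = statistics.mean(yields)
    return mean, statistics.stdev(yields) / mean      # mean and coefficient of variation
```

    The same loop run through a real crop model is what turns input-error assumptions into yield-uncertainty estimates.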

  3. Scaling behavior of an airplane-boarding model.

    Science.gov (United States)

    Brics, Martins; Kaupužs, Jevgenijs; Mahnke, Reinhard

    2013-04-01

    An airplane-boarding model, introduced earlier by Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)], is studied with the aim of determining precisely its asymptotic power-law scaling behavior for a large number of passengers N. Based on Monte Carlo simulation data for very large system sizes up to N=2^16=65536, we have analyzed numerically the scaling behavior of the mean boarding time and other related quantities. In analogy with critical phenomena, we have used appropriate scaling Ansätze, which include the leading term as some power of N (e.g., ∝N^α for the mean boarding time t_b), as well as power-law corrections to scaling. Our results clearly show that α=1/2 holds with a very high numerical accuracy (α=0.5001±0.0001). This value deviates essentially from α≈0.69, obtained earlier by Frette and Hemmer from data within the range 2≤N≤16. Our results confirm the convergence of the effective exponent α_eff(N) to 1/2 at large N, as observed by Bernstein. Our analysis explains this effect. Namely, the effective exponent α_eff(N) varies from values of about 0.7 for small system sizes to the true asymptotic value 1/2 at N→∞, almost linearly in N^(-1/3) for large N. This means that the variation is caused by corrections to scaling, the leading correction-to-scaling exponent being θ≈1/3. We have also estimated other exponents: ν=1/2 for the mean number of passengers taking seats simultaneously in one time step, β=1 for the second moment of t_b, and γ≈1/3 for its variance.
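
    The corrections-to-scaling mechanism can be made concrete with a two-point effective exponent. Assuming a boarding time of the form t(N) = a·N^(1/2)·(1 + b·N^(-1/3)) with illustrative amplitudes (a and b below are not the fitted values from the paper):

```python
import math

def t_board(n, a=1.0, b=-0.5):
    """Mean boarding time: leading N^(1/2) term plus a correction-to-scaling
    term with exponent theta = 1/3 (illustrative amplitudes)."""
    return a * math.sqrt(n) * (1.0 + b * n ** (-1.0 / 3.0))

def alpha_eff(n):
    """Two-point effective exponent measured between system sizes N and 2N."""
    return math.log(t_board(2 * n) / t_board(n)) / math.log(2.0)
```

    With these parameters alpha_eff drifts from well above 1/2 at N=16 toward 1/2 at N=65536, mirroring how a small-N estimate like 0.69 can coexist with an asymptotic exponent of 1/2.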

  4. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    Science.gov (United States)

    Tartakovsky, G. D.; Tartakovsky, A. M.; Scheibe, T. D.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2013-09-01

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacteria Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport is governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reactions rates using empirically-derived rate formulations such as the Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model
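
    The lumped Monod-type formulation that the genome-scale results are compared against can be sketched in a few lines. Parameter values here are hypothetical, chosen only to show substrate depletion and the yield-based mass balance, not fitted to Geobacter data:

```python
def simulate_monod(mu_max=0.2, k_s=0.5, yield_coeff=0.3,
                   b0=0.01, s0=10.0, dt=0.01, t_end=50.0):
    """Explicit-Euler integration of Monod-type growth:
    dB/dt = mu_max * S/(K_s + S) * B,   dS/dt = -(1/Y) * dB/dt.
    Units are illustrative (e.g. mmol/L and hours)."""
    b, s = b0, s0
    for _ in range(int(t_end / dt)):
        # cap the step so substrate never goes negative
        growth = min(mu_max * s / (k_s + s) * b * dt, s * yield_coeff)
        b += growth
        s -= growth / yield_coeff
    return b, s
```

    Fitting such a model to genome-scale output, as done in the study, amounts to adjusting mu_max, K_s, the yield coefficient and the reaction stoichiometry.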

  5. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    Energy Technology Data Exchange (ETDEWEB)

    Tartakovsky, Guzel D.; Tartakovsky, Alexandre M.; Scheibe, Timothy D.; Fang, Yilin; Mahadevan, Radhakrishnan; Lovley, Derek R.

    2013-09-07

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacteria Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport is governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reactions rates using empirically-derived rate formulations such as the Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. 
We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under

  6. Chronic hyperglycemia affects bone metabolism in adult zebrafish scale model.

    Science.gov (United States)

    Carnovali, Marta; Luzi, Livio; Banfi, Giuseppe; Mariotti, Massimo

    2016-12-01

    Type II diabetes mellitus is a metabolic disease characterized by chronic hyperglycemia that induces other pathologies, including diabetic retinopathy and bone disease. The mechanisms implicated in bone alterations induced by type II diabetes mellitus have been debated for years and are not yet clear because other factors are involved that mask bone mineral density alterations. Despite this, it is well known that chronic hyperglycemia affects bone health, causing fragility, reduced mechanical strength and an increased propensity for fractures because of impaired bone matrix microstructure and aberrant bone cell function. Adult Danio rerio (zebrafish) represents a powerful model to study glucose and bone metabolism. The aim of this study was therefore to evaluate the bone effects of chronic hyperglycemia in a new type II diabetes mellitus zebrafish model created by glucose administration in the water. Fish blood glucose levels were monitored in time-course experiments, and basal glycemia was found to be increased. After 1 month of treatment, the morphology of the retinal blood vessels showed abnormalities resembling human diabetic retinopathy. Adult bone metabolism was evaluated in fish using the scales as a read-out system. The scales of glucose-treated fish did not deposit new mineralized matrix and showed bone resorption lacunae associated with intense osteoclast activity. In addition, hyperglycemic fish scales showed a significant decrease in alkaline phosphatase activity and an increase in tartrate-resistant acid phosphatase activity, in association with alterations in other bone-specific markers. These data indicate an imbalance in bone metabolism, which leads to the osteoporotic-like phenotype visualized through scale mineral matrix staining. The zebrafish model of hyperglycemic damage can contribute to elucidating in vivo the molecular mechanisms of the metabolic changes that influence bone tissue regulation in human diabetic patients.

  7. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
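
    The speed-versus-economy choice the authors describe can be quantified with a simple estimator of wall-clock time and cost under per-instance-hour billing. The prices and model counts below are hypothetical, not the paper's measured figures:

```python
import math

def cloud_build(n_models, minutes_per_model, n_instances, price_per_hour):
    """Wall-clock hours and total cost of building n_models in parallel
    across n_instances, assuming per-instance-hour billing (hypothetical
    prices, not actual Amazon EC2 rates)."""
    batches = math.ceil(n_models / n_instances)   # sequential rounds per instance
    hours = batches * minutes_per_model / 60.0
    cost = n_instances * math.ceil(hours) * price_per_hour
    return hours, cost

h1, c1 = cloud_build(100, 30, 1, 0.5)     # one instance: slow but cheap
h16, c16 = cloud_build(100, 30, 16, 0.5)  # sixteen instances: fast but dearer
```

    Rounding to whole billed hours is why the parallel run costs more in total instance-hours even though each model takes the same time to train.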

  8. Klobuchar-like Ionospheric Model for Different Scales Areas

    Directory of Open Access Journals (Sweden)

    LIU Chen

    2017-05-01

    Full Text Available Nowadays, Klobuchar is the most widely used ionospheric model in positioning based on single-frequency terminals, and various refined versions of it have been proposed in pursuit of ever higher positioning accuracy. The variation of nighttime TEC (total electron content) with local time and the variation of TEC with latitude have been analyzed using GIMs (global ionospheric maps). After summarizing the widely applied model refinement schemes, we propose a Klobuchar-like model for regions of different scales in this paper. The Klobuchar-like, 14-parameter Klobuchar and 8-parameter Klobuchar models were established for small, large and global regions from GIMs for different solar activity periods and seasons, respectively. Klobuchar-like models, with correction rates of 92.96%, 91.55% and 72.67% in the small, large and global regions respectively, have higher correction rates than the 14-parameter Klobuchar, 8-parameter Klobuchar and GPS Klobuchar models, verifying the effectiveness and practicability of the Klobuchar-like model.
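
    The core of every Klobuchar variant is the same shape: a constant night-time vertical delay plus a cosine day-time bump peaking at 14:00 local time. The sketch below shows only that shape; in the broadcast 8-parameter model the amplitude and period come from the alpha/beta polynomials evaluated at geomagnetic latitude (here they are supplied directly), and the GPS interface specification uses a truncated cosine series rather than math.cos:

```python
import math

def klobuchar_vertical_delay(t_local_s, amp_s, period_s):
    """Vertical ionospheric delay in seconds: 5 ns night-time level plus a
    cosine day-time bump centred on 14:00 local time (50400 s).

    amp_s and period_s stand in for the values the broadcast model derives
    from its alpha/beta coefficients at geomagnetic latitude."""
    x = 2.0 * math.pi * (t_local_s - 50400.0) / period_s
    if abs(x) < math.pi / 2.0:
        return 5.0e-9 + amp_s * math.cos(x)
    return 5.0e-9
```

    Multiplying the returned delay by the speed of light and an obliquity factor gives the slant range correction applied in single-frequency positioning.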

  9. Electron-scale reduced fluid models with gyroviscous effects

    Science.gov (United States)

    Passot, T.; Sulem, P. L.; Tassi, E.

    2017-08-01

    Reduced fluid models for collisionless plasmas including electron inertia and finite Larmor radius corrections are derived for scales ranging from the ion to the electron gyroradii. Based either on pressure balance or on the incompressibility of the electron fluid, they respectively capture kinetic Alfvén waves (KAWs) or whistler waves (WWs), and can provide suitable tools for reconnection and turbulence studies. Both isothermal regimes and Landau fluid closures permitting anisotropic pressure fluctuations are considered. For small values of the electron beta parameter β_e, a perturbative computation of the gyroviscous force valid at scales comparable to the electron inertial length is performed at order O(β_e), which requires second-order contributions in a scale expansion. Comparisons with kinetic theory are performed in the linear regime. The spectrum of transverse magnetic fluctuations for strong and weak turbulence energy cascades is also phenomenologically predicted for both types of waves. In the case of moderate ion to electron temperature ratio, a new regime of KAW turbulence at scales smaller than the electron inertial length is obtained, where the magnetic energy spectrum decays like k⊥^(-13/3), thus faster than the k⊥^(-11/3) spectrum of WW turbulence.

  10. The multi-scale aerosol-climate model PNNL-MMF: model description and evaluation

    Directory of Open Access Journals (Sweden)

    M. Wang

    2011-03-01

    Full Text Available Anthropogenic aerosol effects on climate produce one of the largest uncertainties in estimates of radiative forcing of past and future climate change. Much of this uncertainty arises from the multi-scale nature of the interactions between aerosols, clouds and large-scale dynamics, which are difficult to represent in conventional general circulation models (GCMs. In this study, we develop a multi-scale aerosol-climate model that treats aerosols and clouds across different scales, and evaluate the model performance, with a focus on aerosol treatment. This new model is an extension of a multi-scale modeling framework (MMF model that embeds a cloud-resolving model (CRM within each grid column of a GCM. In this extension, the effects of clouds on aerosols are treated by using an explicit-cloud parameterized-pollutant (ECPP approach that links aerosol and chemical processes on the large-scale grid with statistics of cloud properties and processes resolved by the CRM. A two-moment cloud microphysics scheme replaces the simple bulk microphysics scheme in the CRM, and a modal aerosol treatment is included in the GCM. With these extensions, this multi-scale aerosol-climate model allows the explicit simulation of aerosol and chemical processes in both stratiform and convective clouds on a global scale.

    Simulated aerosol budgets in this new model are in the ranges of other model studies. Simulated gas and aerosol concentrations are in reasonable agreement with observations (within a factor of 2 in most cases), although the model underestimates black carbon concentrations at the surface by a factor of 2–4. Simulated aerosol size distributions are in reasonable agreement with observations in the marine boundary layer and in the free troposphere, while the model underestimates the accumulation mode number concentrations near the surface, and overestimates the accumulation mode number concentrations in the middle and upper free troposphere by a factor

  11. Analysis and modeling of scale-invariance in plankton abundance

    CERN Document Server

    Pelletier, J D

    1996-01-01

    The power spectrum, $S$, of horizontal transects of plankton abundance is often observed to have a power-law dependence on wavenumber, $k$, with exponent close to $-2$: $S(k)\propto k^{-2}$ over a wide range of scales. I present power spectral analyses of aircraft lidar measurements of phytoplankton abundance from scales of 1 to 100 km. A power spectrum $S(k)\propto k^{-2}$ is obtained. As a model for this observation, I consider a stochastic growth equation where the rate of change of plankton abundance is determined by turbulent mixing, modeled as a diffusion process in two dimensions, and exponential growth with a stochastically variable net growth rate representing a fluctuating environment. The model predicts a lognormal distribution of abundance and a power spectrum of horizontal transects $S(k)\propto k^{-1.8}$, close to the observed spectrum. The model equation predicts that the power spectrum of variations in abundance in time at a point in space is $S(f)\propto f^{-1.5}$ (where $f$ is the frequency...
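
    The spectral-slope comparison at the heart of this record is a log-log periodogram fit. A sketch with numpy, applied to a synthetic transect built to have S(k) ∝ k^-2 (not lidar data):

```python
import numpy as np

def spectral_slope(signal, dx=1.0):
    """Least-squares slope of log S(k) versus log k for a 1-D transect."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    k = np.fft.rfftfreq(len(signal), d=dx)
    mask = k > 0                                  # drop the zero-frequency bin
    return np.polyfit(np.log(k[mask]), np.log(spec[mask]), 1)[0]

# Synthesize a transect with S(k) ~ k^-2: Fourier amplitudes k^-1, random phases.
rng = np.random.default_rng(0)
n = 4096
k = np.fft.rfftfreq(n)
coeff = np.zeros(len(k), dtype=complex)
coeff[1:] = k[1:] ** -1.0 * np.exp(2j * np.pi * rng.random(len(k) - 1))
transect = np.fft.irfft(coeff, n)
```

    Run on real abundance transects, the same fit is what distinguishes the observed −2 slope from the model's predicted −1.8.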

  12. Multi-scale modeling of carbon capture systems

    Energy Technology Data Exchange (ETDEWEB)

    Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

    The development and scale-up of cost-effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset, consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric analysis (TGA) data.

  13. European Continental Scale Hydrological Model, Limitations and Challenges

    Science.gov (United States)

    Rouholahnejad, E.; Abbaspour, K.

    2014-12-01

    The pressures on water resources due to increasing levels of societal demand, increasing conflicts of interest and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration, where water bodies are viewed in the context of the whole river system and managed as units within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at continental scale and the use of globally available data, streamgauge data quality and its misleading impact on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome long computation times. We used the Soil and Water Assessment Tool (SWAT) as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals for the period 1970-2006. The use of a large-scale, high-resolution water resources model enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation and provides an overall picture of the temporal and spatial distribution of water resources across the continent. 
The calibrated model and results provide information support to the European Water

  14. Systems metabolic engineering: Genome-scale models and beyond

    Science.gov (United States)

    Blazeck, John; Alper, Hal

    2010-01-01

    The advent of high throughput genome-scale bioinformatics has led to an exponential increase in available cellular system data. Systems metabolic engineering attempts to use data-driven approaches – based on the data collected with high throughput technologies – to identify gene targets and optimize phenotypical properties on a systems level. Current systems metabolic engineering tools are limited for predicting and defining complex phenotypes such as chemical tolerances and other global, multigenic traits. The most pragmatic systems-based tool for metabolic engineering to arise is the in silico genome-scale metabolic reconstruction. This tool has seen wide adoption for modeling cell growth and predicting beneficial gene knockouts, and we examine here how this approach can be expanded for novel organisms. This review will highlight advances of the systems metabolic engineering approach with a focus on de novo development and use of genome-scale metabolic reconstructions for metabolic engineering applications. We will then discuss the challenges and prospects for this emerging field to enable model-based metabolic engineering. Specifically, we argue that current state-of-the-art systems metabolic engineering techniques represent a viable first step for improving product yield that still must be followed by combinatorial techniques or random strain mutagenesis to achieve optimal cellular systems. PMID:20151446

  15. Systems metabolic engineering: genome-scale models and beyond.

    Science.gov (United States)

    Blazeck, John; Alper, Hal

    2010-07-01

    The advent of high throughput genome-scale bioinformatics has led to an exponential increase in available cellular system data. Systems metabolic engineering attempts to use data-driven approaches--based on the data collected with high throughput technologies--to identify gene targets and optimize phenotypical properties on a systems level. Current systems metabolic engineering tools are limited for predicting and defining complex phenotypes such as chemical tolerances and other global, multigenic traits. The most pragmatic systems-based tool for metabolic engineering to arise is the in silico genome-scale metabolic reconstruction. This tool has seen wide adoption for modeling cell growth and predicting beneficial gene knockouts, and we examine here how this approach can be expanded for novel organisms. This review will highlight advances of the systems metabolic engineering approach with a focus on de novo development and use of genome-scale metabolic reconstructions for metabolic engineering applications. We will then discuss the challenges and prospects for this emerging field to enable model-based metabolic engineering. Specifically, we argue that current state-of-the-art systems metabolic engineering techniques represent a viable first step for improving product yield that still must be followed by combinatorial techniques or random strain mutagenesis to achieve optimal cellular systems.

  16. From micro-scale 3D simulations to macro-scale model of periodic porous media

    Science.gov (United States)

    Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca

    2015-04-01

    In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by introducing the Stokes velocity (to account for gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite volume method for the macro-scale model. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing significant simplifications. Micro-scale numerical data are processed to compute the mean concentration (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, distinguishing between terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, which we try to describe with known micro-scale quantities. Traditionally, the study of colloidal transport introduces many simplifications, such as ultra-simplified geometries that usually account for a single collector. 
    Gradual removal of such hypotheses leads to a
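
    The averaging step itself is simple once the micro-scale fields are on a grid: a volume average of concentration over the module and an area average of the advective flux on the module's exit face. A minimal 2-D sketch (the fields below are hypothetical placeholders for CFD output, and the grid is assumed uniform with unit cell volume):

```python
import numpy as np

def module_averages(c, u):
    """Volume-averaged concentration and area-averaged outflow flux for one
    periodic module, given cell-centred concentration c and x-velocity u
    (rows: transverse direction, columns: flow direction)."""
    c_avg = c.mean()                         # volume average over the module
    flux_out = (c[:, -1] * u[:, -1]).mean()  # area average on the exit face
    return c_avg, flux_out
```

    Comparing such module-wise averages against the terms of the macroscopic equation is what exposes the unclosed contributions, such as dispersion.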

  17. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid has a much greater impact on the power system than a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. However, an effective WTG model must be established first. As the doubly-fed VSCF wind turbine has become the mainstream wind turbine type, this article first reviews the research progress on the doubly-fed VSCF wind turbine and then describes the detailed process of building its model. It then surveys common wind farm modeling methods and points out the problems encountered. The wide use of WAMS in the power system makes online parameter identification of the wind farm model, based on the output characteristics of the wind farm, possible; the article focuses on explaining this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  18. A Goddard Multi-Scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2010-01-01

    A multi-scale modeling system with unified physics has been developed at NASA Goddard Space Flight Center (GSFC). The system consists of an MMF (the NASA Goddard finite-volume GCM, fvGCM, coupled with the Goddard Cumulus Ensemble model, GCE, a CRM); the state-of-the-art Weather Research and Forecasting model (WRF); and the stand-alone GCE. These models can share the same microphysical schemes, radiation (including explicitly calculated cloud optical properties), and surface models, which have been developed, improved and tested for different environments. In this talk, I will present: (1) a brief review of the GCE model and its applications to the impact of aerosols on deep precipitation processes; (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (a comparison with traditional GCMs); and (3) a discussion of the Goddard WRF version (its developments and applications). We are also performing inline tracer calculations to understand the physical processes (i.e., the boundary layer and each quadrant in the boundary layer) related to the development and structure of hurricanes and mesoscale convective systems. In addition, high-resolution (spatial, 2 km; temporal, 1 minute) visualizations showing the model results will be presented.

  19. Relating the CMSSM and SUGRA models with GUT scale and Super-GUT scale Supersymmetry Breaking

    CERN Document Server

    Dudas, Emilian; Mustafayev, Azar; Olive, Keith A.

    2012-01-01

    While the constrained minimal supersymmetric standard model (CMSSM) with universal gaugino masses, $m_{1/2}$, scalar masses, $m_0$, and A-terms, $A_0$, defined at some high energy scale (usually taken to be the GUT scale) is motivated by general features of supergravity models, it does not carry all of the constraints imposed by minimal supergravity (mSUGRA). In particular, the CMSSM does not impose a relation between the trilinear and bilinear soft supersymmetry breaking terms, $B_0 = A_0 - m_0$, nor does it impose the relation between the soft scalar masses and the gravitino mass, $m_0 = m_{3/2}$. As a consequence, $\\tan \\beta$ is computed given values of the other CMSSM input parameters. By considering a Giudice-Masiero (GM) extension to mSUGRA, one can introduce new parameters to the K\\"ahler potential which are associated with the Higgs sector and recover many of the standard CMSSM predictions. However, depending on the value of $A_0$, one may have a gravitino or a neutralino dark matter candidate. We al...

  20. Light moduli in almost no-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, Wilfried; Moeller, Jan; Schmidt, Jonas

    2009-09-15

    We discuss the stabilization of the compact dimension for a class of five-dimensional orbifold supergravity models. Supersymmetry is broken by the superpotential on a boundary. Classically, the size L of the fifth dimension is undetermined, with or without supersymmetry breaking, and the effective potential is of no-scale type. The size L is fixed by quantum corrections to the Kähler potential, the Casimir energy and Fayet-Iliopoulos (FI) terms localized at the boundaries. For an FI scale of order M_GUT, as in heterotic string compactifications with anomalous U(1) symmetries, one obtains L ∝ 1/M_GUT. A small mass is predicted for the scalar fluctuation associated with the fifth dimension, m_ρ

  1. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh

    2002-04-01

    We discuss the basic concepts of density functional theory (DFT) as applied to materials modeling on the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT the central equation is a one-particle Schrödinger-like Kohn-Sham equation, classical DFT consists of Boltzmann-type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to the microscopic modeling of molecular interactions and of classical DFT to the mesoscopic modeling of soft condensed matter systems are highlighted.

  2. Next-generation genome-scale models for metabolic engineering.

    Science.gov (United States)

    King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O

    2015-12-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
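    The constraint-based idea behind COBRA methods can be illustrated with a toy flux-balance problem posed as a linear program; the three-reaction network and bounds below are invented for illustration and bear no relation to any genome-scale model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> out, the "product"
# flux to maximise). Rows of S are metabolites A and B; columns are reactions.
S = np.array([[1, -1,  0],   # metabolite A
              [0,  1, -1]])  # metabolite B
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake flux capped at 10

# maximise v3  <=>  minimise -v3, subject to the steady-state constraint S v = 0
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print(res.x)  # optimal flux distribution: the product flux hits the uptake cap
```

Genome-scale models apply exactly this structure with thousands of reactions, plus the richer constraints (expression, thermodynamics) that the next-generation methods described above add.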

  3. Transient Recharge Estimability Through Field-Scale Groundwater Model Calibration.

    Science.gov (United States)

    Knowling, Matthew J; Werner, Adrian D

    2017-11-01

    The estimation of recharge through groundwater model calibration is hampered by the nonuniqueness of recharge and aquifer parameter values. It has been shown recently that the estimability of spatially distributed recharge through calibration of steady-state models for practical situations (i.e., real-world, field-scale aquifer settings) is limited by the need for excessive amounts of hydraulic-parameter and groundwater-level data. However, the extent to which temporal recharge variability can be informed through transient model calibration, which involves larger water-level datasets but requires the additional consideration of storage parameters, is presently unknown for practical situations. In this study, time-varying recharge estimates, inferred through calibration of a field-scale highly parameterized groundwater model, are systematically investigated subject to changes in (1) the degree to which hydraulic parameters, including hydraulic conductivity (K) and specific yield (S_y), are constrained, (2) the number of water-level calibration targets, and (3) the temporal resolution (up to monthly time steps) at which recharge is estimated. The analysis involves the use of a synthetic reality (a reference model) based on a groundwater model of Uley South Basin, South Australia. Identifiability statistics are used to evaluate the ability of recharge and hydraulic parameters to be estimated uniquely. Results show that reasonable estimates of monthly recharge (in terms of recharge root-mean-squared error) require a considerable amount of transient water-level data and that the spatial distribution of K be known. Joint estimation of recharge, S_y and K, however, precludes reasonable inference of recharge and hydraulic parameter values. We conclude that the estimation of temporal recharge variability through calibration may be impractical for real-world settings. © 2017, National Ground Water Association.

  4. Evaluation of a distributed catchment scale water balance model

    Science.gov (United States)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil-driven and atmosphere-driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well; a linear relationship between a topographic index and the local water table depth is therefore a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone, and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
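    The conceptual model's use of a topographic index can be sketched as follows; the TOPMODEL-style linear relation and all parameter values here are illustrative assumptions, not quantities from the MACHYDRO 90 study.

```python
import math

def topographic_index(upslope_area_per_width, slope_rad):
    """Topographic (wetness) index ln(a / tan(beta))."""
    return math.log(upslope_area_per_width / math.tan(slope_rad))

def water_table_depth(ti, ti_mean, mean_depth, m=0.03):
    """Linear relation: higher-index (wetter) points have a shallower
    water table; m is an illustrative scaling parameter."""
    return mean_depth - m * (ti - ti_mean)

ti_valley = topographic_index(500.0, math.radians(2.0))  # convergent, gentle slope
ti_ridge = topographic_index(10.0, math.radians(15.0))   # divergent, steep slope
print(ti_valley > ti_ridge)  # True: the valley point is predicted wetter
```

Points with a high index saturate first, which is how the conceptual model generates saturation excess runoff without solving the Richards equation.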

  5. Device Scale Modeling of Solvent Absorption using MFIX-TFM

    Energy Technology Data Exchange (ETDEWEB)

    Carney, Janine E. [National Energy Technology Lab. (NETL), Albany, OR (United States); Finn, Justin R. [National Energy Technology Lab. (NETL), Albany, OR (United States); Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States)

    2016-10-01

    Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane), and fossil fuels account for a large majority of global CO2 emissions. That said, fossil fuels will continue to play a significant role in the generation of power for the foreseeable future. The extent to which CO2 is emitted needs to be reduced, and carbon capture and sequestration are also necessary actions to tackle climate change. Different approaches exist for CO2 capture, including post-combustion and pre-combustion technologies, oxy-fuel combustion and chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology. To apply CO2 capture technologies at commercial scale, the availability, maturity and potential for scalability of the technology need to be considered. Solvent absorption is a proven technology, but not at the scale needed by a typical power plant. The scale-up, scale-down and design of laboratory and commercial packed bed reactors depend heavily on specific knowledge of the two-phase pressure drop, liquid holdup, wetting efficiency and mass transfer efficiency as functions of the operating conditions. Simple scaling rules often fail to provide a proper design. Conventional reactor design approaches generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity within these approaches, none of these models resolve the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally, design would be aided by the development of predictive models based on a truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first

  6. Workshop on Human Activity at Scale in Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Melissa R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Coletti, Mark A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kennedy, Joseph H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nair, Sujithkumar S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Omitaomu, Olufemi A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-01-26

    Changing human activity within a geographical location may have significant influence on the global climate, but that activity must be parameterized in such a way as to allow these high-resolution sub-grid processes to affect global climate within the modeling framework. Additionally, we must have tools that provide decision support and inform local and regional policies regarding mitigation of and adaptation to climate change. The development of next-generation earth system models that can produce actionable results with minimal uncertainties depends on understanding the interactions between global climate change and human activity at policy-implementation scales. Unfortunately, at best we currently have only limited schemes for relating high-resolution sectoral emissions to real-time weather, which ultimately become part of larger regions and the well-mixed atmosphere. Moreover, even our understanding of meteorological processes at these scales is imperfect. This workshop addresses these shortcomings by providing a forum for discussion of what we know about these processes, what we can model, where we have gaps in these areas, and how we can rise to the challenge of filling these gaps.

  7. Modelling biological invasions: Individual to population scales at interfaces

    KAUST Repository

    Belmonte-Beitia, J.

    2013-10-01

    Extracting the population level behaviour of biological systems from that of the individual is critical in understanding dynamics across multiple scales and thus has been the subject of numerous investigations. Here, the influence of spatial heterogeneity in such contexts is explored for interfaces with a separation of the length scales characterising the individual and the interface, a situation that can arise in applications involving cellular modelling. As an illustrative example, we consider cell movement between white and grey matter in the brain which may be relevant in considering the invasive dynamics of glioma. We show that while one can safely neglect intrinsic noise, at least when considering glioma cell invasion, profound differences in population behaviours emerge in the presence of interfaces with only subtle alterations in the dynamics at the individual level. Transport driven by local cell sensing generates predictions of cell accumulations along interfaces where cell motility changes. This behaviour is not predicted with the commonly used Fickian diffusion transport model, but can be extracted from preliminary observations of specific cell lines in recent, novel, cryo-imaging. Consequently, these findings suggest a need to consider the impact of individual behaviour, spatial heterogeneity and especially interfaces in experimental and modelling frameworks of cellular dynamics, for instance in the characterisation of glioma cell motility. © 2013 Elsevier Ltd.
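    The contrast between local-sensing transport and Fickian diffusion at an interface can be illustrated with a minimal position-jump walk in which the jump rate depends on the motility at the walker's current site; the domain size, rates and interface location are invented for illustration and do not model glioma cells.

```python
import random

random.seed(1)
N, steps, walkers = 100, 2000, 300

def motility(i):
    """Jump probability per step; a low-motility region for i >= N // 2."""
    return 1.0 if i < N // 2 else 0.2

counts = [0] * N
for _ in range(walkers):
    x = N // 2                                     # start at the interface
    for _ in range(steps):
        if random.random() < motility(x):          # jump with the LOCAL rate
            x = (x + random.choice((-1, 1))) % N   # periodic 1D domain
    counts[x] += 1

slow = sum(counts[N // 2:])
fast = sum(counts[:N // 2])
print(slow > fast)  # True: walkers accumulate on the low-motility side
```

A single Fickian diffusion coefficient would predict a flat steady state; the local-sensing rule instead concentrates density where motility changes, echoing the interface accumulations discussed above.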

  8. A Dynamic Pore-Scale Model of Imbibition

    DEFF Research Database (Denmark)

    Mogensen, Kristian; Stenby, Erling Halfdan

    1998-01-01

    We present a dynamic pore-scale network model of imbibition, capable of calculating residual oil saturation for any given capillary number, viscosity ratio, contact angle and aspect ratio. Our goal is not to predict the outcome of core floods, but rather to perform a sensitivity analysis of the above-mentioned parameters, except the viscosity ratio. We find that contact angle, aspect ratio and capillary number all have a significant influence on the competition between piston-like advance, leading to high recovery, and snap-off, causing oil entrapment. Due to enormous CPU-time requirements we ... been entirely inhibited, in agreement with results obtained by Blunt using a quasi-static model. For higher aspect ratios, the effect of rate and contact angle is more pronounced. Many core floods are conducted at capillary numbers in the range 10⁻⁷ to 10⁻⁶. We believe that the excellent recoveries

  9. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide the financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing
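    The event-simulation structure described above can be sketched as a toy annual-loss Monte Carlo; the event rate and per-event loss distribution are invented placeholders, not calibrated flood-model parameters.

```python
import random
import statistics

random.seed(42)

def simulate_year(rate=0.8, mu=2.0, sigma=1.0):
    """One simulated year: a (binomially approximated) Poisson number of
    flood events, each with a heavy-tailed lognormal loss."""
    n_events = sum(1 for _ in range(100) if random.random() < rate / 100)
    return sum(random.lognormvariate(mu, sigma) for _ in range(n_events))

annual_losses = sorted(simulate_year() for _ in range(10_000))
aal = statistics.fmean(annual_losses)                        # average annual loss
loss_100yr = annual_losses[int(0.99 * len(annual_losses))]   # 1-in-100-year loss
print(round(aal, 2), round(loss_100yr, 2))
```

Sorting the simulated years yields the annual aggregate loss distribution from which exceedance probabilities, and hence pricing and risk-transfer metrics, are read off.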

  10. Upscaling hydraulic conductivity from measurement-scale to model-scale

    Science.gov (United States)

    Gunnink, Jan; Stafleu, Jan; Maljers, Densie; Schokker, Jeroen

    2013-04-01

    The Geological Survey of the Netherlands systematically produces both shallow (…) allowing the uncertainty of the model results to be calculated. One of the parameters subsequently assigned to the voxels in the GeoTOP model is hydraulic conductivity (both horizontal and vertical). Hydraulic conductivities are measured on samples taken from high-quality drillings, which are subjected to falling-head hydraulic conductivity tests. Samples are taken for all combinations of lithostratigraphy, facies and lithology present in the GeoTOP model. The volume of the samples is orders of magnitude smaller than the volume of a voxel in the GeoTOP model. In addition, the heterogeneity that occurs within a voxel is not accounted for in the GeoTOP model, since every voxel is assigned a single lithology deemed representative of the entire voxel. To account for both the difference in volume and the within-voxel heterogeneity, an upscaling procedure was developed to produce upscaled hydraulic conductivities for each GeoTOP voxel. A very fine 3D grid of 0.5 × 0.5 × 0.05 m is created that covers the GeoTOP voxel size (100 × 100 × 0.5 m) plus half of the dimensions of the GeoTOP voxel to counteract undesired edge effects. It is assumed that the scale of the samples is comparable to the voxel size of this fine grid. For each lithostratigraphy and facies combination, the spatial correlation structure (variogram) of the lithological classes is used to create 50 equiprobable distributions of lithology on the fine grid with sequential indicator simulation. Then, for each of the lithology realizations, a hydraulic conductivity is assigned to the simulated lithology class using sequential Gaussian simulation, again with the appropriate variogram. This results in 50 3D models of hydraulic conductivities on the fine grid.
For each of these hydraulic conductivity models, a hydraulic head difference of 1 m between the top and bottom of the model is used to calculate the flux at the bottom of the
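    The flux-based upscaling step can be sketched for the simplest case of fine-grid cells stacked in series under a unit head difference; the conductivity values below are illustrative, not GeoTOP data.

```python
import numpy as np

def upscaled_kv(k_cells, cell_height=0.05, head_diff=1.0):
    """Effective vertical K of cells in series (harmonic mean) and the
    resulting Darcy flux for the given head difference across the stack."""
    k_cells = np.asarray(k_cells, dtype=float)
    k_eff = len(k_cells) / np.sum(1.0 / k_cells)   # harmonic mean
    thickness = len(k_cells) * cell_height
    flux = k_eff * head_diff / thickness           # Darcy's law: q = K dh / L
    return k_eff, flux

k_eff, q = upscaled_kv([10.0, 0.01, 10.0])  # a thin clay layer dominates
print(k_eff)  # about 0.03: far below the arithmetic mean (about 6.7)
```

This is why within-voxel heterogeneity matters: one low-conductivity lamina controls the vertical flux, whereas layers side by side would combine via the (much larger) arithmetic mean.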

  11. Leptogenesis in GeV-scale seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Hernández, P.; Kekic, M. [Instituto de Física Corpuscular, Universidad de Valencia and CSIC,Edificio Institutos Investigación, Apt. 22085, Valencia, E-46071 (Spain); López-Pavón, J. [SISSA and INFN Sezione di Trieste,via Bonomea 265, Trieste, 34136 (Italy); Racker, J.; Rius, N. [Instituto de Física Corpuscular, Universidad de Valencia and CSIC,Edificio Institutos Investigación, Apt. 22085, Valencia, E-46071 (Spain)

    2015-10-09

    We revisit the production of leptonic asymmetries in minimal extensions of the Standard Model that can explain neutrino masses, involving extra singlets with Majorana masses in the GeV scale. We study the quantum kinetic equations both analytically, via a perturbative expansion up to third order in the mixing angles, and numerically. The analytical solution allows us to identify the relevant CP invariants, and simplifies the exploration of the parameter space. We find that sizeable lepton asymmetries are compatible with non-degenerate neutrino masses and measurable active-sterile mixings.

  12. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    DEFF Research Database (Denmark)

    Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.

    2016-01-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads...

  13. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    OpenAIRE

    Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.; Lichota, P.

    2016-01-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimen...

  14. Evaluation of two pollutant dispersion models over continental scales

    Science.gov (United States)

    Rodriguez, D.; Walker, H.; Klepikova, N.; Kostrikov, A.; Zhuk, Y.

    Two long-range, emergency response models—one based on the particle-in-cell method of pollutant representation (ADPIC/U.S.), the other based on the superposition of Gaussian puffs released periodically in time (EXPRESS/Russia)—are evaluated using perfluorocarbon tracer data from the Across North America Tracer Experiment (ANATEX). The purpose of the study is to assess our current capabilities for simulating continental-scale dispersion processes and to use these assessments as a means to improve our modeling tools. The criteria for judging model performance are based on protocols devised by the Environmental Protection Agency and on other complementary tests. Most of these measures require the formation and analysis of surface concentration footprints (the surface manifestations of tracer clouds, which are sampled over 24-h intervals), whose dimensions, center-of-mass coordinates and integral characteristics provide a basis for comparing observed and calculated concentration distributions. Generally speaking, the plumes associated with the 20 releases of perfluorocarbon (10 each from sources at Glasgow, MT and St. Cloud, MN) in January 1987 are poorly resolved by the sampling network when the source-to-receptor distances are less than about 1000 km. Within this undersampled region, both models chronically overpredict the sampler concentrations. Given this tendency, the computed areas of the surface footprints and their integral concentrations are likewise excessive. When the actual plumes spread out sufficiently for reasonable resolution, the observed (O) and calculated (C) footprint areas are usually within a factor of two of one another, thereby suggesting that the models possess some skill in the prediction of long-range diffusion. Deviations between the O and C plume trajectories, as measured by the distances of separation between the plume centroids, are on the order of 125 km d⁻¹ for both models.
It appears that the inability of the models to simulate large-scale
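    The factor-of-two comparison of observed and calculated footprint areas can be expressed as the familiar FAC2 score; the sample areas below are invented for illustration.

```python
def fac2(observed, calculated):
    """Fraction of (observed, calculated) pairs whose ratio lies in [0.5, 2.0]."""
    pairs = [(o, c) for o, c in zip(observed, calculated) if o > 0]
    within = sum(1 for o, c in pairs if 0.5 <= c / o <= 2.0)
    return within / len(pairs)

obs = [1200.0, 800.0, 450.0, 300.0]   # hypothetical footprint areas, km^2
calc = [1500.0, 500.0, 900.0, 610.0]
print(fac2(obs, calc))  # 0.75: three of the four pairs are within a factor of two
```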

  15. Analysis, scale modeling, and full-scale tests of low-level nuclear-waste-drum response to accident environments

    Energy Technology Data Exchange (ETDEWEB)

    Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.

    1983-01-01

    This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables.
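    The correlation between scale-model and full-scale results rests on replica scaling: with identical materials and impact velocities, deformations scale with the geometric factor s while masses and absorbed crush energies scale with s³. A minimal sketch of that bookkeeping, with invented numbers rather than measured values from the report:

```python
def full_scale_from_model(model_energy_j, model_deformation_m, s):
    """Scale measured replica-model quantities up to the full-scale prototype:
    energy scales as s**3, deformation as s (same material, same velocity)."""
    return model_energy_j * s**3, model_deformation_m * s

# hypothetical quarter-scale measurement: 250 J absorbed, 2 cm crush
energy, deformation = full_scale_from_model(250.0, 0.02, s=4)
print(energy, deformation)  # 16000.0 J and 0.08 m predicted at full scale
```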

  16. Diagnostics for stochastic genome-scale modeling via model slicing and debugging.

    Directory of Open Access Journals (Sweden)

    Kevin J Tsai

    Modeling of biological behavior has evolved from simple gene expression plots represented by mathematical equations to genome-scale systems biology networks. However, due to obstacles in complexity and scalability of creating genome-scale models, several biological modelers have turned to programming or scripting languages and away from modeling fundamentals. In doing so, they have traded the ability to have exchangeable, standardized model representation formats, while those that remain true to standardized model representation are faced with challenges in model complexity and analysis. We have developed a model diagnostic methodology inspired by program slicing and debugging and demonstrate the effectiveness of the methodology on a genome-scale metabolic network model published in the BioModels database. The computer-aided identification revealed specific points of interest such as reversibility of reactions, initialization of species amounts, and parameter estimation that improved a candidate cell's adenosine triphosphate production. We then compared the advantages of our methodology over other modeling techniques such as model checking and model reduction. A software application that implements the methodology is available at http://gel.ym.edu.tw/gcs/.

  17. Diagnostics for stochastic genome-scale modeling via model slicing and debugging.

    Science.gov (United States)

    Tsai, Kevin J; Chang, Chuan-Hsiung

    2014-01-01

    Modeling of biological behavior has evolved from simple gene expression plots represented by mathematical equations to genome-scale systems biology networks. However, due to obstacles in complexity and scalability of creating genome-scale models, several biological modelers have turned to programming or scripting languages and away from modeling fundamentals. In doing so, they have traded the ability to have exchangeable, standardized model representation formats, while those that remain true to standardized model representation are faced with challenges in model complexity and analysis. We have developed a model diagnostic methodology inspired by program slicing and debugging and demonstrate the effectiveness of the methodology on a genome-scale metabolic network model published in the BioModels database. The computer-aided identification revealed specific points of interest such as reversibility of reactions, initialization of species amounts, and parameter estimation that improved a candidate cell's adenosine triphosphate production. We then compared the advantages of our methodology over other modeling techniques such as model checking and model reduction. A software application that implements the methodology is available at http://gel.ym.edu.tw/gcs/.

  18. A simple landslide model at a laboratory scale

    Science.gov (United States)

    Atmajati, Elisabeth Dian; Yuliza, Elfi; Habil, Husni; Sadisun, Imam Ahmad; Munir, Muhammad Miftahul; Khairurrijal

    2017-07-01

    Landslides, among the most frequently occurring natural disasters, often cause very adverse effects. Landslide early warning systems installed in prone areas measure physical parameters closely related to landslides and give warning signals indicating that a landslide may occur. To determine the critical values of the measured physical parameters and to test the early warning system itself, a laboratory-scale model of a rotational landslide was developed. This model had a size of 250 × 45 × 40 cm and was equipped with soil moisture sensors, accelerometers, and an automated measurement system. The soil moisture sensors were used to determine the water content of the soil sample; the accelerometers were employed to detect movements in the x-, y-, and z-directions. Flow and rotational landslides were therefore expected to be modeled and characterized. The developed landslide model can be used to evaluate the effects of slope, soil type, and water seepage on the incidence of landslides. The present experiment showed that the model can reproduce the occurrence of landslides. The presence of water seepage made the slope crack, and as time went by the crack became bigger. After evaluating the obtained characteristics, the landslide that occurred was of the flow type; it occurred when the soil sample was saturated with water. The soil movements in the x-, y-, and z-directions were also observed. Further experiments should be performed to realize the rotational landslide.

  19. Modelling of vegetative filter strips in catchment scale erosion control

    Directory of Open Access Journals (Sweden)

    K. RANKINEN

    2008-12-01

    The efficiency of vegetative filter strips to reduce erosion was assessed by simulation modelling in two catchments located in different parts of Finland. The areas of high erosion risk were identified with a Geographical Information System (GIS) combining digital spatial data on soil type, land use and field slopes. The efficiency of vegetative filter strips (VFS) was assessed with the ICECREAM model, a derivative of the CREAMS model which has been modified and adapted to Finnish conditions. The simulation runs were performed without filter strips and with strips of 1 m, 3 m and 15 m width. Four soil types and two crops (spring barley, winter wheat) were studied. The model assessments for fields without VFS showed that the amount of erosion is clearly dominated by slope gradient; soil texture had a greater impact on erosion than the crop. The impact of the VFS on erosion reduction was highly variable. These model results were scaled up by combining them with the digital spatial data. The simulated efficiency of the VFS in erosion control over the whole catchment varied from 50 to 89%. A GIS-based erosion risk map of the other study catchment and an identification carried out by manual study using topographical paper maps were evaluated and validated by ground truthing. Both methods were able to identify major erosion risk areas, i.e. areas where VFS are particularly necessary. A combination of the GIS and field methods gives the best outcome.

  20. A multi-scale strength model with phase transformation

    Science.gov (United States)

    Barton, N.; Arsenlis, A.; Rhee, M.; Marian, J.; Bernier, J.; Tang, M.; Yang, L.

    2011-06-01

    We present a multi-scale strength model that includes phase transformation. In each phase, strength depends on pressure, strain rate, temperature, and evolving dislocation density descriptors. A donor cell type of approach is used for the transfer of dislocation density between phases. While the shear modulus can be modeled as smooth through the BCC to rhombohedral transformation in vanadium, the multi-phase strength model predicts abrupt changes in the material strength due to changes in dislocation kinetics. In the rhombohedral phase, the dislocation density is decomposed into populations associated with short and long Burgers vectors. Strength model construction employs an information passing paradigm to span from the atomistic level to the continuum level. Simulation methods in the overall hierarchy include density functional theory, molecular statics, molecular dynamics, dislocation dynamics, and continuum based approaches. We demonstrate the behavior of the model through simulations of Rayleigh Taylor instability growth experiments of the type used to assess material strength at high pressure and strain rate. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-464695).

  1. Simplified scaling model for the THETA-pinch

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, K. J.; Thomson, D. B.

    1982-02-01

    A simple 1D scaling model for the fast THETA-pinch was developed and implemented as a code that would be flexible, inexpensive in computer time, and readily available for use with the Los Alamos explosive-driven high-magnetic-field program. The simplified model uses three successive, separate stages: (1) a snowplow-like radial implosion, (2) an idealized resistive annihilation of the reverse bias field, and (3) an adiabatic compression stage of a BETA = 1 plasma for which ideal pressure balance is assumed to hold. The code uses one adjustable fitting constant whose value was first determined by comparison with results from the Los Alamos Scylla III, Scyllacita, and Scylla IA THETA-pinches.

  2. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  3. Reconstruction of groundwater depletion using a global scale groundwater model

    Science.gov (United States)

    de Graaf, Inge; van Beek, Rens; Sutanudjaja, Edwin; Wada, Yoshi; Bierkens, Marc

    2015-04-01

    Groundwater forms an integral part of the global hydrological cycle and is the world's largest accessible source of fresh water for satisfying human water needs. It buffers variable recharge rates over time, thereby effectively sustaining river flows in times of drought as well as evaporation in areas with shallow water tables. Moreover, although lateral groundwater flows are often slow, they cross topographic and administrative boundaries at appreciable rates. Despite the importance of groundwater, most global-scale hydrological models do not consider surface water-groundwater interactions or include a lateral groundwater flow component. The main reason for this omission is the lack of consistent global-scale hydrogeological information needed to arrive at a more realistic representation of the groundwater system, i.e. information on aquifer depths and the presence of confining layers. The latter holds vital information on the accessibility and quality of the global groundwater resource. In this study we developed a high-resolution (5 arc-minutes) global-scale transient groundwater model comprising confined and unconfined aquifers. The model is based on MODFLOW (McDonald and Harbaugh, 1988) and coupled with the land-surface model PCR GLOBWB (van Beek et al., 2011) via recharge and surface water levels. Aquifer properties were based on newly derived estimates of aquifer depths (de Graaf et al., 2014b) and of the thickness of confining layers from an integration of lithological and topographical information; they were further parameterized using available global datasets on lithology (Hartmann and Moosdorf, 2011) and permeability (Gleeson et al., 2014). In a sensitivity analysis the model was run with various hydrogeological parameter settings, under natural recharge only. Scenarios of past groundwater abstractions and corresponding recharge (Wada et al., 2012; de Graaf et al., 2014a) were evaluated. The resulting estimates of groundwater depletion are lower than
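
    A minimal sketch of the kind of head computation such models build on: a 1D steady-state groundwater solve with fixed-head boundaries and uniform recharge, iterated to convergence. The grid, conductivity, and recharge values below are illustrative assumptions and unrelated to the actual MODFLOW/PCR GLOBWB setup.

```python
# Minimal 1D steady-state groundwater sketch (not MODFLOW): solve the
# head h on a grid with fixed heads at both ends and uniform recharge,
# by iterating the discretised Darcy/continuity equation
#     K * d2h/dx2 + W = 0.
n, dx = 11, 100.0            # nodes, spacing [m]
K, W = 1e-4, 1e-8            # conductivity [m/s], recharge rate [m/s]
h = [10.0] * n
h[0], h[-1] = 10.0, 5.0      # fixed-head boundary conditions

for _ in range(20000):       # Gauss-Seidel sweeps until converged
    for i in range(1, n - 1):
        h[i] = 0.5 * (h[i - 1] + h[i + 1] + W * dx * dx / K)

# Midpoint head: linear boundary profile plus a recharge mound,
# matching the analytic h(x) = h0 + (hL-h0)x/L + W*x*(L-x)/(2K).
print(round(h[5], 2))        # → 20.0
```

Real global models add lateral flow in 2D, confined/unconfined layers, transient storage, and coupling to land-surface recharge, but the balance being solved is the same.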

  4. HD Hydrological modelling at catchment scale using rainfall radar observations

    Science.gov (United States)

    Ciampalini, Rossano; Follain, Stéphane; Raclot, Damien; Crabit, Armand; Pastor, Amandine; Augas, Julien; Moussa, Roger; Colin, François; Le Bissonnais, Yves

    2017-04-01

    Hydrological simulations at the catchment scale rely on the quality and availability of both soil and rainfall data. Soil data are relatively easy to collect, although their quality depends on the resources devoted to the task; rainfall observations require further effort because of their spatiotemporal variability. Rainfall is normally recorded with rain gauges located in the catchment, which provide detailed temporal data, but their representativeness is limited to the point where the data are collected. Combining several gauges across space gives a better representation of a rainfall event, yet spatialization often remains the main obstacle to obtaining data close to reality. For several years, radar observations have bridged this gap by providing continuous data registration which, when properly calibrated, offers adequate continuous coverage in space and time for medium to large catchments. Here, we use radar records for the south of France over the La Peyne catchment, following the protocol adopted by the national meteorological agency, with a resolution of 1 km in space and 5 min in time. We present a model able to perform continuous hydrological and soil erosion simulations from rainfall radar observations. The model is semi-theoretically based: it simulates water fluxes (infiltration-excess overland flow, saturation overland flow, infiltration and channel routing) with a kinematic wave using the St. Venant equation on a simplified "bucket" conceptual model for groundwater, and an empirical representation of sediment load as adopted in models such as STREAM-LANDSOIL (Cerdan et al., 2002; Ciampalini et al., 2012). The advantage of this approach is that it furnishes a dynamic representation and simulation of rainfall-runoff events more easily than using spatialized rainfall from meteorological stations, and offers a new look at the spatial component of the events.
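
    The "bucket" conceptual idea mentioned above can be sketched in a few lines: rain infiltrates into a finite store at a capped rate, and runoff is generated both when rainfall exceeds the infiltration rate and when the store spills. The storage capacity and infiltration rate below are illustrative assumptions, not the STREAM-LANDSOIL parameterization.

```python
def bucket_runoff(rain_mm, capacity_mm=50.0, infil_rate_mm=5.0):
    """Minimal 'bucket' rainfall-runoff sketch, one value per time step:
    rain infiltrates up to infil_rate_mm (infiltration-excess overland
    flow beyond that); the bucket spills once full (saturation-excess
    overland flow)."""
    store = 0.0
    runoff = []
    for r in rain_mm:
        infil = min(r, infil_rate_mm)          # what can enter the soil
        excess = r - infil                      # infiltration-excess flow
        store += infil
        spill = max(0.0, store - capacity_mm)   # saturation-excess flow
        store = min(store, capacity_mm)
        runoff.append(excess + spill)
    return runoff

# Three 5-minute steps of radar-derived rain depth [mm]:
print(bucket_runoff([2.0, 8.0, 8.0], capacity_mm=10.0))  # [0.0, 3.0, 5.0]
```

A full model distributes such stores over the radar grid and routes the resulting overland flow with the kinematic-wave approximation, but the per-cell water balance is of this form.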

  5. Health Literacy Scale and Causal Model of Childhood Overweight.

    Science.gov (United States)

    Intarakamhang, Ungsinun; Intarakamhang, Patrawut

    2017-01-28

    WHO focuses on developing health literacy (HL), referring to cognitive and social skills. Our objectives were to develop a scale for evaluating the HL level of overweight Thai children, and to develop a path model of health behavior (HB) for preventing obesity. This cross-sectional research used a mixed method. Overall, 2,000 school students aged 9 to 14 yr were recruited by stratified random sampling from all parts of Thailand in 2014. Data were analyzed by CFA in LISREL. Reliability of the HL and HB scales ranged from 0.62 to 0.82 and factor loadings ranged from 0.33 to 0.80; the subjects had a low level of HL (60.0%) and a fair level of HB (58.4%). The path model showed that HB could be influenced by HL through three paths. Path 1 started from health knowledge and understanding, which directly influenced eating behavior (effect size β was 0.13, Pliteracy, and making appropriate health-related decisions (β = 0.07, 0.98, and 0.05, respectively). Path 3 started from accessing information and services, which influenced communicating for added skills, media literacy, and making appropriate health-related decisions (β = 0.63, 0.93, 0.98, and 0.05). Finally, the basic level of HL, measured from health knowledge and understanding and from accessing information and services, influenced HB through the interactive and critical levels (β = 0.76, 0.97, and 0.55, respectively). The HL scale for overweight Thai children should be implemented as a screening tool for developing HL through public policy for health promotion.

  6. Multi-scale modelling for HEDP experiments on Orion

    Science.gov (United States)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high energy long-pulse lasers with high intensity short-pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge, the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high intensity lasers. We describe the approach taken at AWE; to integrate a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.

  7. A small-scale anatomical dosimetry model of the liver

    Science.gov (United States)

    Stenvall, Anna; Larsson, Erik; Strand, Sven-Erik; Jönsson, Bo-Anders

    2014-07-01

    Radionuclide therapy is a growing and promising approach for treating and prolonging the lives of patients with cancer. For therapies where high activities are administered, the liver can become a dose-limiting organ; often with a complex, non-uniform activity distribution and resulting non-uniform absorbed-dose distribution. This paper therefore presents a small-scale dosimetry model for various source-target combinations within the human liver microarchitecture. Using Monte Carlo simulations, Medical Internal Radiation Dose formalism-compatible specific absorbed fractions were calculated for monoenergetic electrons; photons; alpha particles; and 125I, 90Y, 211At, 99mTc, 111In, 177Lu, 131I and 18F. S values and the ratio of local absorbed dose to the whole-organ average absorbed dose was calculated, enabling a transformation of dosimetry calculations from macro- to microstructure level. For heterogeneous activity distributions, for example uptake in Kupffer cells of radionuclides emitting low-energy electrons (125I) or high-LET alpha particles (211At) the target absorbed dose for the part of the space of Disse, closest to the source, was more than eight- and five-fold the average absorbed dose to the liver, respectively. With the increasing interest in radionuclide therapy of the liver, the presented model is an applicable tool for small-scale liver dosimetry in order to study detailed dose-effect relationships in the liver.

  8. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time; in other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes: it may take months to generate one set of the targeted data, and because of its sheer size the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analysis of tera-scale data sets. We discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We also discuss some of the shortcomings of our implementation and how to address them.
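
    A minimal sketch of the underlying idea, using a single level of the (unnormalised) Haar transform: keeping only the pairwise averages halves the storage while still answering global-average queries exactly for an even-length series. This is a toy illustration of the principle, not the AQSIM implementation.

```python
def haar_1d(x):
    """One level of the (unnormalised) Haar transform: pairwise
    averages (approximation) and pairwise differences (detail)."""
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

series = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0]
approx, detail = haar_1d(series)

# Storing only the approximation halves the data volume; an
# average-style query can then be answered from it without the
# detail coefficients.
print(approx)                                   # [5.0, 11.0, 8.0, 1.0]
print(sum(series) / len(series) ==
      sum(approx) / len(approx))                # True
```

Repeating the transform on the approximation gives the multi-resolution hierarchy; discarding small detail coefficients trades accuracy for further compression, which is the approximate-query regime described above.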

  9. Fine Scale Projections of Indian Monsoonal Rainfall Using Statistical Models

    Science.gov (United States)

    Kulkarni, S.; Ghosh, S.; Rajendran, K.

    2012-12-01

    years of Indian precipitation pattern. The reason behind the failure of the bias-corrected model in projecting spatially non-uniform precipitation is the inability of GCMs to model finer-scale geophysical processes under changed conditions. The results highlight the need to revisit bias correction methods for future projections and to incorporate finer-scale processes.

  10. Application of computer-aided multi-scale modelling framework - Aerosol case study

    DEFF Research Database (Denmark)

    Heitzig, Martina; Gregson, Christopher; Sin, Gürkan

    2011-01-01

    A computer-aided modelling tool for efficient multi-scale modelling has been developed and is applied to solve a multi-scale modelling problem related to design and evaluation of fragrance aerosol products. The developed modelling scenario spans three length scales and describes how droplets...

  11. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  12. Extension of landscape-based population viability models to ecoregional scales for conservation planning

    Science.gov (United States)

    Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh

    2011-01-01

    Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...

  13. A hybrid pore-scale and continuum-scale model for solute diffusion, reaction, and biofilm development in porous media

    Science.gov (United States)

    Tang, Youneng; Valocchi, Albert J.; Werth, Charles J.

    2015-03-01

    It is a challenge to upscale solute transport in porous media for multispecies bio-kinetic reactions because of incomplete mixing within the elementary volume and because biofilm growth can change porosity and affect pore-scale flow and diffusion. To address this challenge, we present a hybrid model that couples pore-scale subdomains to continuum-scale subdomains. While the pore-scale subdomains involving significant biofilm growth and reaction are simulated using pore-scale equations, the other subdomains are simulated using continuum-scale equations to save computational time. The pore-scale and continuum-scale subdomains are coupled using a mortar method to ensure continuity of solute concentration and flux at the interfaces. We present results for a simplified two-dimensional system, neglect advection, and use dual Monod kinetics for solute utilization and biofilm growth. The results based on the hybrid model are consistent with the results based on a pore-scale model for three test cases that cover a wide range of Damköhler (Da = reaction rate/diffusion rate) numbers for both homogeneous (spatially periodic) and heterogeneous pore structures. We compare results from the hybrid method with an upscaled continuum model and show that the latter is valid only for cases of small Damköhler numbers, consistent with other results reported in the literature.
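
    The two quantities named above can be sketched directly: dual Monod kinetics limits the utilization rate by two substrates simultaneously, and the Damköhler number compares a reaction rate to a diffusion rate over a characteristic length. The rate constants, half-saturation values, and length scale below are illustrative assumptions, not the paper's test-case parameters.

```python
def dual_monod_rate(S, A, k_max, K_S, K_A):
    """Dual Monod kinetics: utilisation rate limited by both the
    electron donor S and the electron acceptor A (concentrations),
    each through a saturating term."""
    return k_max * (S / (K_S + S)) * (A / (K_A + A))

def damkohler(k_reaction, L, D):
    """Da = reaction rate / diffusion rate over a length scale L
    with diffusion coefficient D (dimensionless for consistent units)."""
    return k_reaction * L ** 2 / D

# At half-saturation (S = K_S and A = K_A) each Monod term is 1/2,
# so the rate is k_max / 4.
r = dual_monod_rate(S=1.0, A=2.0, k_max=4.0, K_S=1.0, K_A=2.0)
print(r)   # 1.0
```

Large Da means reaction outpaces diffusion and pore-scale concentration gradients matter (where the hybrid model keeps the pore-scale subdomain); small Da is the well-mixed regime where the upscaled continuum model remains valid, consistent with the comparison reported above.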

  14. Stainless steel corrosion scale formed in reclaimed water: Characteristics, model for scale growth and metal element release.

    Science.gov (United States)

    Cui, Yong; Liu, Shuming; Smith, Kate; Hu, Hongying; Tang, Fusheng; Li, Yuhong; Yu, Kanghua

    2016-10-01

    Stainless steels generally have extremely good corrosion resistance but are still susceptible to pitting corrosion. As a result, corrosion scales can form on the surface of stainless steel after extended exposure to aggressive aqueous environments, and these scales play an important role in affecting water quality. Our results showed that the interior regions of stainless steel corrosion scales have a high percentage of chromium phases. We reveal the morphology, micro-structure and physicochemical characteristics of stainless steel corrosion scales. Based on these characteristics, stainless steel corrosion scale is identified as a podiform chromite deposit, unlike the deposits formed during iron corrosion. A conceptual model to explain the formation and growth of stainless steel corrosion scale is proposed based on its composition and structure. The scale growth process involves pitting corrosion on the stainless steel surface and the consecutive generation and homogeneous deposition of corrosion products, governed by a series of chemical and electrochemical reactions. This model shows the role of corrosion scales in the mechanism of iron and chromium release from pitting-corroded stainless steel materials. The formation of corrosion scale is strongly related to water quality parameters: the presence of HClO results in higher ferric content inside the scales, while Cl- and SO42- ions in reclaimed water play an important role in pitting corrosion of stainless steel and promote the formation of scales.

  15. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  16. Site-scale groundwater flow modelling of Ceberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1999-06-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, the Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These results suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10^-4 and a flow-wetted surface area of a_r = 0.1 m^2/(m^3 rock): the median travel time is 1720 years, the median canister flux is 3.27×10^-5 m/year, and the median F-ratio is 1.72×10^6 years/m. The base case and the deterministic variant suggest that the variability of the travel times within
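
    The Monte Carlo propagation step can be sketched as follows: sample a hydraulic conductivity from a lognormal distribution, convert it to an advective travel time, and report the median over all realisations. The porosity, path length, gradient, and conductivity statistics below are illustrative assumptions, not the Ceberg site parameters or the HYDRASTAR formulation.

```python
import random
import statistics

# Illustrative Monte Carlo propagation of hydraulic-conductivity
# variability to advective travel time along a single 1D path:
#     t = porosity * length / (K * gradient)
random.seed(1)
porosity, length_m, gradient = 1e-4, 500.0, 0.005

times = []
for _ in range(10000):
    log10_K = random.gauss(-8.0, 0.5)          # lognormal conductivity [m/s]
    K = 10.0 ** log10_K
    darcy_flux = K * gradient                   # Darcy velocity [m/s]
    seconds = porosity * length_m / darcy_flux
    times.append(seconds / (3600 * 24 * 365))   # travel time [years]

# The median is robust to the heavy upper tail of the lognormal,
# which is why such studies report median travel times and fluxes.
print(statistics.median(times))
```

HYDRASTAR does this over a 3D stochastic continuum field with particle tracking rather than a single path, but the statistic being propagated has this structure.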

  17. Impact of Scattering Model on Disdrometer Derived Attenuation Scaling

    Science.gov (United States)

    Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)

    2016-01-01

    NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability both to predict the 40 GHz attenuation from the disdrometer and the 20 GHz timeseries and to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model; however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
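
    A hedged sketch of the frequency-scaling idea: specific attenuation is commonly fitted as a power law of rain rate, and the ratio of the two fits scales a measured 20 GHz attenuation toward 40 GHz. The coefficients below are rough illustrative values, not those derived from the Thies disdrometer, the Mie model, or the T-matrix computation.

```python
def specific_attenuation(rain_rate, k, alpha):
    """Power-law specific attenuation gamma = k * R**alpha [dB/km],
    the usual compact fit to scattering computations integrated over
    a drop size distribution. k and alpha depend on frequency,
    polarization, and the assumed drop shape."""
    return k * rain_rate ** alpha

R = 10.0  # rain rate [mm/h]
gamma_20 = specific_attenuation(R, k=0.075, alpha=1.10)  # ~20 GHz fit
gamma_40 = specific_attenuation(R, k=0.350, alpha=0.94)  # ~40 GHz fit

# Scaling factor applied to the measured 20 GHz attenuation timeseries
# to predict the 40 GHz attenuation along the same path.
scale = gamma_40 / gamma_20
print(scale > 1.0)   # attenuation grows with frequency at this rain rate
```

The choice of scattering model (Mie vs. T-matrix) changes the fitted k and alpha, and hence the scaling factor, which is exactly the sensitivity the analysis above sets out to quantify against the directly measured 40 GHz beacon.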

  18. Derivation of a GIS-based watershed-scale conceptual model for the St. Jones River Delaware from habitat-scale conceptual models.

    Science.gov (United States)

    Reiter, Michael A; Saintil, Max; Yang, Ziming; Pokrajac, Dragoljub

    2009-08-01

    Conceptual modeling is a useful tool for identifying pathways between drivers, stressors, Valued Ecosystem Components (VECs), and services that are central to understanding how an ecosystem operates. The St. Jones River watershed, DE is a complex ecosystem, and because management decisions must include ecological, social, political, and economic considerations, a conceptual model is a good tool for accommodating the full range of inputs. In 2002, a Four-Component, Level 1 conceptual model was formed for the key habitats of the St. Jones River watershed, but since the habitat level of resolution is too fine for some important watershed-scale issues we developed a functional watershed-scale model using the existing narrowed habitat-scale models. The narrowed habitat-scale conceptual models and associated matrices developed by Reiter et al. (2006) were combined with data from the 2002 land use/land cover (LULC) GIS-based maps of Kent County in Delaware to assemble a diagrammatic and numerical watershed-scale conceptual model incorporating the calculated weight of each habitat within the watershed. The numerical component of the assembled watershed model was subsequently subjected to the same Monte Carlo narrowing methodology used for the habitat versions to refine the diagrammatic component of the watershed-scale model. The narrowed numerical representation of the model was used to generate forecasts for changes in the parameters "Agriculture" and "Forest", showing that land use changes in these habitats propagated through the results of the model by the weighting factor. Also, the narrowed watershed-scale conceptual model identified some key parameters upon which to focus research attention and management decisions at the watershed scale. The forecast and simulation results seemed to indicate that the watershed-scale conceptual model does lead to different conclusions than the habitat-scale conceptual models for some issues at the larger watershed scale.

  19. Modelling catchment non-stationarity - multi-scale modelling and data assimilation

    Science.gov (United States)

    Wheater, H. S.; Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.

    2012-12-01

    Modelling environmental change is in many senses a 'Grand Challenge' for hydrology, but poses major methodological challenges for hydrological models. Conceptual models represent complex processes in a simplified and spatially aggregated manner; typically parameters have no direct relationship to measurable physical properties. Calibration using observed data results in parameter equifinality, unless highly parsimonious model structures are employed. Use of such models to simulate effects of catchment non-stationarity is essentially speculative, unless attention is given to the analysis of parameter temporal variability in a non-stationary observation record. Black-box models are similarly constrained by the information content of the observational data. In contrast, distributed physics-based models provide a stronger theoretical basis for the prediction of change. However, while such models have parameters that are in principle measurable, in practice, for catchment-scale application, the measurement scale is inconsistent with the scale of model representation, the costs associated with such an exercise are high, and key properties are spatially variable, often strongly non-linear, and highly uncertain. In this paper we present a framework for modelling catchment non-stationarity that integrates information (with uncertainty) from multiple models and data sources. The context is the need to model the effects of agricultural land use change at multiple scales. A detailed UK multi-scale and multi-site experimental programme has provided data to support high resolution physics-based models of runoff processes that can, for example, represent the effects of soil structural change (due to grazing densities or trafficking), localised tree planting and drainage. Such models necessarily have high spatial resolution (1m in the horizontal plane, 1 cm in the vertical in this case), and hence can be applied at the scale of a field or hillslope element, but would be

  20. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The coincidence between salient feature and the scale of interest can be established straightforwardly, where detailed features appear on small scales and features carrying more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications such as feature classification and viewpoint selection. Experiments show that our method is very helpful as a multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.

  1. Simulation of Acoustics for Ares I Scale Model Acoustic Tests

    Science.gov (United States)

    Putnam, Gabriel; Strutzenberg, Louise L.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well documented set of high fidelity acoustic measurements useful for validation including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. To take advantage of this data, a digital representation of the ASMAT test setup has been constructed and test firings of the motor have been simulated using the Loci/CHEM computational fluid dynamics software. Results from ASMAT simulations with the rocket in both held down and elevated configurations, as well as with and without water suppression have been compared to acoustic data collected from similar live-fire tests. Results of acoustic comparisons have shown good correlation with the amplitude and temporal shape of pressure features and reasonable spectral accuracy up to approximately 1000 Hz. Major plume and acoustic features have been well captured including the plume shock structure, the igniter pulse transient, and the ignition overpressure.

  2. URBAN MORPHOLOGY FOR HOUSTON TO DRIVE MODELS-3/CMAQ AT NEIGHBORHOOD SCALES

    Science.gov (United States)

    Air quality simulation models applied at various horizontal scales require different degrees of treatment in the specifications of the underlying surfaces. As we model neighborhood scales (~1 km horizontal grid spacing), the representation of urban morphological structures (e....

  3. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties ...

  4. Macro and micro-scale modeling of polyurethane foaming processes

    Science.gov (United States)

    Geier, S.; Piesche, M.

    2014-05-01

    Mold filling processes of refrigerators, car dashboards or steering wheels are some of the many application areas of polyurethane foams. The design of these processes still mainly relies on empirical approaches. Therefore, we first developed a modeling approach describing mold filling processes in complex geometries. Hence, it is possible to study macroscopic foam flow and to identify voids. The final properties of polyurethane foams may vary significantly depending on the location within a product. Additionally, the local foam structure influences foam properties like thermal conductivity or impact strength significantly. It is neither possible nor would it be efficient to model complex geometries completely on bubble scale. For this reason, we developed a modeling approach describing the bubble growth and the evolution of the foam structure for a limited number of bubbles in a representative volume. Finally, we coupled our two simulation approaches by introducing tracer particles into our mold filling simulations. Through this coupling, a basis for studying the evolution of the local foam structure in complex geometries is provided.

  5. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  6. Scaling exponents in space plasmas: a fractional Levy model

    Science.gov (United States)

    Watkins, N. W.; Credgington, D.; Hnat, B.; Chapman, S. C.; Freeman, M. P.; Greenhough, J.

    Mandelbrot introduced the concept of fractals to describe the non-Euclidean shape of many aspects of the natural world. In the time series context he proposed the use of fractional Brownian motion (fBm) to model non-negligible temporal persistence (the Joseph Effect), and Levy flights to quantify large discontinuities (the Noah Effect). In space physics these effects are manifested as intermittency and long-range correlation, well-established features of geomagnetic indices and their solar wind drivers. In order to capture and quantify the Noah and Joseph effects in one compact model, we propose the application of a bridge, fractional Levy motion (fLm), to space physics. We perform an initial evaluation of some previous scaling results in this paradigm and show how fLm can model the previously observed exponents (physics/0509058, in press, Space Science Reviews). We discuss the similarities and differences between fLm and ambivalent processes based on fractional kinetic equations (e.g. Brockmann et al., Nature, 2006) and suggest some new directions for the future.

  7. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  8. A Plume Scale Model of Chlorinated Ethene Degradation

    DEFF Research Database (Denmark)

    Murray, Alexandra Marie; Broholm, Mette Martina; Badin, Alice

    Although much is known about the biotic degradation pathways of chlorinated solvents, application of the degradation mechanism at the field scale is still challenging [1]. There are many microbial kinetic models to describe the reductive dechlorination in soil and groundwater, however none of them...... leaked from a dry cleaning facility, and a 2 km plume extends from the source in an unconfined aquifer of homogenous fluvio-glacial sand. The area has significant iron deposits, most notably pyrite, which can abiotically degrade chlorinated ethenes. The source zone underwent thermal (steam) remediation...... in 2006; the plume has received no treatment. The evolution of the site has been intensely documented since before the source treatment. This includes microbial analysis – Dehalococcoides sp. and vcrA genes have been identified and quantified by qPCR – and dual carbon-chlorine isotope analysis [1...

  9. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Quite some studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort of carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also result in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...

  10. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    Science.gov (United States)

    Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.

    2016-09-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from literature. The level of agreement between CFD and experimental results is reasonable.

  11. A methodology for ecosystem-scale modeling of selenium

    Science.gov (United States)

    Presser, T.S.; Luoma, S.N.

    2010-01-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure
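
The linked water-particulate-diet-predator chain amounts to a product of a partitioning factor and trophic transfer factors, so translating a tissue guideline into a dissolved concentration is a one-line inversion. The sketch below is illustrative only: the function name, the single-step food web, and all parameter values (Kd, TTFs, an 8 µg/g guideline) are hypothetical, not the study's site-specific values.

```python
def allowed_dissolved_se(fish_criterion, kd, ttf_invert, ttf_fish):
    """Translate a fish-tissue Se guideline (ug/g dw) into a dissolved Se
    concentration (ug/L) by inverting the linked chain:

        C_particulate [ug/kg dw] = Kd * C_water [ug/L]
        C_invertebrate [ug/g dw] = TTF_invert * C_particulate / 1000
        C_fish         [ug/g dw] = TTF_fish * C_invertebrate
    """
    return fish_criterion * 1000.0 / (kd * ttf_invert * ttf_fish)

# Hypothetical values: Kd = 1000 L/kg, TTF_invert = 3.0, TTF_fish = 1.1,
# and a fish-tissue guideline of 8 ug/g dw
print(allowed_dissolved_se(8.0, 1000.0, 3.0, 1.1))  # dissolved Se in ug/L
```

Because Kd and the invertebrate TTF vary widely among ecosystems and food webs, the same tissue guideline maps to different "safe" dissolved concentrations at different sites, which is the abstract's central point.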

  12. Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate-Scale Hydrodynamic Model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Zhaoqing; Khangaonkar, Tarang; Labiosa, Rochelle G.; Kim, Taeyun

    2010-11-30

    The Washington State Department of Ecology contracted with Pacific Northwest National Laboratory to develop an intermediate-scale hydrodynamic and water quality model to study dissolved oxygen and nutrient dynamics in Puget Sound and to help define potential Puget Sound-wide nutrient management strategies and decisions. Specifically, the project is expected to help determine 1) if current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions are necessary to reduce or eliminate human impacts on dissolved oxygen levels in the sensitive areas. In this study, an intermediate-scale hydrodynamic model of Puget Sound was developed to simulate the hydrodynamics of Puget Sound and the Northwest Straits for the year 2006. The model was constructed using the unstructured Finite Volume Coastal Ocean Model. The overall model grid resolution within Puget Sound in its present configuration is about 880 m. The model was driven by tides, river inflows, and meteorological forcing (wind and net heat flux) and simulated tidal circulations, temperature, and salinity distributions in Puget Sound. The model was validated against observed data of water surface elevation, velocity, temperature, and salinity at various stations within the study domain. Model validation indicated that the model simulates tidal elevations and currents in Puget Sound well and reproduces the general patterns of the temperature and salinity distributions.

  13. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economical and environmental performances are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed designing and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expected variability of the thermal performance of such plants, a method is proposed where data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with a simulation tool, design studies are carried out, ranging from parameter analysis, through energy planning for a new settlement, to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also revealed the need to develop computer models for the more advanced solar collector designs and especially for the control operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to find the relevance of the application

  14. Land surface evapotranspiration modelling at the regional scale

    Science.gov (United States)

    Raffelli, Giulia; Ferraris, Stefano; Canone, Davide; Previati, Maurizio; Gisolo, Davide; Provenzale, Antonello

    2017-04-01

    Climate change has relevant implications for the environment, water resources and human life in general. The observed increment of mean air temperature, in addition to a more frequent occurrence of extreme events such as droughts, may have a severe effect on the hydrological cycle. Besides climate change, land use changes are assumed to be another relevant component of global change in terms of impacts on terrestrial ecosystems: socio-economic changes have led to conversions between meadows and pastures and in most cases to a complete abandonment of grasslands. Water is subject to different physical processes, among which evapotranspiration (ET) is one of the most significant. In fact, ET plays a key role in estimating crop growth, water demand and irrigation water management, so estimating values of ET can be crucial for water resource planning, irrigation requirements and agricultural production. Potential evapotranspiration (PET) is the amount of evaporation that occurs when a sufficient water source is available. It can be estimated from temperatures (mean, maximum and minimum) and solar radiation alone. Actual evapotranspiration (AET) is instead the real quantity of water consumed by soil and vegetation; it is obtained as a fraction of PET. The aim of this work was to apply a simplified hydrological model to calculate AET for the province of Turin (Italy) in order to assess the water content and estimate the groundwater recharge at a regional scale. The soil is seen as a bucket (FAO56 model, Allen et al., 1998) made of different layers, which interact with water and vegetation. The water balance is given by precipitation (both rain and snow) and dew as positive inputs, while AET, runoff and drainage represent the rates of water leaving the soil. The difference between inputs and outputs is the water stock. Model data inputs are: soil characteristics (percentage of clay, silt, sand, rocks and organic matter); soil depth; the wilting point (i.e. the
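
The bucket bookkeeping described above can be sketched as a single-layer daily step. This is a minimal illustration of an FAO56-style water balance with a linear soil-moisture stress factor; all parameter values are hypothetical, and it is not the multi-layer model applied to the Turin province.

```python
def bucket_step(storage, precip, pet, capacity, wilting):
    """One daily step of a simplified single-layer FAO56-style bucket.

    All quantities in mm. AET is PET scaled by a linear stress factor
    between the wilting point and capacity; water above capacity leaves
    as drainage/runoff. (Illustrative sketch, not the study's model.)
    """
    available = max(storage - wilting, 0.0)
    stress = min(available / (capacity - wilting), 1.0)  # 0..1 reduction
    aet = pet * stress                    # actual ET limited by moisture
    storage = storage + precip - aet
    drainage = max(storage - capacity, 0.0)  # excess water drains away
    storage -= drainage
    return storage, aet, drainage

# Hypothetical forcing: one 10 mm rain day then a dry spell, PET = 4 mm/day
s = 60.0  # mm, initial storage
for p in [10.0] + [0.0] * 9:
    s, aet, dr = bucket_step(s, p, 4.0, capacity=100.0, wilting=20.0)
```

As the storage approaches the wilting point, the stress factor drives AET toward zero, which is how the model keeps the stock bounded below during droughts.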

  15. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    Science.gov (United States)

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
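
The core recession-analysis result, that aggregating linear reservoirs produces non-linear dQ/dt versus Q behaviour at the outlet, can be reproduced with a toy sketch. The reservoir parameters below are hypothetical, not Panola calibrations.

```python
import numpy as np

def recession(q0, k, t):
    """Outflow of a single linear reservoir: Q(t) = Q0 * exp(-t / k)."""
    return q0 * np.exp(-t / k)

t = np.linspace(0.0, 20.0, 201)

# Two parallel linear reservoirs, e.g. two distinct landscape types
# (hypothetical parameters: fast unit k = 1.5, slow unit k = 10)
q_total = recession(5.0, 1.5, t) + recession(1.0, 10.0, t)

def loglog_recession_slope(q, t):
    """Slope of log(-dQ/dt) vs log(Q); exactly 1 for one linear reservoir."""
    dqdt = np.gradient(q, t)
    mask = dqdt < 0
    return np.polyfit(np.log(q[mask]), np.log(-dqdt[mask]), 1)[0]

print(loglog_recession_slope(recession(5.0, 1.5, t), t))  # close to 1
print(loglog_recession_slope(q_total, t))                 # greater than 1
```

The aggregate's slope exceeds one because early recession is governed by the fast reservoir and late recession by the slow one, mimicking the scale-dependent deviation from linearity reported for the watershed.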

  16. Impact of Spatial Scale on Calibration and Model Output for a Grid-based SWAT Model

    Science.gov (United States)

    Pignotti, G.; Vema, V. K.; Rathjens, H.; Raj, C.; Her, Y.; Chaubey, I.; Crawford, M. M.

    2014-12-01

    The traditional implementation of the Soil and Water Assessment Tool (SWAT) model utilizes common landscape characteristics known as hydrologic response units (HRUs). Discretization into HRUs provides a simple, computationally efficient framework for simulation, but also represents a significant limitation of the model as spatial connectivity between HRUs is ignored. SWATgrid, a newly developed, distributed version of SWAT, provides modified landscape routing via a grid, overcoming these limitations. However, the current implementation of SWATgrid has significant computational overhead, which effectively precludes traditional calibration and limits the total number of grid cells in a given modeling scenario. Moreover, as SWATgrid is a relatively new modeling approach, it remains largely untested with little understanding of the impact of spatial resolution on model output. The objective of this study was to determine the effects of user-defined input resolution on SWATgrid predictions in the Upper Cedar Creek Watershed (near Auburn, IN, USA). Original input data, nominally at 30 m resolution, was rescaled for a range of resolutions between 30 and 4,000 m. A 30 m traditional SWAT model was developed as the baseline for model comparison. Monthly calibration was performed, and the calibrated parameter set was then transferred to all other SWAT and SWATgrid models to isolate the effects of resolution on prediction uncertainty relative to the baseline. Model output was evaluated with respect to stream flow at the outlet and water quality parameters. Additionally, output of SWATgrid models was compared to output of traditional SWAT models at each resolution, utilizing the same scaled input data. A secondary objective considered the effect of scale on calibrated parameter values, where each standard SWAT model was calibrated independently, and parameters were transferred to SWATgrid models at equivalent scales. For each model, computational requirements were evaluated
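
Rescaling gridded input to coarser resolutions can be illustrated with simple block-averaging. This is a generic sketch of the idea (the function and approach are our illustration, not the actual SWATgrid preprocessing).

```python
import numpy as np

def block_average(grid, factor):
    """Coarsen a square raster by block-averaging (e.g. 30 m -> 120 m cells).

    Illustrative sketch of input rescaling; edge rows/columns that do not
    fill a whole block are cropped.
    """
    n = grid.shape[0] // factor * factor
    g = grid[:n, :n]
    m = n // factor
    return g.reshape(m, factor, m, factor).mean(axis=(1, 3))
```

Block-averaging preserves the domain mean (up to cropping), which is why aggregate water-balance terms can survive coarsening even as spatial connectivity and local gradients degrade.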

  17. Development and testing of watershed-scale models for poorly drained soils

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2005-01-01

    Watershed-scale hydrology and water quality models were used to evaluate the cumulative impacts of land use and management practices on downstream hydrology and nitrogen loading of poorly drained watersheds. Field-scale hydrology and nutrient dynamics are predicted by DRAINMOD in both models. In the first model (DRAINMOD-DUFLOW), field-scale predictions are coupled...

  18. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological

  19. Modeling coastal upwelling around a small-scale coastline promontory

    Science.gov (United States)

    Haas, K. A.; Cai, D.; Freismuth, T. M.; MacMahan, J.; Di Lorenzo, E.; Suanda, S. H.; Kumar, N.; Miller, A. J.; Edwards, C. A.

    2016-12-01

    On the US west coast, northerly winds drive coastal ocean upwelling, an important process which brings cold, nutrient-rich water to the nearshore. The coastline geometry has been shown to be a significant factor in the strength of the upwelling process. In particular, the upwelling in the lee of major headlands has been shown to be enhanced. Recent observations from the Pt. Sal region on the coast of southern California have shown the presence of cooler water south of a small (350 m) rocky promontory (Mussel Pt.) during upwelling events. The hypothesis is that the small-scale promontory is creating a lee-side enhancement of the upwelling. To shed some light on this process, numerical simulations of the inner shelf region centered about Pt. Sal are conducted with the ROMS module of the COAWST model system. The model system is configured with four nested grids with resolutions ranging from approximately 600 m, to the outer shelf (~200 m), to the inner shelf (~66 m), and finally to the surf zone (~22 m). A solution from a 1 km grid encompassing our domain provides the boundary conditions for the 600 m grid. Barotropic tidal forcing is incorporated at the 600 m grid to provide tidal variability. This model system, with realistic topography and bathymetry, winds and tides, is able to isolate the forcing mechanisms that explain the emergence of the cold water mass. The simulations focus on the time period of June - July, 2015, corresponding to the pilot study in which observational experiment data was collected. The experimental data consist in part of in situ measurements, including moorings with conductivity, temperature, depth, and flow velocity sensors. The model simulations are able to reproduce the important flow features, including the cooler water mass south of Mussel Pt. As hypothesized, the strength of the upwelling is enhanced on the lee side of Mussel Pt. In addition, periods of wind relaxation where the upwelling ceases and even begins to transform towards downwelling is

  20. Air scaling and modeling studies for the 1/5-scale mark I boiling water reactor pressure suppression experiment

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-01-04

    Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this.

  1. Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD

    Science.gov (United States)

    Agostinelli, Giulia; Baglietto, Emilio

    2017-11-01

    The development of first-principle-based constitutive equations for the Eulerian-Eulerian CFD modeling of annular flow is a major priority to extend the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film, and their deposition. Here we focus first on the aspect of deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. While many integral correlations for deposition have been proposed for lumped-parameter methods, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, thereby negating the scope of CFD application. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.

  2. Upscaling of U(VI) Desorption and Transport from Decimeter-Scale Heterogeneity to Plume-Scale Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, Gary P. [U.S. Geological Survey, Menlo Park, CA (United States); Kohler, Matthias [U.S. Geological Survey, Menlo Park, CA (United States); Kannappan, Ramakrishnan [U.S. Geological Survey, Menlo Park, CA (United States); Briggs, Martin [U.S. Geological Survey, Menlo Park, CA (United States); Day-Lewis, Fred [U.S. Geological Survey, Menlo Park, CA (United States)

    2015-02-24

    Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results in batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  3. On a class of scaling FRW cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Cataldo, Mauricio [Departamento de Física, Universidad del Bío-Bío, Avenida Collao 1202, Casilla 5-C, Concepción (Chile); Arevalo, Fabiola; Minning, Paul, E-mail: mcataldo@ubiobio.cl, E-mail: pminning@udec.cl, E-mail: farevalo@udec.cl [Departamento de Física, Universidad de Concepción, Casilla 160-C, Concepción (Chile)

    2010-02-01

    We study Friedmann-Robertson-Walker cosmological models with matter content composed of two perfect fluids ρ₁ and ρ₂, with barotropic pressure densities p₁/ρ₁ = ω₁ = const and p₂/ρ₂ = ω₂ = const, where one of the energy densities is given by ρ₁ = C₁a^α + C₂a^β, with C₁, C₂, α and β taking constant values. We solve the field equations by using the conservation equation without breaking it into two interacting parts with the help of a coupling interacting term Q. Nevertheless, an interacting term Q may be associated with the found solution, and then a number of cosmological interacting models studied in the literature correspond to particular cases of our cosmological model, specifically those models having constant coupling parameters α̃, β̃ and interacting terms given by Q = α̃Hρ_DM, Q = α̃Hρ_DE, Q = α̃H(ρ_DM + ρ_DE) and Q = α̃Hρ_DM + β̃Hρ_DE, where ρ_DM and ρ_DE are the energy densities of dark matter and dark energy respectively. The studied set of solutions contains a class of cosmological models presenting a scaling behavior at early and at late times. On the other hand, the two-fluid cosmological models considered in this paper also permit a three-fluid interpretation, which is also discussed. In this reinterpretation, for flat Friedmann-Robertson-Walker cosmologies, the requirement of positivity of the energy densities of the dark matter and dark energy components allows the state parameter of dark energy to be in the range −1.37 ≲ ω_DE < −1/3.
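
The association of an interacting term with a solution of this form can be sketched by the standard split of the total conservation equation (the sign convention for Q below is our assumption):

```latex
\dot{\rho}_1 + 3H(1+\omega_1)\rho_1 = Q, \qquad
\dot{\rho}_2 + 3H(1+\omega_2)\rho_2 = -Q .
% With \rho_1 = C_1 a^{\alpha} + C_2 a^{\beta} and \dot{a} = aH,
% one has \dot{\rho}_1 = H\left(\alpha C_1 a^{\alpha} + \beta C_2 a^{\beta}\right),
% so the first equation fixes
Q = H\left[\bigl(\alpha + 3(1+\omega_1)\bigr)C_1 a^{\alpha}
         + \bigl(\beta + 3(1+\omega_1)\bigr)C_2 a^{\beta}\right].
```

Each choice of (α, β, ω₁) thus determines Q, which is how the constant-coupling interacting models listed in the abstract arise as particular cases.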

  4. SMR Re-Scaling and Modeling for Load Following Studies

    Energy Technology Data Exchange (ETDEWEB)

    Hoover, K.; Wu, Q.; Bragg-Sitton, S.

    2016-11-01

This study investigates the creation of a new set of scaling parameters for the Oregon State University Multi-Application Small Light Water Reactor (MASLWR) scaled thermal hydraulic test facility. As part of a study being undertaken by Idaho National Laboratory involving nuclear reactor load-following characteristics, full-power operations need to be simulated, and therefore properly scaled. Presented here are the scaling analysis and the plans for RELAP5-3D simulation.

  5. Meso-scale modeling of irradiated concrete in test reactor

    Energy Technology Data Exchange (ETDEWEB)

    Giorla, A. [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Vaitová, M. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic); Le Pape, Y., E-mail: lepapeym@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Štemberk, P. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic)

    2015-12-15

Highlights: • A meso-scale finite element model for irradiated concrete is developed. • Neutron radiation-induced volumetric expansion is a predominant degradation mode. • Confrontation with expansion and damage obtained from experiments is successful. • Effects of paste shrinkage, creep and ductility are discussed. - Abstract: A numerical model accounting for the effects of neutron irradiation on concrete at the mesoscale is detailed in this paper. Irradiation experiments in a test reactor (Elleuch et al., 1972), i.e., in accelerated conditions, are simulated. Concrete is considered as a two-phase material made of elastic inclusions (aggregate) subjected to thermal and irradiation-induced swelling and embedded in a cementitious matrix subjected to shrinkage and thermal expansion. The role of the hardened cement paste in the post-peak regime (brittle-ductile transition with decreasing loading rate) and creep effects are investigated. Radiation-induced volumetric expansion (RIVE) of the aggregate causes the development and propagation of damage around the aggregate, which further develops into bridging cracks across the hardened cement paste between the individual aggregate particles. The development of damage is aggravated when shrinkage occurs simultaneously with RIVE during the irradiation experiment. The post-irradiation expansion derived from the simulation is well correlated with the experimental data, and the obtained damage levels are fully consistent with previous estimations based on a micromechanical interpretation of the experimental post-irradiation elastic properties (Le Pape et al., 2015). The proposed modeling opens new perspectives for the interpretation of test reactor experiments with regard to the actual operation of light water reactors.

  6. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

Data produced by large-scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers, a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end-user requirements of the discovery process. Our work contrasts with existing research, which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest, and we will end with some observations and conclusions about this research.
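The idea of querying the compressed coefficients directly can be illustrated with a minimal sketch: a plain-NumPy Haar transform of a synthetic 1-D field (invented data, not the authors' code), where only the largest-magnitude coefficients are kept and a range-average query is answered from the compressed model:

```python
import numpy as np

def haar_decompose(signal):
    """Full multi-level orthonormal Haar wavelet decomposition."""
    coeffs = signal.astype(float).copy()
    n = len(coeffs)
    while n > 1:
        half = n // 2
        a = (coeffs[0:n:2] + coeffs[1:n:2]) / np.sqrt(2)  # approximations
        d = (coeffs[0:n:2] - coeffs[1:n:2]) / np.sqrt(2)  # details
        coeffs[:half], coeffs[half:n] = a, d
        n = half
    return coeffs

def haar_reconstruct(coeffs):
    """Inverse of haar_decompose."""
    out = coeffs.copy()
    n = 1
    while n < len(out):
        a, d = out[:n].copy(), out[n:2 * n].copy()
        out[0:2 * n:2] = (a + d) / np.sqrt(2)
        out[1:2 * n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return out

def compress(coeffs, keep):
    """Zero all but the `keep` largest-magnitude coefficients."""
    out = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs))[-keep:]
    out[idx] = coeffs[idx]
    return out

# Synthetic smooth "field" of 256 samples.
x = np.linspace(0.0, 2.0 * np.pi, 256)
field = np.sin(x) + 0.3 * np.sin(5.0 * x)

compressed = compress(haar_decompose(field), keep=32)   # 8:1 compression
approx = haar_reconstruct(compressed)

# Approximate range query answered from the compressed model.
exact_mean = field[64:128].mean()
approx_mean = approx[64:128].mean()
```

In the paper's setting the same pattern would apply per variable and per time step, with the query engine operating on the stored coefficients rather than the raw field.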

  7. The space-scale cube : An integrated model for 2D polygonal areas and scale

    NARCIS (Netherlands)

    Meijers, B.M.; Van Oosterom, P.J.M.

    2011-01-01

    This paper introduces the concept of a space-scale partition, which we term the space-scale cube – analogous with the space-time cube (first introduced by Hägerstrand, 1970). We take the view of ‘map generalization is extrusion of 2D data into the third dimension’ (as introduced by Vermeij et al.,

  8. Pretest Round Robin Analysis of 1:4-Scale Prestressed Concrete Containment Vessel Model

    Energy Technology Data Exchange (ETDEWEB)

    HESSHEIMER,MICHAEL F.; LUK,VINCENT K.; KLAMERUS,ERIC W.; SHIBATA,S.; MITSUGI,S.; COSTELLO,J.F.

    2000-12-18

    The purpose of the program is to investigate the response of representative scale models of nuclear containment to pressure loading beyond the design basis accident and to compare analytical predictions to measured behavior. This objective is accomplished by conducting static, pneumatic overpressurization tests of scale models at ambient temperature. This research program consists of testing two scale models: a steel containment vessel (SCV) model (tested in 1996) and a prestressed concrete containment vessel (PCCV) model, which is the subject of this paper.

  9. A Unified Multi-scale Model for Cross-Scale Evaluation and Integration of Hydrological and Biogeochemical Processes

    Science.gov (United States)

    Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.

    2013-12-01

Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described using different physical laws and/or expressions at the different scales. For example, the flow is governed by the Navier-Stokes equations at the pore scale in soils, by the Darcy law in soil columns and aquifers, and by the Navier-Stokes equations again in open water bodies (ponds, lakes, rivers) and the atmospheric surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be unified to form a unified multi-scale model (UMSM) to systematically investigate the cross-scale, cross-domain behavior of fundamental processes at different scales. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida, will be used as an example for demonstrating the application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales, including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and the role of rooting-zone biogeochemical properties (e.g., root exudates and microbial mucilage) in water storage and wetting/draining. The small-scale simulation results were used to estimate effective water retention properties in soil columns that were superimposed on the bulk soil water retention properties at the DWP site.
The UMSM parameterized from smaller
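At the soil-column scale named above, the Darcy-law building block reduces to a simple elliptic solve. A minimal sketch (invented parameter values, not the UMSM code) for steady saturated flow in a homogeneous 1-D column:

```python
import numpy as np

# Hypothetical 1-D soil column: steady saturated Darcy flow between two
# fixed hydraulic heads, solved with a finite-difference scheme.
L, n = 1.0, 51                  # column length [m], number of nodes
K = 1e-5                        # hydraulic conductivity [m/s] (assumed uniform)
h_top, h_bottom = 1.0, 0.0      # boundary heads [m]

dx = L / (n - 1)
# Assemble the tridiagonal Laplacian for d/dx(K dh/dx) = 0.
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[0, 0] = A[-1, -1] = 1.0       # Dirichlet boundary rows
b[0], b[-1] = h_top, h_bottom

h = np.linalg.solve(A, b)       # hydraulic head profile
# Darcy flux q = -K dh/dx (uniform for this homogeneous column).
q = -K * (h[1] - h[0]) / dx
```

A pore-scale Navier-Stokes solve or an open-water domain would replace this block with a different governing operator, which is precisely the coupling problem the UMSM is meant to unify.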

  10. Model-Scale Experiment of the Seakeeping Performance for R/V Melville, Model 5720

    Science.gov (United States)

    2012-07-01

fiberglass with stainless steel bilge keels. A summary of model particulars, in full and model scale, is provided in Table 1. The hull geometry was...foam. The bilge keels were constructed of stainless steel and fit to match the bilge keel trace from the ship drawings (Figure 6). A weight post...Measuring Devices," NIST Handbook 44, Tina Butcher, Steve Cook, Linda Crown, and Rick Harshman (Editors), National Institute of Standards and

  11. Forest processes from stands to landscapes: exploring model forecast uncertainties using cross-scale model comparison

    Science.gov (United States)

    Michael J. Papaik; Andrew Fall; Brian Sturtevant; Daniel Kneeshaw; Christian Messier; Marie-Josee Fortin; Neal. Simon

    2010-01-01

    Forest management practices conducted primarily at the stand scale result in simplified forests with regeneration problems and low structural and biological diversity. Landscape models have been used to help design management strategies to address these problems. However, there remains a great deal of uncertainty that the actual management practices result in the...

  12. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields sharply declined after a 10×10 m2 modeling grid size. A modeling grid size of about 10×10 m2 was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
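The two diagnostics used here, the semivariogram and the autocorrelation-based characteristic (integral) length, are straightforward to compute. A sketch on a synthetic transect (invented numbers, not the Upper Sheep Creek data):

```python
import numpy as np

def semivariogram(values, positions, lags, tol=0.1):
    """Empirical semivariance gamma(h) = 0.5 * mean[(z_i - z_j)^2]
    over point pairs whose separation is within tol of each lag."""
    d = np.abs(positions[:, None] - positions[None, :])
    sq = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = np.abs(d - h) <= tol
        np.fill_diagonal(mask, False)       # exclude self-pairs
        gamma.append(0.5 * sq[mask].mean())
    return np.array(gamma)

def characteristic_length(values, dx):
    """Integral scale: dx times the sum of the autocorrelation
    up to its first zero crossing."""
    z = values - values.mean()
    ac = np.correlate(z, z, mode='full')[len(z) - 1:]
    ac = ac / ac[0]
    first_zero = np.argmax(ac <= 0) if np.any(ac <= 0) else len(ac)
    return dx * np.sum(ac[:first_zero])

# Synthetic transect: 5 m sampling of a field with ~30 m periodic structure.
rng = np.random.default_rng(0)
x = np.arange(0.0, 500.0, 5.0)
transect = np.sin(2.0 * np.pi * x / 30.0) + 0.1 * rng.standard_normal(len(x))

gamma = semivariogram(transect, x, np.array([5.0, 10.0, 15.0]))
char_len = characteristic_length(transect, dx=5.0)
```

For this transect the semivariance rises with lag toward the half-period, and the integral scale comes out near a quarter of the 30 m structure, mirroring how the 15 m characteristic length was obtained from the field data.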

  13. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields sharply declined after a 10×10 m2 modeling grid size. A modeling grid size of about 10×10 m2 was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.

  14. Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model

    Energy Technology Data Exchange (ETDEWEB)

    T. Miller

    2004-11-15

The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the "Saturated Zone Site-Scale Flow Model" (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p. 2). For the site-scale

  15. Viscoelastic Model for Lung Parenchyma for Multi-Scale Modeling of Respiratory System, Phase II: Dodecahedral Micro-Model

    Energy Technology Data Exchange (ETDEWEB)

    Freed, Alan D.; Einstein, Daniel R.; Carson, James P.; Jacob, Rick E.

    2012-03-01

    In the first year of this contractual effort a hypo-elastic constitutive model was developed and shown to have great potential in modeling the elastic response of parenchyma. This model resides at the macroscopic level of the continuum. In this, the second year of our support, an isotropic dodecahedron is employed as an alveolar model. This is a microscopic model for parenchyma. A hopeful outcome is that the linkage between these two scales of modeling will be a source of insight and inspiration that will aid us in the final year's activity: creating a viscoelastic model for parenchyma.

  16. Common problematic aspects of coupling hydrological models with groundwater flow models on the river catchment scale

    Directory of Open Access Journals (Sweden)

    R. Barthel

    2006-01-01

Full Text Available Model coupling requires a thorough conceptualisation of the coupling strategy, including an exact definition of the individual model domains, the "transboundary" processes and the exchange parameters. It is shown here that in the case of coupling groundwater flow and hydrological models – in particular on the regional scale – it is very important to find a common definition and scale-appropriate process description of groundwater recharge and baseflow (or "groundwater runoff/discharge") in order to achieve a meaningful representation of the processes that link the unsaturated and saturated zones and the river network. As such, integration by means of coupling established disciplinary models is problematic given that in such models, processes are defined from a purpose-oriented, disciplinary perspective and are therefore not necessarily consistent with definitions of the same process in the model concepts of other disciplines. This article contains a general introduction to the requirements and challenges of model coupling in Integrated Water Resources Management, including a definition of the most relevant technical terms, a short description of the commonly used approach of model coupling and finally a detailed consideration of the role of groundwater recharge and baseflow in coupling groundwater models with hydrological models. The conclusions summarize the most relevant problems rather than giving practical solutions. This paper aims to point out that working on a large scale in an integrated context requires rethinking traditional disciplinary workflows and encouraging communication between the different disciplines involved. It is worth noting that the aspects discussed here are mainly viewed from a groundwater perspective, which reflects the author's background.
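To make the recharge/baseflow exchange interface concrete, here is a deliberately minimal coupling loop (toy bucket models with invented parameters, not any of the codes discussed in the article): the hydrological model hands recharge to the groundwater model, which hands baseflow back to a common river node:

```python
# Toy sketch of the coupling pattern: a lumped soil-bucket hydrological
# model passes groundwater recharge to a linear-reservoir groundwater
# model, which returns baseflow to the shared river interface.

def hydrological_step(precip, soil_storage, capacity=100.0, frac=0.3):
    """Soil bucket: store rain; spill beyond capacity splits into
    recharge (to groundwater) and direct runoff (to the river)."""
    soil_storage += precip
    excess = max(soil_storage - capacity, 0.0)
    soil_storage -= excess
    recharge = frac * excess
    runoff = (1.0 - frac) * excess
    return soil_storage, recharge, runoff

def groundwater_step(storage, recharge, k=0.05):
    """Linear reservoir: baseflow proportional to groundwater storage."""
    storage += recharge
    baseflow = k * storage
    storage -= baseflow
    return storage, baseflow

soil, gw = 90.0, 200.0            # initial storages [mm]
streamflow = []
for precip in [20.0, 0.0, 0.0, 35.0, 0.0]:   # mm per time step
    soil, recharge, runoff = hydrological_step(precip, soil)
    gw, baseflow = groundwater_step(gw, recharge)
    streamflow.append(runoff + baseflow)      # common river interface
```

The point of the article is that in real coupled systems the quantities exchanged here (recharge, baseflow) must carry the *same* definition and scale-appropriate process description in both models; in this toy loop that consistency is trivially true by construction.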

  17. Scour around Support Structures of Scaled Model Marine Hydrokinetic Devices

    Science.gov (United States)

    Volpe, M. A.; Beninati, M. L.; Krane, M.; Fontaine, A.

    2013-12-01

    Experiments are presented to explore scour due to flows around support structures of marine hydrokinetic (MHK) devices. Three related studies were performed to understand how submergence, scour condition, and the presence of an MHK device impact scour around the support structure (cylinder). The first study focuses on clear-water scour conditions for a cylinder of varying submergence: surface-piercing and fully submerged. The second study centers on three separate scour conditions (clear-water, transitional and live-bed) around the fully submerged cylinder. Lastly, the third study emphasizes the impact of an MHK turbine on scour around the support structure, in live-bed conditions. Small-scale laboratory testing of model devices can be used to help predict the behavior of MHK devices at full-scale. Extensive studies have been performed on single cylinders, modeling bridge piers, though few have focused on fully submerged structures. Many of the devices being used to harness marine hydrokinetic energy are fully submerged in the flow. Additionally, scour hole dimensions and scour rates have not been addressed. Thus, these three studies address the effect of structure blockage/drag, and the ambient scour conditions on scour around the support structure. The experiments were performed in the small-scale testing platform in the hydraulic flume facility (9.8 m long, 1.2 m wide and 0.4 m deep) at Bucknell University. The support structure diameter (D = 2.54 cm) was held constant for all tests. The submerged cylinder (l/D = 5) and sediment size (d50 = 790 microns) were held constant for all three studies. The MHK device (Dturbine = 10.2 cm) is a two-bladed horizontal axis turbine and the rotating shaft is friction-loaded using a metal brush motor. For each study, bed form topology was measured after a three-hour time interval using a traversing two-dimensional bed profiler. During the experiments, scour hole depth measurements at the front face of the support structure

  18. Relevance of multiple spatial scales in habitat models: A case study with amphibians and grasshoppers

    Science.gov (United States)

    Altmoos, Michael; Henle, Klaus

    2010-11-01

Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study for three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference to single habitat variables changed across nested scales. Most environmental variables were only significant for a habitat model on one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant, which cannot be recognized on lower scales. Similar preferences across scales occurred in only 13 out of 79 cases and in 3 out of 79 cases the preference and avoidance for the same variable were even reversed among scales. Second, we developed habitat models by using a logistic regression on every scale and for all combinations of scales and analysed how the quality of habitat models changed with the scales considered. To achieve a sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except for Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog (Hyla arborea), at least three scales were required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scaled habitat models, we suggest that 2 or 3 scales should be a suitable trade-off, when attempting to define a suitable microscale.
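The scale-combination procedure described here (a logistic regression per scale and per combination of scales, compared by model quality) can be sketched with plain NumPy on synthetic presence/absence data. Both "scales" and all parameter values below are invented for illustration:

```python
import numpy as np
from itertools import combinations

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Logistic regression fitted by batch gradient ascent (NumPy only);
    returns the weights and the maximized log-likelihood."""
    Xb = np.column_stack([np.ones(len(X)), X])   # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    loglik = float(np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
    return w, loglik

# Synthetic presence/absence data driven by variables at two scales.
rng = np.random.default_rng(1)
n = 400
scales = {
    'micro': rng.standard_normal(n),   # e.g. micro-scale vegetation cover
    'macro': rng.standard_normal(n),   # e.g. landscape-scale habitat share
}
logit = 1.5 * scales['micro'] + 1.0 * scales['macro']
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Fit a model for every combination of scales and compare log-likelihoods.
results = {}
for r in (1, 2):
    for combo in combinations(scales, r):
        X = np.column_stack([scales[k] for k in combo])
        _, results[combo] = fit_logistic(X, y)
```

When both scales genuinely influence occupancy, the multi-scale model fits markedly better than any single-scale model, which is the pattern the study reports for most species.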

  19. NASA Standard for Models and Simulations: Credibility Assessment Scale

    Science.gov (United States)

    Babula, Maria; Bertch, William J.; Green, Lawrence L.; Hale, Joseph P.; Mosier, Gary E.; Steele, Martin J.; Woods, Jody

    2009-01-01

As one of its many responses to the 2003 Space Shuttle Columbia accident, NASA decided to develop a formal standard for models and simulations (M&S). Work commenced in May 2005. An interim version was issued in late 2006. This interim version underwent considerable revision following an extensive Agency-wide review in 2007, along with some additional revisions as a result of the review by the NASA Engineering Management Board (EMB) in the first half of 2008. Issuance of the revised, permanent version, hereafter referred to as the M&S Standard or just the Standard, occurred in July 2008. Bertch, Zang and Steele provided a summary review of the development process of this standard up through the start of the review by the EMB. A thorough recount of the entire development process, major issues, key decisions, and all review processes are available in Ref. v. This is the second of a pair of papers providing a summary of the final version of the Standard. Its focus is the Credibility Assessment Scale, a key feature of the Standard, including an example of its application to a real-world M&S problem for the James Webb Space Telescope. The companion paper summarizes the overall philosophy of the Standard and an overview of the requirements. Verbatim quotes from the Standard are integrated into the text of this paper, and are indicated by quotation marks.

  20. Implementation of meso-scale radioactive dispersion model for GPU

    Energy Technology Data Exchange (ETDEWEB)

    Sunarko [National Nuclear Energy Agency of Indonesia (BATAN), Jakarta (Indonesia). Nuclear Energy Assessment Center; Suud, Zaki [Bandung Institute of Technology (ITB), Bandung (Indonesia). Physics Dept.

    2017-05-15

Lagrangian Particle Dispersion Method (LPDM) is applied to model atmospheric dispersion of radioactive material in a meso-scale of a few tens of kilometers for site study purposes. Empirical relationships are used to determine the dispersion coefficient for various atmospheric stabilities. A diagnostic 3-D wind field is solved based on data from one meteorological station using the mass-conservation principle. Particles representing radioactive pollutant are dispersed in the wind field from a point source. Time-integrated air concentration is calculated using a kernel density estimator (KDE) in the lowest layer of the atmosphere. Parallel code is developed for a GTX-660Ti GPU with a total of 1 344 scalar processors using CUDA. A test of a 1-hour release shows that linear speedup is achieved starting at 28 800 particles per hour (pph), up to about 20x at 144 000 pph. Another test simulating a 6-hour release with 36 000 pph resulted in a speedup of about 60x. Statistical analysis reveals that the resulting grid doses are nearly identical in the CPU and GPU versions of the code.
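The LPDM-plus-KDE pipeline is easy to caricature on a CPU. The sketch below (toy parameters and a uniform wind, not the paper's diagnostic wind field or CUDA code) advects particles from a point source with a Gaussian random walk and estimates concentration at receptor points with a Gaussian kernel:

```python
import numpy as np

# Minimal Lagrangian particle dispersion sketch: uniform mean wind plus
# turbulent random walk, then KDE concentration at receptor points.
rng = np.random.default_rng(42)
n_particles, n_steps, dt = 5000, 60, 60.0     # 1-hour release, 1-min steps
u = 3.0                                        # mean wind speed [m/s], +x
sigma = 2.0                                    # turbulent velocity scale [m/s]

pos = np.zeros((n_particles, 2))               # all released at the origin
for _ in range(n_steps):
    pos[:, 0] += u * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    pos[:, 1] += sigma * np.sqrt(dt) * rng.standard_normal(n_particles)

def kde_concentration(points, particles, bandwidth=500.0):
    """Gaussian KDE of particle density at the given points
    (per unit emitted mass)."""
    d2 = ((points[:, None, :] - particles[None, :, :]) ** 2).sum(axis=2)
    kernel = np.exp(-0.5 * d2 / bandwidth**2) / (2.0 * np.pi * bandwidth**2)
    return kernel.mean(axis=1)

# Receptors along the plume axis; the plume centre after 1 h is near
# x = u * dt * n_steps = 10 800 m.
receptors = np.array([[2000.0, 0.0], [11000.0, 0.0], [20000.0, 0.0]])
conc = kde_concentration(receptors, pos)
```

Each particle's kernel evaluation is independent, which is what makes the method embarrassingly parallel and a natural fit for the GPU implementation described in the record.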

  1. Overview of the Ares I Scale Model Acoustic Test Program

    Science.gov (United States)

    Counter, Douglas D.; Houston, Janice D.

    2011-01-01

    Launch environments, such as lift-off acoustic (LOA) and ignition overpressure (IOP), are important design factors for any vehicle and are dependent upon the design of both the vehicle and the ground systems. LOA environments are used directly in the development of vehicle vibro-acoustic environments and IOP is used in the loads assessment. The NASA Constellation Program had several risks to the development of the Ares I vehicle linked to LOA. The risks included cost, schedule and technical impacts for component qualification due to high predicted vibro-acoustic environments. One solution is to mitigate the environment at the component level. However, where the environment is too severe for component survivability, reduction of the environment itself is required. The Ares I Scale Model Acoustic Test (ASMAT) program was implemented to verify the Ares I LOA and IOP environments for the vehicle and ground systems including the Mobile Launcher (ML) and tower. An additional objective was to determine the acoustic reduction for the LOA environment with an above deck water sound suppression system. ASMAT was a development test performed at the Marshall Space Flight Center (MSFC) East Test Area (ETA) Test Stand 116 (TS 116). The ASMAT program is described in this presentation.

  2. Small scale modelling of dynamic impact of debris flows

    Science.gov (United States)

    Sanvitale, Nicoletta; Bowman, Elisabeth

    2017-04-01

Fast landslides, such as debris flows, involve high-speed downslope motion of rocks, soil and water. Engineering attempts to reduce the risk posed by these natural hazards often involve the placement of barriers or obstacles to inhibit movement. The impact pressures exerted by debris flows are difficult to estimate because they depend not only on the geometry and size of the flow and the obstacle but also on the characteristics of the flow mixture. The presence of a solid phase can increase local impact pressure due to hard contacts, often caused by a single boulder. This can lead to higher impact forces than the peak pressure estimates obtained from the hydraulic-based models commonly adopted in such analyses. The proposed study aims to bring new insight into the impact loading of structures generated by segregating granular debris flows. A small-scale flume, designed to enable plane laser-induced fluorescence (PLIF) and digital image correlation (DIC) to be applied internally, will be used for 2D analyses. The flow will incorporate glass particles suitable for refractive index matching (RIM) with a matched fluid to gain optical access to the internal behaviour of the flow, via a laser sheet applied away from sidewall boundaries. For these tests, the focus will be on assessing 2D particle interactions in unsteady flow. The paper will present in detail the methodology and set-up of the experiments, together with some preliminary results.

  3. Scale-adaptive surface modeling of vascular structures

    Directory of Open Access Journals (Sweden)

    Ma Xin

    2010-11-01

Full Text Available Abstract Background The effective geometric modeling of vascular structures is crucial for diagnosis, therapy planning and medical education. These applications require a good balance among surface smoothness, surface accuracy, triangle quality and surface size. Methods Our method first extracts the vascular boundary voxels from the segmentation result, and utilizes these voxels to build a three-dimensional (3D) point cloud whose normal vectors are estimated via covariance analysis. Then a 3D implicit indicator function is computed from the oriented 3D point cloud by solving a Poisson equation. Finally the vessel surface is generated by a proposed adaptive polygonization algorithm for explicit 3D visualization. Results Experiments carried out on several typical vascular structures demonstrate that the presented method yields a smooth, morphologically correct and topologically preserved two-manifold surface, which is scale-adaptive to the local curvature of the surface. Furthermore, the presented method produces fewer and better-shaped triangles with satisfactory surface quality and accuracy. Conclusions Compared to other state-of-the-art approaches, our method achieves a good balance in terms of smoothness, accuracy, triangle quality and surface size. The vessel surfaces produced by our method are suitable for applications such as computational fluid dynamics simulations and real-time virtual interventional surgery.
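The covariance-analysis step of the pipeline (estimating a normal per point before the Poisson solve) is the classic PCA-on-neighborhoods construction. A self-contained sketch on a synthetic "vessel wall" point cloud (invented data, not the authors' code):

```python
import numpy as np

def estimate_normals(points, k=12):
    """Estimate a unit normal per point as the eigenvector of the local
    covariance matrix with the smallest eigenvalue -- the standard PCA
    step used before Poisson surface reconstruction."""
    normals = np.zeros_like(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    for i, row in enumerate(d2):
        nbrs = points[np.argsort(row)[:k]]     # k nearest neighbours
        cov = np.cov(nbrs.T)                   # 3x3 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]             # smallest eigenvalue first
    return normals

# Synthetic "vessel wall": points on a cylinder of radius 5 about the z-axis.
rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
z = rng.uniform(0.0, 10.0, 400)
cloud = np.column_stack([5.0 * np.cos(theta), 5.0 * np.sin(theta), z])

normals = estimate_normals(cloud)
# For a cylinder the true normal at each point is radial; compare up to sign.
radial = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(z)])
alignment = np.abs((normals * radial).sum(axis=1))   # |cos| of the angle
```

Note the sign ambiguity: PCA gives an unoriented normal, which is why a consistent orientation step is needed before the implicit indicator function can be solved for.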

  4. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale...

  5. Proposing an Educational Scaling-and-Diffusion Model for Inquiry-Based Learning Designs

    Science.gov (United States)

    Hung, David; Lee, Shu-Shing

    2015-01-01

    Education cannot adopt the linear model of scaling used by the medical sciences. "Gold standards" cannot be replicated without considering process-in-learning, diversity, and student-variedness in classrooms. This article proposes a nuanced model of educational scaling-and-diffusion, describing the scaling (top-down supports) and…

  6. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches.
The age

  7. Large-scale secondary circulations in the regional climate model COSMO-CLM

    OpenAIRE

    Becker, Nico

    2016-01-01

    Regional climate models (RCMs) are used to add smaller scales to coarser resolved driving data, e. g. from global climate models (GCMs), by using a higher resolution on a limited domain. However, RCMs do not only add scales which are not resolved by the driving model but also deviate from the driving data on larger scales. Thus, RCMs are able to improve the large scales prescribed by the driving data. However, large scale deviations can also lead to instabilities at the model boundaries. A sy...

  8. Linear Inverse Modeling and Scaling Analysis of Drainage Inventories.

    Science.gov (United States)

    O'Malley, C.; White, N. J.

    2016-12-01

    constants can be shown to produce reliable uplift histories. However, these erosional constants appear to vary from continent to continent. Future work will investigate the global relationship between our inversion results, scaling laws, climate models, lithological variation and sedimentary flux.

  9. A numerical model for dynamic crustal-scale fluid flow

    Science.gov (United States)

    Sachau, Till; Bons, Paul; Gomez-Rivas, Enrique; Koehn, Daniel

    2015-04-01

    Fluid flow in the crust is often envisaged and modeled as continuous, yet minimal flow, which occurs over large geological times. This is a suitable approximation for flow as long as it is solely controlled by the matrix permeability of rocks, which in turn is controlled by viscous compaction of the pore space. However, strong evidence (hydrothermal veins and ore deposits) exists that a significant part of fluid flow in the crust occurs strongly localized in both space and time, controlled by the opening and sealing of hydrofractures. We developed, tested and applied a novel computer code, which considers this dynamic behavior and couples it with steady, Darcian flow controlled by the matrix permeability. In this dual-porosity model, fractures open depending on the fluid pressure relative to the solid pressure. Fractures form when matrix permeability is insufficient to accommodate fluid flow resulting from compaction, decompression (Staude et al. 2009) or metamorphic dehydration reactions (Weisheit et al. 2013). Open fractures can close when the contained fluid either seeps into the matrix or escapes by fracture propagation: mobile hydrofractures (Bons, 2001). In the model, closing and sealing of fractures is controlled by a time-dependent viscous law, which is based on the effective stress and on either Newtonian or non-Newtonian viscosity. Our simulations indicate that the bulk of crustal fluid flow in the middle to lower upper crust is intermittent, highly self-organized, and occurs as mobile hydrofractures. This is due to the low matrix porosity and permeability, combined with a low matrix viscosity and, hence, fast sealing of fractures. Stable fracture networks, generated by fluid overpressure, are restricted to the uppermost crust. Semi-stable fracture networks can develop in an intermediate zone, if a critical overpressure is reached. Flow rates in mobile hydrofractures exceed those in the matrix porosity and fracture networks by orders of magnitude

  10. Mokken Scale Analysis for Dichotomous Items Using Marginal Models

    Science.gov (United States)

    van der Ark, L. Andries; Croon, Marcel A.; Sijtsma, Klaas

    2008-01-01

    Scalability coefficients play an important role in Mokken scale analysis. For a set of items, scalability coefficients have been defined for each pair of items, for each individual item, and for the entire scale. Hypothesis testing with respect to these scalability coefficients has not been fully developed. This study introduces marginal modelling…

  11. Strategies for Measuring Wind Erosion for Regional Scale Modeling

    NARCIS (Netherlands)

    Youssef, F.; Visser, S.; Karssenberg, D.J.; Slingerland, E.; Erpul, G.; Ziadat, F.; Stroosnijder, L. Prof.dr.ir.

    2012-01-01

    Windblown sediment transport is mostly measured at field or plot scale due to the high spatial variability over the study area. Regional scale measurements are often limited to measurements of the change in the elevation providing information on net erosion or deposition. For the calibration and

  12. Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas

    2011-01-01

    Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: 5% model measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest (acoustics: 200 - 2,000 Hz full scale equals 4,000 - 40,000 Hz model scale; ignition transient: 0 - 100 Hz full scale equals 0 - 2,000 Hz model scale). Environment exposure included weather exposure (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, and different instrument mounts were used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.
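The frequency scaling quoted above follows from geometric similarity: for a geometrically scaled acoustic test, frequency scales inversely with the length-scale factor. A minimal sketch of that conversion (the function name is illustrative, not from the test documentation):

```python
def to_model_scale(f_full_hz: float, scale: float = 0.05) -> float:
    """Map a full-scale frequency to its model-scale equivalent.

    For a geometrically scaled acoustic test, frequency scales as
    1/scale, so a 5% model shifts frequencies up by a factor of 20.
    """
    return f_full_hz / scale

# Acoustics band: 200 - 2,000 Hz full scale -> 4,000 - 40,000 Hz model scale
acoustics_band = [to_model_scale(f) for f in (200.0, 2000.0)]
```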

  13. Drift-Scale Coupled Processes (DST and TH Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    J. Birkholzer; S. Mukhopadhyay

    2004-09-29

    The purpose of this report is to document drift-scale modeling work performed to evaluate the thermal-hydrological (TH) behavior in Yucca Mountain fractured rock close to waste emplacement drifts. The heat generated by the decay of radioactive waste results in rock temperatures elevated from ambient for thousands of years after emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, giving rise to water redistribution and altered flow paths. The predictive simulations described in this report are intended to investigate fluid flow in the vicinity of an emplacement drift for a range of thermal loads. Understanding the TH coupled processes is important for the performance of the repository because the thermally driven water saturation changes affect the potential seepage of water into waste emplacement drifts. Seepage of water is important because if enough water gets into the emplacement drifts and comes into contact with any exposed radionuclides, it may then be possible for the radionuclides to be transported out of the drifts and to the groundwater below the drifts. For above-boiling rock temperatures, vaporization of percolating water in the fractured rock overlying the repository can provide an important barrier capability that greatly reduces (and possibly eliminates) the potential of water seeping into the emplacement drifts. In addition to this thermal process, water is inhibited from entering the drift opening by capillary forces, which occur under both ambient and thermal conditions (capillary barrier). The combined barrier capability of vaporization processes and capillary forces in the near-field rock during the thermal period of the repository is analyzed and discussed in this report.

  14. Scaling up from field to region for wind erosion prediction using a field-scale wind erosion model and GIS

    Science.gov (United States)

    Zobeck, T.M.; Parker, N.C.; Haskell, S.; Guoding, K.

    2000-01-01

    Factors that affect wind erosion such as surface vegetative and other cover, soil properties and surface roughness usually change spatially and temporally at the field-scale to produce important field-scale variations in wind erosion. Accurate estimation of wind erosion when scaling up from fields to regions, while maintaining meaningful field-scale process details, remains a challenge. The objectives of this study were to evaluate the feasibility of using a field-scale wind erosion model with a geographic information system (GIS) to scale up to regional levels and to quantify the differences in wind erosion estimates produced by different scales of soil mapping used as a data layer in the model. A GIS was used in combination with the revised wind erosion equation (RWEQ), a field-scale wind erosion model, to estimate wind erosion for two 50 km2 areas. Landsat Thematic Mapper satellite imagery from 1993 with 30 m resolution was used as a base map. The GIS database layers included land use, soils, and other features such as roads. The major land use was agricultural fields. Data on 1993 crop management for selected fields of each crop type were collected from local government agency offices and used to 'train' the computer to classify land areas by crop and type of irrigation (agroecosystem) using commercially available software. The land area of the agricultural land uses was overestimated by 6.5% in one region (Lubbock County, TX, USA) and underestimated by about 21% in an adjacent region (Terry County, TX, USA). The total estimated wind erosion potential for Terry County was about four times that estimated for adjacent Lubbock County. The difference in potential erosion among the counties was attributed to regional differences in surface soil texture. In a comparison of different soil map scales in Terry County, the generalised soil map had over 20% more of the land area and over 15% greater erosion potential in loamy sand soils than did the detailed soil map. 
As

  15. Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model

    Science.gov (United States)

    Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.

    2012-12-01

    The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood feeding triatomine insect vectors. Our objectives are: (1) advance knowledge on the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. This funding is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola (New Orleans)) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, multi-educational groups (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects and enables modelers to simulate and visualize topics that would either be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that

  16. Confined swirling jet predictions using a multiple-scale turbulence model

    Science.gov (United States)

    Chen, C. P.

    1985-01-01

    A recently developed multiple-scale turbulence model is used for the numerical prediction of isothermal, confined turbulent swirling flows. Because of the streamline curvature and the nonequilibrium spectral energy transfer nature of swirling flow, the multiple-scale turbulence model includes a different set of response equations for each of the large-scale energetic eddies and the small-scale transfer eddies. Predictions are made of a confined coaxial swirling jet in a sudden expansion, and comparisons are made with experimental data and with the conventional single-scale two-equation model. The multiple-scale model shows significant improvement in predictions of swirling flows over the single-scale k-ε model. A sensitivity study of the effect of prescribed inlet turbulence levels on the flow fields is also included.

  17. Reduced Fracture Finite Element Model Analysis of an Efficient Two-Scale Hybrid Embedded Fracture Model

    KAUST Repository

    Amir, Sahar Z.

    2017-06-09

    A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter; the coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given by a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions and orientations. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) reproduces the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture geometries, the flux exchange parameter changes according to the angle, slope and direction of the fracture. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions, and the results are shown and discussed. A generalization is then illustrated for any slightly compressible single-phase fluid within fractured porous media, and its results are discussed.

  18. Spatially distributed modelling of pesticide leaching at European scale with the PyCatch modelling framework

    Science.gov (United States)

    Schmitz, Oliver; van der Perk, Marcel; Karssenberg, Derek; Häring, Tim; Jene, Bernhard

    2017-04-01

    The modelling of pesticide transport through the soil and estimating its leaching to groundwater is essential for an appropriate environmental risk assessment. Pesticide leaching models commonly used in regulatory processes often lack the capability of providing a comprehensive spatial view, as they are implemented as non-spatial point models or only use a few combinations of representative soils to simulate specific plots. Furthermore, their handling of spatial input and output data and interaction with available Geographical Information Systems tools is limited. Therefore, executing several scenarios simulating and assessing the potential leaching on national or continental scale at high resolution is rather inefficient and prohibits the straightforward identification of areas prone to leaching. We present a new pesticide leaching model component of the PyCatch framework developed in PCRaster Python, an environmental modelling framework tailored to the development of spatio-temporal models (http://www.pcraster.eu). To ensure a feasible computational runtime of large scale models, we implemented an elementary field capacity approach to model soil water. Currently implemented processes are evapotranspiration, advection, dispersion, sorption, degradation and metabolite transformation. Not yet implemented relevant additional processes such as surface runoff, snowmelt, erosion or other lateral flows can be integrated with components already implemented in PyCatch. A preliminary version of the model executes a 20-year simulation of soil water processes for Germany (20 soil layers, 1 km2 spatial resolution, and daily timestep) within half a day using a single CPU. A comparison of the soil moisture and outflow obtained from the PCRaster implementation and PELMO, a commonly used pesticide leaching model, resulted in an R2 of 0.98 for the FOCUS Hamburg scenario. We will further discuss the validation of the pesticide transport processes and show case studies applied to

  19. Using local scale 222Rn data to calibrate large scale SGD numerical modeling along the Alabama coastline

    Science.gov (United States)

    Dimova, N. T.

    2016-02-01

    Current Earth System Models (ESMs) do not include groundwater as a transport mechanism for land-borne constituents to the ocean. However, coastal hydrogeological studies from the last two decades indicate that significant material fluxes have been transported from land to the continental shelf via submarine groundwater discharge (SGD). Constructing realistic large-scale models to assess water and constituent fluxes to coastal areas is fundamental. This paper demonstrates how an independent groundwater tracer approach (based on 222Rn) applied to a small-scale aquifer system can be used to improve the precision of a larger-scale numerical model along the Alabama coastline. Presented here is a case study from the Alabama coastline in the northern Gulf of Mexico (GOM). A simple field technique was used to obtain groundwater seepage (2.4 cm/day) to a small near-shore lake representative of the shallow coastal aquifer. These data were then converted into a site-specific hydraulic conductivity (23 m/day) using Darcy's law and further incorporated into a numerical regional groundwater flow model (MODFLOW/SEAWAT) to improve total SGD flow estimates to the GOM. Given the growing awareness of the importance of SGD for material fluxes into the ocean, better calibration of regional-scale models is critical for realistic forecasts of the potential impacts of climate change and anthropogenic activities.
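The seepage-to-conductivity conversion via Darcy's law can be sketched as follows. The hydraulic gradient below is a hypothetical value chosen for illustration; the abstract reports only the seepage rate (2.4 cm/day) and the resulting conductivity (23 m/day):

```python
def hydraulic_conductivity(q_m_per_day: float, gradient: float) -> float:
    """Darcy's law q = K * i, rearranged to K = q / i.

    q_m_per_day: specific discharge (seepage) in m/day
    gradient:    dimensionless hydraulic gradient
    """
    return q_m_per_day / gradient

q = 0.024      # measured seepage: 2.4 cm/day expressed in m/day
i = 0.00104    # hypothetical hydraulic gradient (not given in the abstract)
K = hydraulic_conductivity(q, i)   # ~23 m/day
```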

  20. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    Directory of Open Access Journals (Sweden)

    A. Ichiba

    2018-01-01

    Full Text Available Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, and model numerical instabilities as well as the computation time requirements. The main findings of this paper enable a replacement of traditional methods of model calibration by innovative methods of model resolution alteration based on the spatial data variability and scaling of flows in urban hydrology.
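One of the fractal tools alluded to above is the box-counting dimension, which quantifies how an input data pattern fills space across observation scales. A minimal sketch of the technique (an illustration under stated assumptions, not the authors' implementation):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary raster.

    mask:  2-D boolean array (e.g. an impervious-surface map)
    sizes: box edge lengths in pixels; each should divide the grid size
    """
    counts = []
    h, w = mask.shape
    for s in sizes:
        # count boxes of edge s containing at least one active pixel
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if mask[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # slope of log N(s) versus log(1/s) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return float(slope)

# A completely filled plane is non-fractal: its dimension is 2
d = box_counting_dimension(np.ones((64, 64), dtype=bool))
```

A scale-dependent (fractal) pattern yields a non-integer dimension, which is the signature of the scale dependence discussed in the abstract.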

  1. Regional scale ecological risk assessment: using the relative risk model

    National Research Council Canada - National Science Library

    Landis, Wayne G

    2005-01-01

    ...) in the performance of regional-scale ecological risk assessments. The initial chapters present the methodology and the critical nature of the interaction between risk assessors and decision makers...

  2. A Coupled fvGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model

    Science.gov (United States)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and Goddard Land Information (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), (3) a discussion of the Goddard WRF version (its developments and applications), and (4) the characteristics of the four-dimensional cloud data

  3. Multiphysics pore-scale model for the rehydration of porous foods

    NARCIS (Netherlands)

    Sman, van der R.G.M.; Vergeldt, F.J.; As, van H.; Dalen, van G.; Voda, A.; Duynhoven, van J.P.M.

    2014-01-01

    In this paper we present a pore-scale model describing the multiphysics occurring during the rehydration of freeze-dried vegetables. This pore-scale model is part of a multiscale simulation model, which should explain the effect of microstructure and pre-treatments on the rehydration rate.

  4. Ares I Scale Model Acoustic Test Liftoff Acoustic Results and Comparisons

    Science.gov (United States)

    Counter, Doug; Houston, Janice

    2011-01-01

    Conclusions: Ares I-X flight data validated the ASMAT LOA results, and Ares I liftoff acoustic environments were verified with scale model test results. Results showed that data book environments were under-conservative for the frustum (Zone 5). Recommendations: data book environments can be updated with scale model test and flight data, and subscale acoustic model testing is useful for future vehicle environment assessments.

  5. Ecosystem Demography Model: Scaling Vegetation Dynamics Across South America

    Data.gov (United States)

    National Aeronautics and Space Administration — This model product contains the source code for the Ecosystem Demography Model (ED version 1.0) as well as model input and output data for a portion of South America...

  6. Ecosystem Demography Model: Scaling Vegetation Dynamics Across South America

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This model product contains the source code for the Ecosystem Demography Model (ED version 1.0) as well as model input and output data for a portion of...

  7. Scaling Surface Fluxes from Tower Footprint to Global Model Pixel Scale Using Multi-Satellite Data Fusion

    Science.gov (United States)

    Anderson, M. C.; Hain, C.; Gao, F.; Semmens, K. A.; Yang, Y.; Schull, M. A.; Ring, T.; Kustas, W. P.; Alfieri, J. G.

    2014-12-01

    There is a fundamental challenge in evaluating performance of global land-surface and climate modeling systems, given that few in-situ observation sets adequately sample surface conditions representative at the global model pixel scale (10-100km). For example, a typical micrometeorological flux tower samples a relatively small footprint ranging from 100m to 1km, depending on tower height and environmental conditions. There is a clear need for diagnostic tools that can effectively bridge this gap in scale, and serve as a means of benchmarking global prognostic modeling systems under current conditions. This paper discusses a multi-scale energy balance modeling system (the Atmosphere-Land Exchange Inverse model and disaggregation utility: ALEXI/DisALEXI) that fuses flux maps generated with thermal infrared (TIR) imagery collected by multiple satellite platforms to estimate daily surface fluxes from field to global scales. These diagnostic assessments, with land-surface temperature (LST) as the primary indicator of surface moisture status, operate under fundamentally different constraints than prognostic land-surface models based on precipitation and water balance, and therefore can serve as a semi-independent benchmark. Furthermore, LST can be retrieved from TIR imagery over a broad range of spatiotemporal resolutions: from several meters (airborne systems; periodically) to ~100m (Landsat; bi-weekly) to 1km (Moderate Resolution Imaging Spectroradiometer - MODIS; daily) to 3-10km (geostationary; hourly). Applications of ALEXI/DisALEXI to flux sites within the US and internationally are described, evaluating daily evapotranspiration retrievals generated at 30m resolution. Annual timeseries of maps at this scale can be useful for better understanding local heterogeneity in the tower vicinity and dependences of observed fluxes on wind direction. If reasonable multi-year performance is demonstrated at the tower footprint scale for flux networks such as the National

  8. Multi-Physics and Multi-Scale Deterioration Modelling of Reinforced Concrete

    DEFF Research Database (Denmark)

    Michel, Alexander; Stang, Henrik; Lepech, M.

    2016-01-01

    , methods and tools for modelling decades-long deterioration and maintenance are much less developed. In this paper, a multi-physics and multi-scale modelling approach for structural deterioration of reinforced concrete components due to reinforcement corrosion is presented. The multi-disciplinary modelling...... approach includes physical, chemical, electrochemical, and fracture mechanical processes at the material and meso-scale, which are further coupled with mechanical deterioration processes at the structural scale....

  9. Evaluation of a plot-scale methane emission model using eddy covariance observations and footprint modelling

    Directory of Open Access Journals (Sweden)

    A. Budishchev

    2014-09-01

    Full Text Available Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from a low spatial representativeness and a poor temporal resolution. Also, during a chamber-flux measurement the air within a chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on the area-weighted averages of modelled fluxes, and by comparing those to the eddy covariance (EC flux. This technique is rather inaccurate, as the area of upscaling might be different from the EC tower footprint, therefore introducing significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r2 = 0.7. In contrast, using the area-weighted average method yielded a low (r2 = 0.14 correlation with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.
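The contrast between the two upscaling methods compared above can be sketched with hypothetical numbers (the class fluxes and fractions below are illustrative, not taken from the study):

```python
import numpy as np

def weighted_flux(fluxes, weights):
    """Weighted average of per-class modelled fluxes."""
    f = np.asarray(fluxes, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(f * w) / np.sum(w))

class_fluxes = [10.0, 50.0]       # modelled CH4 flux per vegetation class
area_fractions = [0.8, 0.2]       # class cover over the fixed upscaling area
footprint_fractions = [0.3, 0.7]  # class cover weighted by the EC footprint

area_avg = weighted_flux(class_fluxes, area_fractions)            # 18.0
footprint_avg = weighted_flux(class_fluxes, footprint_fractions)  # 38.0
```

When the footprint samples the vegetation classes in different proportions than the fixed upscaling area, the two averages diverge, which is the mismatch the footprint-weighted method corrects.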

  10. Pore-to-Darcy Scale Hybrid Multiscale Finite Volume Model for Reactive Flow and Transport

    Science.gov (United States)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2016-12-01

    In the present work we develop a hybrid scheme for the coupling and temporal integration of grid-based, continuum models for pore-scale and Darcy-scale flow and reactive transport. The hybrid coupling strategy consists of applying Darcy-scale and pore-scale flow and reactive transport models over overlapping subdomains Ω_C and Ω_F, and enforcing continuity of state and fluxes by means of restriction and prolongation operations defined over the overlap subdomain Ω_hs ≡ Ω_C ∩ Ω_F. For the pore-scale model, we use a Multiscale Finite Volume (MsFV) characterization of the pore-scale state in terms of Darcy-scale degrees of freedom and local functions defined as the solution of pore-scale problems. The hybrid MsFV coupling results in a local-global combination of effective mass balance relations for the Darcy-scale degrees of freedom and local problems for the pore-scale degrees of freedom that capture pore-scale behavior. Our scheme allows for the rapid coarsening of pore-scale models and the adaptive enrichment of Darcy-scale models with pore-scale information. Additionally, we propose a strategy for modeling the dynamics of the pore-scale solid-liquid boundary due to precipitation and dissolution phenomena, based on the Diffuse Domain method (DDM), which is incorporated into the MsFV approximation of pore-scale states. We apply the proposed hybrid scheme to a reactive flow and transport problem in porous media subject to heterogeneous reactions and the corresponding precipitation and dissolution phenomena.

  11. Prediction of Mineral Scale Formation in Geothermal and Oilfield Operations using the Extended UNIQUAC Model. Part I: Sulphate Scaling Minerals

    DEFF Research Database (Denmark)

    Garcia, Ada V.; Thomsen, Kaj; Stenby, Erling Halfdan

    2005-01-01

    Pressure parameters are added to the Extended UNIQUAC model presented by Thomsen and Rasmussen (1999). The improved model has been used for correlation and prediction of solid-liquid equilibrium (SLE) of scaling minerals (CaSO4, CaSO4·2H2O, BaSO4 and SrSO4) at temperatures up to 300°C and pressur...

  12. Modeling heat efficiency, flow and scale-up in the corotating disc scraped surface heat exchanger

    DEFF Research Database (Denmark)

    Friis, Alan; Szabo, Peter; Karlson, Torben

    2002-01-01

    A comparison of two different scale corotating disc scraped surface heat exchangers (CDHE) was performed experimentally. The findings were compared to predictions from a finite element model. We find that the model predicts well the flow pattern of the two CDHEs investigated. The heat transfer...... performance predicted by the model agrees well with experimental observations for the laboratory scale CDHE whereas the overall heat transfer in the scaled-up version was not in equally good agreement. The inability of the model to predict the heat transfer performance in scale-up leads us to identify the key...

  13. Model Reduction Using Multiple Time Scales in Stochastic Gene Regulatory Networks

    National Research Council Canada - National Science Library

    Peles, Slaven; Munsky, Brian; Khammash, Mustafa

    2006-01-01

    .... Multiple time scales in mathematical models often lead to serious computational difficulties, such as numerical stiffness in the case of differential equations or excessively redundant Monte Carlo...

  14. Modeling the spreading of large-scale wildland fires

    Science.gov (United States)

    Mohamed Drissi

    2015-01-01

    The objective of the present study is twofold. First, the last developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...

  15. Modelling the impact of implementing Water Sensitive Urban Design at a catchment scale

    DEFF Research Database (Denmark)

    Locatelli, Luca; Gabriel, S.; Bockhorn, Britta

    Stormwater management using Water Sensitive Urban Design (WSUD) is expected to be part of future drainage systems. This project aimed to develop a set of hydraulic models of the Harrestrup Å catchment (close to Copenhagen) in order to demonstrate the importance of modeling WSUDs at different scales......, ranging from models of an individual soakaway up to models of a large urban catchment. The models were developed in Mike Urban with a new integrated soakaway model. A small-scale individual soakaway model was used to determine appropriate initial conditions for soakaway models. This model was applied...

  16. Approaches for dealing with various sources of overdispersion in modeling count data: Scale adjustment versus modeling.

    Science.gov (United States)

    Payne, Elizabeth H; Hardin, James W; Egede, Leonard E; Ramakrishnan, Viswanathan; Selassie, Anbesaw; Gebregziabher, Mulugeta

    2017-08-01

    Overdispersion is a common problem in count data. It can occur due to extra population-heterogeneity, omission of key predictors, and outliers. Unless properly handled, this can lead to invalid inference. Our goal is to assess the differential performance of methods for dealing with overdispersion from several sources. We considered six different approaches: unadjusted Poisson regression (Poisson), deviance-scale-adjusted Poisson regression (DS-Poisson), Pearson-scale-adjusted Poisson regression (PS-Poisson), negative-binomial regression (NB), and two generalized linear mixed models (GLMM) with random intercept, log-link and Poisson (Poisson-GLMM) and negative-binomial (NB-GLMM) distributions. To rank order the preference of the models, we used Akaike information criterion/Bayesian information criterion values, standard error, and 95% confidence-interval coverage of the parameter values. To compare these methods, we used simulated count data with overdispersion of different magnitude from three different sources. Mean of the count response was associated with three predictors. Data from two real-case studies are also analyzed. The simulation results showed that NB and NB-GLMM were preferred for dealing with overdispersion resulting from any of the sources we considered. Poisson and DS-Poisson often produced smaller standard-error estimates than expected, while PS-Poisson conversely produced larger standard-error estimates. Thus, it is good practice to compare several model options to determine the best method of modeling count data.
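The population-heterogeneity source of overdispersion discussed in this abstract can be demonstrated with a short pure-Python simulation: a gamma-mixed Poisson (the mixture underlying the NB model) has variance exceeding its mean, which is exactly what the Pearson/deviance scale adjustments estimate. The sample sizes and parameter values below are illustrative assumptions.

```python
import math
import random
import statistics

random.seed(7)

def poisson(lam):
    """Knuth's inversion sampler; adequate for small rates."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

n, mu = 20000, 5.0
# Equidispersed counts: plain Poisson, Var == mean.
pois = [poisson(mu) for _ in range(n)]
# Overdispersed counts: gamma-mixed Poisson, i.e. negative binomial with
# dispersion r = 2, so Var = mu + mu**2 / r > mu.
r = 2.0
nb = [poisson(random.gammavariate(r, mu / r)) for _ in range(n)]

# Variance-to-mean ratio: ~1 for the Poisson data, well above 1 for NB.
print(statistics.pvariance(pois) / statistics.mean(pois))  # close to 1
print(statistics.pvariance(nb) / statistics.mean(nb))      # well above 1
```

Fitting an unadjusted Poisson model to the second data set would understate standard errors by roughly the square root of that ratio, which is the behavior the simulation study above reports for Poisson and DS-Poisson.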

  17. A Three-Dimensional Scale-adaptive Turbulent Kinetic Energy Model in ARW-WRF Model

    Science.gov (United States)

    Zhang, Xu; Bao, Jian-Wen; Chen, Baode

    2017-04-01

    A new three-dimensional (3D) turbulent kinetic energy (TKE) subgrid mixing model is developed to address the problem of simulating the convective boundary layer (CBL) across the terra incognita in the Advanced Research version of the Weather Research and Forecasting Model (ARW-WRF). The new model combines the horizontal and vertical subgrid turbulent mixing into a single energetically consistent framework, in contrast to the conventional one-dimensional (1D) planetary boundary layer (PBL) schemes. The transition between large-eddy simulation (LES) and mesoscale limit is accomplished in the new scale-adaptive model. A series of dry CBL and real-time simulations using the WRF model are carried out, in which the newly-developed, scale-adaptive, more general and energetically consistent TKE-based model is compared with the conventional 1D TKE-based PBL schemes for parameterizing vertical subgrid turbulent mixing against the WRF LES dataset and observations. The characteristics of the WRF-simulated results using the new and conventional schemes are compared. The importance of including the nonlocal component in the vertical buoyancy specification in the newly-developed general TKE-based scheme is illustrated. The improvements of the new scheme over conventional PBL schemes across the terra incognita can be seen in the partitioning of vertical flux profiles. Through comparing the results from the simulations against the WRF LES dataset and observations, we will show the feasibility of using the new scheme in the WRF model in lieu of the conventional PBL parameterization schemes.

  18. Data assimilation in optimizing and integrating soil and water quality water model predictions at different scales

    Science.gov (United States)

    Relevant data about subsurface water flow and solute transport at relatively large scales that are of interest to the public are inherently laborious to obtain and in most cases simply impossible to acquire. Upscaling in which fine-scale models and data are used to predict changes at the coarser scales is the...

  19. Scale invariance implies conformal invariance for the three-dimensional Ising model.

    Science.gov (United States)

    Delamotte, Bertrand; Tissier, Matthieu; Wschebor, Nicolás

    2016-01-01

    Using the Wilson renormalization group, we show that if no integrated vector operator of scaling dimension -1 exists, then scale invariance implies conformal invariance. By using the Lebowitz inequalities, we prove that this necessary condition is fulfilled in all dimensions for the Ising universality class. This shows, in particular, that scale invariance implies conformal invariance for the three-dimensional Ising model.

  20. Field scale heterogeneity of redox conditions in till-upscaling to a catchment nitrate model

    DEFF Research Database (Denmark)

    Hansen, J.R.; Erntsen, V.; Refsgaard, J.C.

    2008-01-01

    Point scale studies in different settings of glacial geology show a large local variation of redox conditions. There is a need to develop an upscaling methodology for catchment scale models. This paper describes a study of field-scale heterogeneity of redox interfaces in a till aquitard within an...

  1. A feasibility and implementation model of small-scale hydropower ...

    African Journals Online (AJOL)

    2016-10-04

    Oct 4, 2016 ... Several other site selection parameters were used to evaluate the Kwa Madiba potential small-scale hydropower site, which include accessibility by vehicle, current electrical grid connection and future electrical grid connectivity, environmental impact and social impact. Okot (2013) evaluates hydropower.

  2. Using Genome-scale Models to Predict Biological Capabilities

    DEFF Research Database (Denmark)

    O’Brien, Edward J.; Monk, Jonathan M.; Palsson, Bernhard O.

    2015-01-01

    Constraint-based reconstruction and analysis (COBRA) methods at the genome scale have been under development since the first whole-genome sequences appeared in the mid-1990s. A few years ago, this approach began to demonstrate the ability to predict a range of cellular functions, including cellular...

  3. Scale-free random graphs and Potts model

    Indian Academy of Sciences (India)

    We introduce a simple algorithm that constructs scale-free random graphs efficiently: each vertex i has a prescribed weight P_i ∝ i^{-μ} (0 < μ < 1) and an edge can connect vertices i and j with rate P_i P_j. The corresponding equilibrium ensemble is identified and the problem is solved by the q → 1 limit of the q-state Potts ...

  4. Studying Children's Early Literacy Development: Confirmatory Multidimensional Scaling Growth Modeling

    Science.gov (United States)

    Ding, Cody

    2012-01-01

    There has been considerable debate over the ways in which children's early literacy skills develop over time. Using confirmatory multidimensional scaling (MDS) growth analysis, this paper directly tested the hypothesis of a cumulative trajectory versus a compensatory trajectory of development in early literacy skills among a group of 1233…

  5. Astronomical Scale of Stellar Distances Using 3-D Models

    Science.gov (United States)

    Fidler, Chuck; Dotger, Sharon

    2010-01-01

    One of the largest challenges of teaching astronomy is bringing the infinite scale of the universe into the four walls of a classroom. However, concepts of astronomy are often the most interesting to students. This article focuses on an alternative method for learning about stars by exploring visible characteristics of the constellation Orion and…

  6. Modelling of a Small Scale Waste Water Treatment Plant (SSWWTP)

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2014-06-01

    Jun 1, 2014 ... source of energy. Future efforts will focus on improving the efficiency of energy used in the waste water [3]. Aim: The aim of this project is to bring into existence a Small Scale Waste Water Treatment Plant that can convert waste water with high Chemical Oxygen Demand (COD) and high Biological ...

  7. Random walk models of large-scale structure

    Indian Academy of Sciences (India)

    Abstract. This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is ...

  8. A feasibility and implementation model of small-scale hydropower ...

    African Journals Online (AJOL)

    Large numbers of households and communities will not be connected to the national electricity grid for the foreseeable future due to high cost of transmission and distribution systems to remote communities and the relatively low electricity demand within rural communities. Small-scale hydropower used to play a very ...

  9. Metric-Asaurus: Conceptualizing Scale Using Dinosaur Models

    Science.gov (United States)

    Gloyna, Lisa; West, Sandra; Martin, Patti; Browning, Sandra

    2010-01-01

    For middle school students who have seen only pictures of dinosaurs in books, in the movies, or on the internet, trying to comprehend the size of these gargantuan animals can be difficult. This lesson provides a way for students to visualize changing scale through studying extinct organisms and to gain a deeper understanding of the history of the…

  10. Spectral scaling of the Leray-α model for two-dimensional turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Lunasin, Evelyn [Department of Mathematics, University of California, San Diego (UCSD), La Jolla, CA 92093 (United States); Kurien, Susan [Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Titi, Edriss S [Department of Mathematics and Department of Mechanical and Aerospace Engineering, University of California, Irvine (UCI), Irvine, CA 92697 (United States)], E-mail: elunasin@math.ucsd.edu, E-mail: skurien@lanl.gov, E-mail: etiti@math.uci.edu

    2008-08-29

    We present data from high-resolution numerical simulations of the Navier-Stokes-α and the Leray-α models for two-dimensional turbulence. It was shown previously (Lunasin et al 2007 J. Turbul. 8 30) that for wavenumbers k such that kα >> 1, the energy spectrum of the smoothed velocity field for the two-dimensional Navier-Stokes-α (NS-α) model scales as k^-7. This result is in agreement with the scaling deduced by dimensional analysis of the flux of the conserved enstrophy using its characteristic time scale. We therefore hypothesize that the spectral scaling of any α-model in the sub-α spatial scales must depend only on the characteristic time scale and dynamics of the dominant cascading quantity in that regime of scales. The data presented here, from simulations of the two-dimensional Leray-α model, confirm our hypothesis. We show that for kα >> 1, the energy spectrum for the two-dimensional Leray-α model scales as k^-5, as expected from the characteristic time scale for the flux of the conserved enstrophy of the Leray-α model. These results lead to our conclusion that the dominant directly cascading quantity of the model equations must determine the scaling of the energy spectrum.
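The dimensional argument invoked in this abstract can be sketched as follows. This is a reconstruction of the standard Kraichnan-type estimate together with the usual Helmholtz-filter relation of the α-models, not the authors' own derivation; the Leray-α exponent in particular depends on which enstrophy is conserved and which velocity sets the cascade time scale, as the abstract emphasizes.

```latex
% 2-D enstrophy cascade (Kraichnan): with a scale-independent enstrophy
% flux \eta (dimensions [t^{-3}]), dimensional analysis fixes the rough
% velocity spectrum:
E(k) \sim \eta^{2/3}\, k^{-3} .
% In the alpha-models the smoothed velocity u is the Helmholtz filtering
% of the rough field v:
\hat{u}(k) = \bigl(1 + \alpha^{2}k^{2}\bigr)^{-1}\,\hat{v}(k),
\qquad k\alpha \gg 1 \;\Rightarrow\; \hat{u}(k) \approx (\alpha k)^{-2}\,\hat{v}(k) .
% The smoothed-field energy spectrum is therefore suppressed by
% (\alpha k)^{-4}, turning k^{-3} into the k^{-7} NS-alpha scaling;
% the Leray-alpha model cascades a different conserved enstrophy with a
% different characteristic time scale, yielding the shallower k^{-5}.
```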

  11. Klobuchar-like Ionospheric Model for Different Scales Areas

    National Research Council Canada - National Science Library

    LIU Chen; LIU Changjian; FENG Xu; XU Lingfeng; DU Ying

    2017-01-01

    Nowadays, Klobuchar is the most widely used ionospheric model in the positioning based on single-frequency terminal, and its different refined models have been proposed for a higher and higher accuracy of positioning...

  12. Klobuchar-like Ionospheric Model for Different Scales Areas

    OpenAIRE

    LIU Chen; LIU Changjian; FENG Xu; XU Lingfeng; DU Ying

    2017-01-01

    Nowadays, Klobuchar is the most widely used ionospheric model in the positioning based on single-frequency terminal, and its different refined models have been proposed for a higher and higher accuracy of positioning. The variation of nighttime TEC with local time and the variation of TEC (total electron content) with latitude have been analyzed using GIMs. After summarizing the model refinement schemes with wide applications, we proposed a Klobuchar-like model for regions with different scal...

  13. Long-Run Properties of Large-Scale Macroeconometric Models

    OpenAIRE

    Kenneth F. Wallis; John D. Whitley

    1987-01-01

    We consider alternative approaches to the evaluation of the long-run properties of dynamic nonlinear macroeconometric models, namely dynamic simulation over an extended database, or the construction and direct solution of the steady-state version of the model. An application to a small model of the UK economy is presented. The model is found to be unstable, but a stable form can be produced by simple alterations to the structure.

  14. Digital terrain model generalization incorporating scale, semantic and cognitive constraints

    Science.gov (United States)

    Partsinevelos, Panagiotis; Papadogiorgaki, Maria

    2014-05-01

    Cartographic generalization is a well-known process accommodating spatial data compression, visualization and comprehension under various scales. In the last few years, there have been several international attempts to construct tangible GIS systems, forming real 3D surfaces using a vast number of mechanical parts along a matrix formation (i.e., bars, pistons, vacuums). Usually, moving bars upon a structured grid push a stretching membrane, resulting in a smooth visualization for a given surface. Most of these attempts suffer in cost, accuracy, resolution and/or speed. Under this perspective, the present study proposes a surface generalization process that incorporates intrinsic constraints of tangible GIS systems, including robotic-motor movement and surface stretching limitations. The main objective is to provide optimized visualizations of 3D digital terrain models with minimum loss of information. That is, to minimize the number of pixels in a raster dataset used to define a DTM, while preserving the surface information. This neighborhood type of pixel relations adheres to the basics of Self Organizing Map (SOM) artificial neural networks, which are often used for information abstraction since they are indicative of intrinsic statistical features contained in the input patterns and provide concise and characteristic representations. Nevertheless, SOM remains a black-box procedure, not capable of coping with possible particularities and semantics of the application at hand. For example, in coastal monitoring applications, the near-coast areas, surrounding mountains and lakes are more important than other features, and generalization should be "biased"-stratified to fulfill this requirement. Moreover, according to the application objectives, we extend the SOM algorithm to incorporate special types of information generalization by differentiating the underlying strategy based on topologic information of the objects included in the application. The final

  15. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  16. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study heavy precipitation processes will be presented.

  17. Using Multi-Scale Modeling Systems to Study the Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo

    2010-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. How to use the multi-satellite simulator to improve precipitation processes will also be discussed.

  18. The mechanical properties modeling of nano-scale materials by molecular dynamics

    NARCIS (Netherlands)

    Yuan, C.; Driel, W.D. van; Poelma, R.; Zhang, G.Q.

    2012-01-01

    We propose a molecular modeling strategy which is capable of modeling the mechanical properties of nano-scale low-dielectric (low-k) materials. The modeling strategy has also been validated by the buckling force of a carbon nanotube (CNT). This modeling framework consists of a model generation method,

  19. Incorporating inductances in tissue-scale models of cardiac electrophysiology

    Science.gov (United States)

    Rossi, Simone; Griffith, Boyce E.

    2017-09-01

    In standard models of cardiac electrophysiology, including the bidomain and monodomain models, local perturbations can propagate at infinite speed. We address this unrealistic property by developing a hyperbolic bidomain model that is based on a generalization of Ohm's law with a Cattaneo-type model for the fluxes. Further, we obtain a hyperbolic monodomain model in the case that the intracellular and extracellular conductivity tensors have the same anisotropy ratio. In one spatial dimension, the hyperbolic monodomain model is equivalent to a cable model that includes axial inductances, and the relaxation times of the Cattaneo fluxes are strictly related to these inductances. A purely linear analysis shows that the inductances are negligible, but models of cardiac electrophysiology are highly nonlinear, and linear predictions may not capture the fully nonlinear dynamics. In fact, contrary to the linear analysis, we show that for simple nonlinear ionic models, an increase in conduction velocity is obtained for small and moderate values of the relaxation time. A similar behavior is also demonstrated with biophysically detailed ionic models. Using the Fenton-Karma model along with a low-order finite element spatial discretization, we numerically analyze differences between the standard monodomain model and the hyperbolic monodomain model. In a simple benchmark test, we show that the propagation of the action potential is strongly influenced by the alignment of the fibers with respect to the mesh in both the parabolic and hyperbolic models when using relatively coarse spatial discretizations. Accurate predictions of the conduction velocity require computational mesh spacings on the order of a single cardiac cell. We also compare the two formulations in the case of spiral break up and atrial fibrillation in an anatomically detailed model of the left atrium, and we examine the effect of intracellular and extracellular inductances on the virtual electrode phenomenon.
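The Cattaneo-type flux closure described in this abstract can be sketched in one spatial dimension. This is a schematic reconstruction, not the paper's full derivation; the symbols (surface-to-volume ratio χ, membrane capacitance C_m, conductivity σ, relaxation time τ) are generic, and the ionic current is dropped in the final reduction.

```latex
% Standard 1-D monodomain (cable) model: parabolic, so perturbations
% propagate at infinite speed.
\chi\bigl(C_m\,\partial_t V + I_{\mathrm{ion}}(V)\bigr) = \partial_x q,
\qquad q = \sigma\,\partial_x V .
% Cattaneo-type closure: the flux relaxes toward Ohm's law over a time \tau,
\tau\,\partial_t q + q = \sigma\,\partial_x V ,
% which (neglecting I_ion) turns the cable equation into a telegraph
% equation,
\tau\,\chi C_m\,\partial_{tt} V + \chi C_m\,\partial_t V = \sigma\,\partial_{xx} V ,
% with the finite propagation speed
c = \sqrt{\sigma / (\tau\,\chi\,C_m)} ,
% and \tau/\sigma playing the role of an axial inductance per unit length
% in the equivalent circuit.
```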

  20. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for im...... hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high quality results.......Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied...... for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific...

  1. The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.

    2004-01-01

    The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source...... (heat and smoke source) may improve the possibility of obtaining Reynolds number independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement...

  2. Application of bioreactor design principles and multivariate analysis for development of cell culture scale down models.

    Science.gov (United States)

    Tescione, Lia; Lambropoulos, James; Paranandi, Madhava Ram; Makagiansar, Helena; Ryll, Thomas

    2015-01-01

    A bench scale cell culture model representative of manufacturing scale (2,000 L) was developed based on oxygen mass transfer principles, for a CHO-based process producing a recombinant human protein. Cell culture performance differences across scales are characterized most often by sub-optimal performance in manufacturing scale bioreactors. By contrast in this study, reduced growth rates were observed at bench scale during the initial model development. Bioreactor models based on power per unit volume (P/V), volumetric mass transfer coefficient (kLa), and oxygen transfer rate (OTR) were evaluated to address this scale performance difference. Lower viable cell densities observed for the P/V model were attributed to higher sparge rates and reduced oxygen mass transfer efficiency (kLa) of the small scale hole spargers. Increasing the sparger kLa by decreasing the pore size resulted in a further decrease in growth at bench scale. Due to the sensitivity of the cell line to gas sparge rate and bubble size that was revealed by the P/V and kLa models, an OTR model based on oxygen enrichment and increased P/V was selected that generated endpoint sparge rates representative of 2,000 L scale. This final bench scale model generated similar growth rates as manufacturing. In order to take into account other routinely monitored process parameters besides growth, a multivariate statistical approach was applied to demonstrate validity of the small scale model. After the model was selected based on univariate and multivariate analysis, product quality was generated and verified to fall within the 95% confidence limit of the multivariate model. © 2014 Wiley Periodicals, Inc.
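The OTR-matching logic this abstract describes can be sketched with a generic power-law correlation of the van't Riet form, kLa = C·(P/V)^a·(vg)^b. The constants and setpoints below are illustrative placeholders, not the authors' values or a validated correlation.

```python
# Sketch of matching oxygen transfer across bioreactor scales using a
# generic power-law correlation kLa = C * (P/V)**a * vg**b.
# All constants are hypothetical, for illustration only.

C, a, b = 0.002, 0.7, 0.2          # assumed correlation constants

def kla(power_per_volume, gas_velocity):
    """Volumetric mass transfer coefficient from the assumed correlation."""
    return C * power_per_volume ** a * gas_velocity ** b

# Hypothetical 2,000 L setpoints define the target kLa.
kla_target = kla(power_per_volume=60.0, gas_velocity=0.004)

# At bench scale, fix P/V (raised here, as in the selected OTR model) and
# back-solve the sparge velocity that reproduces the target kLa.
pv_bench = 100.0
vg_bench = (kla_target / (C * pv_bench ** a)) ** (1.0 / b)

print(round(vg_bench, 6))
```

With these numbers the higher bench-scale P/V lets a lower sparge velocity deliver the same kLa, which mirrors the trade-off the study exploited to avoid sparge-sensitive growth effects.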

  3. Development of a scale down cell culture model using multivariate analysis as a qualification tool.

    Science.gov (United States)

    Tsang, Valerie Liu; Wang, Angela X; Yusuf-Makagiansar, Helena; Ryll, Thomas

    2014-01-01

    In characterizing a cell culture process to support regulatory activities such as process validation and Quality by Design, developing a representative scale down model for design space definition is of great importance. The manufacturing bioreactor should ideally reproduce bench scale performance with respect to all measurable parameters. However, due to intrinsic geometric differences between scales, process performance at manufacturing scale often varies from bench scale performance, typically exhibiting differences in parameters such as cell growth, protein productivity, and/or dissolved carbon dioxide concentration. Here, we describe a case study in which a bench scale cell culture process model is developed to mimic historical manufacturing scale performance for a late stage CHO-based monoclonal antibody program. Using multivariate analysis (MVA) as the primary data analysis tool in addition to traditional univariate analysis techniques to identify gaps between scales, process adjustments were implemented at bench scale, resulting in an improved scale down cell culture process model. Finally, we propose an approach for small scale model qualification including three main aspects: MVA, comparison of key physiological rates, and comparison of product quality attributes.

  4. Multi-scale Modeling of Radiation Damage: Large Scale Data Analysis

    Science.gov (United States)

    Warrier, M.; Bhardwaj, U.; Bukkuru, S.

    2016-10-01

    Modification of materials in nuclear reactors due to neutron irradiation is a multiscale problem. These neutrons pass through materials creating several energetic primary knock-on atoms (PKA) which cause localized collision cascades creating damage tracks, defects (interstitials and vacancies) and defect clusters depending on the energy of the PKA. These defects diffuse and recombine throughout the whole duration of operation of the reactor, thereby changing the micro-structure of the material and its properties. It is therefore desirable to develop predictive computational tools to simulate the micro-structural changes of irradiated materials. In this paper we describe how statistical averages of the collision cascades from thousands of MD simulations are used to provide inputs to Kinetic Monte Carlo (KMC) simulations which can handle larger sizes, more defects and longer time durations. Use of unsupervised learning and graph optimization in handling and analyzing large scale MD data will be highlighted.

  5. Improvements to a global-scale groundwater model to estimate the water table across New Zealand

    Science.gov (United States)

    Westerhoff, Rogier; Miguez-Macho, Gonzalo; White, Paul

    2017-04-01

    Groundwater models at the global scale have become increasingly important in recent years to assess the effects of climate change and groundwater depletion. However, these global-scale models are typically not used for studies at the catchment scale, because they are simplified and too spatially coarse. In this study, we improved the global-scale Equilibrium Water Table (EWT) model, so it could better assess water table depth and water table elevation at the national scale for New Zealand. The resulting National Water Table (NWT) model used improved input data (i.e., national input data of terrain, geology, and recharge) and model equations (e.g., a hydraulic conductivity-depth relation). The NWT model produced maps of the water table that identified the main alluvial aquifers with fine spatial detail. Two regional case studies at the catchment scale demonstrated excellent correlation between the water table elevation and observations of hydraulic head. The NWT model provides an improved estimate of the water table relative to the EWT model. In two case studies the NWT model provided a better approximation to the observed water table for deep aquifers, and the improved resolution of the model made it possible to fill gaps in data-sparse areas. This national model calculated water table depth and elevation across regional jurisdictions. Therefore, the model is relevant where trans-boundary issues, such as source protection and catchment boundary definition, occur. The NWT model also has the potential to constrain the uncertainty of catchment-scale models, particularly where data are sparse. Shortcomings of the NWT model are caused by the inaccuracy of input data and the simplified model properties. Future research should focus on improved estimation of input data (e.g., hydraulic conductivity and terrain). However, more advanced catchment-scale groundwater models should be used where groundwater flow is dominated by confining layers and fractures.
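    Hydraulic conductivity-depth relations of the kind mentioned are commonly taken as an exponential decay; the sketch below assumes that form with placeholder parameters (the actual NWT calibration is not given here).

```python
import math

def hydraulic_conductivity(k_surface, depth_m, efolding_m=100.0):
    """Exponential decrease of hydraulic conductivity with depth,
    a common closure in large-scale water table models.
    Parameter values are illustrative, not the NWT calibration."""
    return k_surface * math.exp(-depth_m / efolding_m)

k0 = 1.0e-4  # surface conductivity, m/s (assumed)
shallow = hydraulic_conductivity(k0, 10.0)
deep = hydraulic_conductivity(k0, 500.0)
```

    The e-folding depth controls how quickly the aquifer becomes effectively impermeable, which in turn shapes the simulated water table depth.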

  6. Modeling and Analysis of Structural Dynamics for a One-Tenth Scale Model NGST Sunshield

    Science.gov (United States)

    Johnston, John; Lienard, Sebastien; Brodeur, Steve (Technical Monitor)

    2001-01-01

    New modeling and analysis techniques have been developed for predicting the dynamic behavior of the Next Generation Space Telescope (NGST) sunshield. The sunshield consists of multiple layers of pretensioned, thin-film membranes supported by deployable booms. Modeling the structural dynamic behavior of the sunshield is a challenging aspect of the problem due to the effects of membrane wrinkling. A finite element model of the sunshield was developed using an approximate engineering approach, the cable network method, to account for membrane wrinkling effects. Ground testing of a one-tenth scale model of the NGST sunshield was carried out to provide data for validating the analytical model. A series of analyses were performed to predict the behavior of the sunshield under the ground test conditions. Modal analyses were performed to predict the frequencies and mode shapes of the test article, and transient response analyses were completed to simulate impulse excitation tests. Comparison was made between analytical predictions and test measurements for the dynamic behavior of the sunshield. In general, the results show good agreement, with the analytical model correctly predicting the approximate frequency and mode shapes for the significant structural modes.

  7. WARM WATER SCALE MODEL EXPERIMENTS FOR MAGNESIUM DIE CASTING

    Energy Technology Data Exchange (ETDEWEB)

    Sabau, Adrian S [ORNL

    2006-01-01

    High-pressure die casting (HPDC) involves the filling of a cavity with molten metal through a thin gate. High gate velocities yield jet break-up and atomization phenomena. In order to improve the quality of magnesium parts, the mold filling pattern, including atomization phenomena, needs to be understood. The goal of this study was to obtain experimental data on jet break-up characteristics under conditions similar to those of magnesium HPDC, and to measure the droplet velocity and size distribution. A scale analysis is first presented in order to identify an appropriate analogue for liquid magnesium alloys. Based on the scale analysis, warm water was chosen as a suitable analogue and different nozzles were manufactured. A two-component phase Doppler particle analyzer (PDPA) and two-component particle image velocimetry (PIV) were then used to obtain droplet diameter and velocity distributions in a 2-D plane.
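    A scale analysis of this kind typically matches dimensionless groups such as the Weber and Reynolds numbers between the molten alloy and the analogue fluid. The sketch below uses approximate, assumed fluid properties and gate conditions (not the paper's values) to show how a Weber-matched water velocity could be derived.

```python
def weber(rho, v, d, sigma):
    """Weber number: inertial vs. surface-tension forces (jet break-up)."""
    return rho * v**2 * d / sigma

def reynolds(rho, v, d, mu):
    """Reynolds number: inertial vs. viscous forces."""
    return rho * v * d / mu

# Approximate fluid properties (illustrative textbook-range values).
mg_alloy = dict(rho=1590.0, sigma=0.56, mu=1.2e-3)    # liquid Mg alloy
warm_water = dict(rho=983.0, sigma=0.062, mu=4.7e-4)  # water near 60 C

d = 2.0e-3    # gate thickness, m (assumed)
v_mg = 50.0   # gate velocity, m/s (assumed)
we_mg = weber(mg_alloy["rho"], v_mg, d, mg_alloy["sigma"])
# Water velocity that reproduces the magnesium Weber number:
v_water = (we_mg * warm_water["sigma"] / (warm_water["rho"] * d)) ** 0.5
```

    Matching Weber number preserves the break-up regime even though the required water velocity differs from the metal gate velocity.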

  8. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    Science.gov (United States)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the trade-off between computational cost and accuracy. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Such errors in the child models originate both from deficiencies in the coupling method and from inadequate spatial and temporal discretization of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which allows more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupled model is improved either by refining the grid or time-step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme shows promise for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
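    The downward delivery of parent-scale heads to child boundary nodes can be illustrated with simple linear interpolation in space and time, as a minimal 1-D sketch (node positions and head values are invented):

```python
def interp_space(x_parent, h_parent, x):
    """Linearly interpolate parent-grid heads onto a child boundary
    node at position x (1-D sketch of the spatial delivery step)."""
    for (x0, h0), (x1, h1) in zip(zip(x_parent, h_parent),
                                  zip(x_parent[1:], h_parent[1:])):
        if x0 <= x <= x1:
            w = (x - x0) / (x1 - x0)
            return (1 - w) * h0 + w * h1
    raise ValueError("x outside parent grid")

def interp_time(h_t0, h_t1, t0, t1, t):
    """Linearly interpolate between two parent time levels."""
    w = (t - t0) / (t1 - t0)
    return (1 - w) * h_t0 + w * h_t1

# Parent heads (m) at coarse nodes -- assumed values.
x_parent = [0.0, 100.0, 200.0]
h_parent = [50.0, 48.0, 45.0]
h_boundary = interp_space(x_parent, h_parent, 150.0)  # child node at x = 150
```

    The interpolation error at these boundary nodes is one source of the sub-model errors the study quantifies; refining the parent grid or time step shrinks it.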

  9. Large Scale Solar Heating:Evaluation, Modelling and Designing

    OpenAIRE

    Heller, Alfred; Svendsen, Svend; Furbo, Simon

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out based on measurements on the Marstal plant, Denmark, and through comparison with published and unpublished data from other plants. Evaluations on the thermal, economical and environmental performance...

  10. Modelling cloud effects on ozone on a regional scale : A case study

    NARCIS (Netherlands)

    Matthijsen, J.; Builtjes, P.J.H.; Meijer, E.W.; Boersen, G.

    1997-01-01

    We have investigated the influence of clouds on ozone on a regional scale (Europe) with a regional scale photochemical dispersion model (LOTOS). The LOTOS-model calculates ozone and other photo-oxidant concentrations in the lowest three km of the troposphere, using actual meteorologic data and

  11. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    time, especially with respect to large-scale transport models. The study described in this paper contributes to fill the gap by investigating the effects of uncertainty in socio-economic variables growth rate projections on large-scale transport model forecasts, using the Danish National Transport...

  12. Conclusions of the NATO ARW on Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  13. Scale-splitting error in complex automata models for reaction-diffusion systems

    NARCIS (Netherlands)

    Caiazzo, A.; Falcone, J.L.; Chopard, B.; Hoekstra, A.G.

    2008-01-01

    Complex Automata (CxA) have been recently proposed as a paradigm for the simulation of multiscale systems. A CxA model is constructed decomposing a multiscale process into single scale sub-models, each simulated using a Cellular Automata algorithm, interacting across the scales via appropriate

  14. A model-based framework for incremental scale-up of wastewater treatment processes

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Sin, Gürkan

    Scale-up is traditionally done following specific ratios or rules of thumb which do not lead to optimal results. We present a generic framework to assist in scale-up of wastewater treatment processes based on multiscale modelling, multiobjective optimisation and a validation of the model at the new...

  15. COMPUTATIONAL FLUID DYNAMICS MODELING OF SCALED HANFORD DOUBLE SHELL TANK MIXING - CFD MODELING SENSITIVITY STUDY RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    JACKSON VL

    2011-08-31

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  16. Modelling the spreading of large-scale wildland fires

    CERN Document Server

    Drissi, Mohamed

    2014-01-01

    The objective of the present study is twofold. First, the last developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning cells that strongly depends on local conditions of wind, topography, and vegetation. Radiation and convection from the flaming zone, and radiative heat loss to the ambient are considered in the preheating process of unburned cells. Second, the model is applied to an Australian grassland fire experiment as well as to a real fire that took place in Corsica in 2009. Predictions compare favorably to experiments in terms of rate of spread, area and shape of the burn. Finally, the sensitivity of the model outcomes (here the rate of spread) to six input parameters is studied using a two-level full factorial design.

  17. Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.

    Science.gov (United States)

    Green, Sara; Batterman, Robert

    2017-02-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Lagrangian scheme to model subgrid-scale mixing and spreading in heterogeneous porous media

    Science.gov (United States)

    Herrera, P. A.; Cortínez, J. M.; Valocchi, A. J.

    2017-04-01

    Small-scale heterogeneity of permeability controls spreading, dilution, and mixing of solute plumes at large scale. However, conventional numerical simulations of solute transport are unable to resolve scales of heterogeneity below the grid scale. We propose a Lagrangian numerical approach to implement closure models to account for subgrid-scale spreading and mixing in Darcy-scale numerical simulations of solute transport in mildly heterogeneous porous media. The novelty of the proposed approach is that it considers two different dispersion coefficients to account for advective spreading mechanisms and local-scale dispersion. Using results of benchmark numerical simulations, we demonstrate that the proposed approach is able to model subgrid-scale spreading and mixing provided there is a correct choice of block-scale dispersion coefficient. We also demonstrate that for short travel times it is only possible to account for spreading or mixing using a single block-scale dispersion coefficient. Moreover, we show that it is necessary to use time-dependent dispersion coefficients to obtain correct mixing rates. On the contrary, for travel times that are large in comparison to the typical dispersive time scale, it is possible to use a single expression to compute the block-dispersion coefficient, which is equal to the asymptotic limit of the block-scale macrodispersion coefficient proposed by Rubin et al. (1999). Our approach provides a flexible and efficient way to model subgrid-scale mixing in numerical models of large-scale solute transport in heterogeneous aquifers. We expect that these findings will help to better understand the applicability of the advection-dispersion equation (ADE) to simulate solute transport at the Darcy scale in heterogeneous porous media.
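    A minimal random-walk particle-tracking sketch of the two-coefficient idea, with one coefficient standing in for unresolved advective spreading and another for local-scale mixing (the split and all parameter values are illustrative, not the authors' closure):

```python
import math
import random

def track(n_particles, v, d_spread, d_mix, dt, n_steps, seed=0):
    """1-D random-walk particle tracking: deterministic advection plus
    Gaussian jumps whose variance combines a spreading coefficient
    (d_spread) and a local mixing coefficient (d_mix)."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            jump = math.sqrt(2.0 * (d_spread + d_mix) * dt) * rng.gauss(0.0, 1.0)
            xs[i] += v * dt + jump
    return xs

xs = track(n_particles=5000, v=1.0, d_spread=0.08, d_mix=0.02,
           dt=0.1, n_steps=10)
mean = sum(xs) / len(xs)                           # ~ v * t
var = sum((x - mean) ** 2 for x in xs) / len(xs)   # ~ 2 * (d_spread + d_mix) * t
```

    Tracking the two contributions separately is what lets a scheme like this distinguish plume spreading (second moment) from true dilution of concentrations.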

  19. Climate-Informed Multi-Scale Stochastic (CIMSS) Hydrological Modeling: Incorporating Decadal-Scale Variability Using Paleo Data

    Science.gov (United States)

    Thyer, M. A.; Henley, B. J.; Kuczera, G. A.

    2014-12-01

    Incorporating the influence of climate change and long-term climate variability in the estimation of drought risk is a priority for water resource planners. Australia's highly variable rainfall regime is influenced by ocean-atmosphere climate mechanisms which induce decadal-scale variability in hydrological data. This talk will summarize research on the identification of appropriate models for incorporating decadal-scale variability into stochastic hydrological models. These include autoregressive models, hidden Markov models, and a Bayesian hierarchical approach that combines paleo information on climate indices and hydrological data into a climate-informed multi-time-scale stochastic (CIMSS) framework. To characterize long-term variability for the first level of the hierarchy, paleoclimate and instrumental data describing the Interdecadal Pacific Oscillation (IPO) and the Pacific Decadal Oscillation (PDO) are analyzed. A new paleo IPO-PDO time series dating back 440 yr is produced, combining seven IPO-PDO paleo sources using an objective smoothing procedure to fit low-pass filters to individual records. The paleo data analysis indicates that wet/dry IPO-PDO states have a broad range of run lengths, with 90% between 3 and 33 yr and a mean of 15 yr. Model selection techniques were used to determine a suitable stochastic model to simulate these run lengths. For the second level of the hierarchy, a seasonal rainfall model is conditioned on the simulated IPO-PDO state. Application to two high quality rainfall sites close to water supply reservoirs found that mean seasonal rainfall in the IPO-PDO dry state was 15%-28% lower than in the wet state. Furthermore, analysis of the impact of the CIMSS framework on drought risk found that short-term drought risks conditional on IPO/PDO state were far higher than those estimated with the traditional AR(1) model.
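    Under a simple two-state Markov persistence model (one of the candidate model classes mentioned), a mean wet/dry run length of 15 yr corresponds to an annual stay probability of 1 - 1/15. The sketch below simulates run lengths under that assumption; it is illustrative, not the model actually selected by the study.

```python
import random

def simulate_runs(mean_run_yr, n_runs, seed=1):
    """Draw wet/dry state run lengths (in years) from a two-state Markov
    chain whose annual stay probability is 1 - 1/mean_run_yr."""
    p_stay = 1.0 - 1.0 / mean_run_yr
    rng = random.Random(seed)
    runs = []
    for _ in range(n_runs):
        length = 1
        while rng.random() < p_stay:
            length += 1
        runs.append(length)
    return runs

runs = simulate_runs(15.0, 20000)
mean_run = sum(runs) / len(runs)  # close to the target mean of 15 yr
```

    The geometric run-length distribution this produces is heavily right-skewed, consistent with the broad 3-33 yr range reported for the paleo record.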

  20. Multi-scale modelling to evaluate building energy consumption at the neighbourhood scale

    Science.gov (United States)

    Coccolo, Silvia; Kaempf, Jérôme; Scartezzini, Jean-Louis

    2017-01-01

    A new methodology is proposed to couple a meteorological model with a building energy use model. The aim of such a coupling is to improve the boundary conditions of both models with no significant increase in computational time. In the present case, the Canopy Interface Model (CIM) is coupled with CitySim. CitySim provides the geometrical characteristics to CIM, which then calculates a high resolution profile of the meteorological variables. These are in turn used by CitySim to calculate the energy flows in an urban district. We have conducted a series of experiments on the EPFL campus in Lausanne, Switzerland, to show the effectiveness of the coupling strategy. First, measured data from the campus for the year 2015 are used to force CIM and to evaluate its aptitude to reproduce high resolution vertical profiles. Second, we compare the use of local climatic data and data from a meteorological station located outside the urban area, in an evaluation of energy use. In both experiments, we demonstrate the importance of using, in building energy software, meteorological variables that account for the urban microclimate. Furthermore, we also show that some building and urban forms are more sensitive to the local environment.

  1. Development of a Scale Model for High Flux Isotope Reactor Cycle 400

    Energy Technology Data Exchange (ETDEWEB)

    Ilas, Dan [ORNL

    2012-03-01

    The development of a comprehensive SCALE computational model for the High Flux Isotope Reactor (HFIR) is documented and discussed in this report. The SCALE model has equivalent features and functionality as the reference MCNP model for Cycle 400 that has been used extensively for HFIR safety analyses and for HFIR experiment design and analyses. Numerical comparisons of the SCALE and MCNP models for the multiplication constant, power density distribution in the fuel, and neutron fluxes at several locations in HFIR indicate excellent agreement between the results predicted with the two models. The SCALE HFIR model is presented in sufficient detail to provide the users of the model with a tool that can be easily customized for various safety analysis or experiment design requirements.

  2. Multi-scale modeling of urban air pollution: development of a Street-in-Grid model

    Science.gov (United States)

    Kim, Youngseob; Wu, You; Seigneur, Christian; Roustan, Yelva

    2016-04-01

    A new multi-scale model of urban air pollution is presented. This model combines a chemical-transport model (CTM) that includes a comprehensive treatment of atmospheric chemistry and transport at spatial scales greater than 1 km and a street-network model that describes the atmospheric concentrations of pollutants in an urban street network. The street-network model is based on the general formulation of the SIRANE model and consists of two main components: a street-canyon component and a street-intersection component. The street-canyon component calculates the mass transfer velocity at the top of the street canyon (roof top) and the mean wind velocity within the street canyon. The estimation of the mass transfer velocity depends on the intensity of the standard deviation of the vertical velocity at roof top. The effect of various formulations of this mass transfer velocity on the pollutant transport at roof-top level is examined. The street-intersection component calculates the mass transfer from a given street to other streets across the intersection. These mass transfer rates among the streets are calculated using the mean wind velocity calculated for each street and are balanced so that the total incoming flow rate is equal to the total outgoing flow rate from the intersection, including the flow between the intersection and the overlying atmosphere at roof top. In the default option, the Leighton photostationary cycle among ozone (O3) and nitrogen oxides (NO and NO2) is used to represent the chemical reactions within the street network. However, the influence of volatile organic compounds (VOC) on the pollutant concentrations increases when the nitrogen oxides (NOx) concentrations are low. To account for the possible VOC influence on street-canyon chemistry, the CB05 chemical kinetic mechanism, which includes 35 VOC model species, is implemented in this street-network model. A sensitivity study is conducted to assess the uncertainties associated with the use of
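    The Leighton photostationary cycle used as the default chemistry balances NO2 photolysis against the NO + O3 reaction, giving [O3] = j_NO2 [NO2] / (k [NO]). A sketch with typical daytime magnitudes (illustrative values, not inputs to the model described):

```python
def photostationary_o3(j_no2, k_no_o3, no2, no):
    """Leighton photostationary ozone: [O3] = j_NO2 * [NO2] / (k * [NO])."""
    return j_no2 * no2 / (k_no_o3 * no)

# Typical daytime magnitudes (illustrative, order-of-magnitude values):
j_no2 = 8.0e-3      # NO2 photolysis frequency, 1/s
k_no_o3 = 1.9e-14   # NO + O3 rate constant, cm^3 molecule^-1 s^-1
no2 = 2.5e11        # NO2 number density, molecules/cm^3 (~10 ppb)
no = 1.0e11         # NO number density, molecules/cm^3 (~4 ppb)
o3 = photostationary_o3(j_no2, k_no_o3, no2, no)
```

    With these magnitudes the relation yields an ozone number density of order 1e12 molecules/cm^3, i.e. a few tens of ppb, which is why the simple cycle is a reasonable default when NOx dominates and VOC chemistry can be neglected.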

  3. Evaluation of biogeochemical models at local and regional scale

    NARCIS (Netherlands)

    Kros, J.

    2002-01-01

    Additional index words: nutrient cycling, soil modelling, uncertainty analysis, calibration, scenario analysis, model error

    In this thesis different nutrient cycling and

  4. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...

  5. Hydrological Modelling of Small Scale Processes in a Wetland Habitat

    DEFF Research Database (Denmark)

    Johansen, Ole; Jensen, Jacob Birk; Pedersen, Morten Lauge

    2009-01-01

    Numerical modelling of the hydrology in a Danish rich fen area has been conducted. By collecting various data in the field the model has been successfully calibrated and the flow paths as well as the groundwater discharge distribution have been simulated in details. The results of this work have ...

  6. A model for chlorophyll fluorescence and photosynthesis at leaf scale

    NARCIS (Netherlands)

    Tol, van der C.; Verhoef, W.; Rosema, A.

    2009-01-01

    This paper presents a leaf biochemical model for steady-state chlorophyll fluorescence and photosynthesis of C3 and C4 vegetation. The model is a tool to study the relationship between passively measured steady-state chlorophyll fluorescence and actual photosynthesis, and its evolution during the

  7. Modeling on the grand scale: LANDFIRE lessons learned

    Science.gov (United States)

    Kori Blankenship; Jim Smith; Randy Swaty; Ayn J. Shlisky; Jeannie Patton; Sarah. Hagen

    2012-01-01

    Between 2004 and 2009, the LANDFIRE project facilitated the creation of approximately 1,200 unique state-andtransition models (STMs) for all major ecosystems in the United States. The primary goal of the modeling effort was to create a consistent and comprehensive set of STMs describing reference conditions and to inform the mapping of a subset of LANDFIRE’s spatial...

  8. Reasoning with Atomic-Scale Molecular Dynamic Models

    Science.gov (United States)

    Pallant, Amy; Tinker, Robert F.

    2004-01-01

    The studies reported in this paper are an initial effort to explore the applicability of computational models in introductory science learning. Two instructional interventions are described that use a molecular dynamics model embedded in a set of online learning activities with middle and high school students in 10 classrooms. The studies indicate…

  9. European Coordinating Activities Concerning Local-Scale Regulatory Models

    DEFF Research Database (Denmark)

    Olesen, H. R.

    1994-01-01

    Proceedings of the Twentieth NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held November 29 - December 3, 1993, in Valencia, Spain.

  10. Probabilistic models of population evolution scaling limits, genealogies and interactions

    CERN Document Server

    Pardoux, Étienne

    2016-01-01

    This expository book presents the mathematical description of evolutionary models of populations subject to interactions (e.g. competition) within the population. The author includes both models of finite populations, and limiting models as the size of the population tends to infinity. The size of the population is described as a random function of time and of the initial population (the ancestors at time 0). The genealogical tree of such a population is given. Most models imply that the population is bound to go extinct in finite time. It is explained when the interaction is strong enough so that the extinction time remains finite, when the ancestral population at time 0 goes to infinity. The material could be used for teaching stochastic processes, together with their applications. Étienne Pardoux is Professor at Aix-Marseille University, working in the field of Stochastic Analysis, stochastic partial differential equations, and probabilistic models in evolutionary biology and population genetics. He obtai...

  11. Extending the scope of models for large-scale structure formation in the Universe

    CERN Document Server

    Buchert, T; Pérez-Mercader, J; Buchert, Thomas; Dominguez, Alvaro; Perez-Mercader, Juan

    1999-01-01

    We propose a phenomenological generalization of models of large-scale structure formation in the Universe by gravitational instability in two ways: we include pressure forces to model multi-streaming, and noise to model fluctuations due to neglected short-scale physical processes. We show that pressure gives rise to a viscous-like force of the same character as the one introduced in the "adhesion model", while noise leads to a roughening of the density field, yielding a scaling behavior of its correlations.

  12. Universal model of individual and population mobility on diverse spatial scales.

    Science.gov (United States)

    Yan, Xiao-Yong; Wang, Wen-Xu; Gao, Zi-You; Lai, Ying-Cheng

    2017-11-21

    Studies of human mobility in the past decade revealed a number of general scaling laws. However, reproducing the scaling behaviors quantitatively at both the individual and population levels simultaneously remains an outstanding problem. Moreover, recent evidence suggests that spatial scales have a significant effect on human mobility, raising the need for a universal model suited to human mobility at different levels and spatial scales. Here we develop a general model by combining memory effects and population-induced competition to enable accurate prediction of human mobility based on population distribution only. A variety of individual and collective mobility patterns, such as scaling behaviors and trajectory motifs, are accurately predicted for different countries and cities of diverse spatial scales. Our model establishes a universal underlying mechanism capable of explaining a variety of human mobility behaviors, and has significant applications for understanding many dynamical processes associated with human mobility.

  13. Genome-Scale Model Reveals Metabolic Basis of Biomass Partitioning in a Model Diatom.

    Directory of Open Access Journals (Sweden)

    Jennifer Levering

    Diatoms are eukaryotic microalgae that contain genes from various sources, including bacteria and the secondary endosymbiotic host. Due to this unique combination of genes, diatoms are taxonomically and functionally distinct from other algae and vascular plants and confer novel metabolic capabilities. Based on the genome annotation, we performed a genome-scale metabolic network reconstruction for the marine diatom Phaeodactylum tricornutum. Due to their endosymbiotic origin, diatoms possess a complex chloroplast structure which complicates the prediction of subcellular protein localization. Based on previous work we implemented a pipeline that exploits a series of bioinformatics tools to predict protein localization. The manually curated reconstructed metabolic network iLB1027_lipid accounts for 1,027 genes associated with 4,456 reactions and 2,172 metabolites distributed across six compartments. To constrain the genome-scale model, we determined the organism specific biomass composition in terms of lipids, carbohydrates, and proteins using Fourier transform infrared spectrometry. Our simulations indicate the presence of a yet unknown glutamine-ornithine shunt that could be used to transfer reducing equivalents generated by photosynthesis to the mitochondria. The model reflects the known biochemical composition of P. tricornutum in defined culture conditions and enables metabolic engineering strategies to improve the use of P. tricornutum for biotechnological applications.

  14. Genome-Scale Model Reveals Metabolic Basis of Biomass Partitioning in a Model Diatom.

    Science.gov (United States)

    Levering, Jennifer; Broddrick, Jared; Dupont, Christopher L; Peers, Graham; Beeri, Karen; Mayers, Joshua; Gallina, Alessandra A; Allen, Andrew E; Palsson, Bernhard O; Zengler, Karsten

    2016-01-01

    Diatoms are eukaryotic microalgae that contain genes from various sources, including bacteria and the secondary endosymbiotic host. Due to this unique combination of genes, diatoms are taxonomically and functionally distinct from other algae and vascular plants and confer novel metabolic capabilities. Based on the genome annotation, we performed a genome-scale metabolic network reconstruction for the marine diatom Phaeodactylum tricornutum. Due to their endosymbiotic origin, diatoms possess a complex chloroplast structure which complicates the prediction of subcellular protein localization. Based on previous work we implemented a pipeline that exploits a series of bioinformatics tools to predict protein localization. The manually curated reconstructed metabolic network iLB1027_lipid accounts for 1,027 genes associated with 4,456 reactions and 2,172 metabolites distributed across six compartments. To constrain the genome-scale model, we determined the organism specific biomass composition in terms of lipids, carbohydrates, and proteins using Fourier transform infrared spectrometry. Our simulations indicate the presence of a yet unknown glutamine-ornithine shunt that could be used to transfer reducing equivalents generated by photosynthesis to the mitochondria. The model reflects the known biochemical composition of P. tricornutum in defined culture conditions and enables metabolic engineering strategies to improve the use of P. tricornutum for biotechnological applications.
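    The steady-state mass-balance constraint S·v = 0 at the heart of genome-scale reconstructions like iLB1027_lipid can be shown on a toy network (hypothetical metabolites and fluxes, vastly smaller than the real model):

```python
# Toy stoichiometric matrix S (rows: metabolites A, B, C; columns:
# reactions r0..r3) for a hypothetical linear pathway ending in a
# biomass drain -- the same S @ v = 0 constraint genome-scale models use.
S = [
    [ 1, -1,  0,  0],   # A: produced by r0, consumed by r1
    [ 0,  1, -1,  0],   # B: produced by r1, consumed by r2
    [ 0,  0,  1, -1],   # C: produced by r2, drained by r3 (biomass)
]

def mass_balance(S, v):
    """Return S.v; every entry is zero at metabolic steady state."""
    return [sum(s_ij * v_j for s_ij, v_j in zip(row, v)) for row in S]

v = [2.0, 2.0, 2.0, 2.0]        # a feasible steady-state flux vector
residual = mass_balance(S, v)    # all zeros: mass is balanced
```

    Flux balance analysis then searches, within this null space and measured constraints such as the biomass composition, for flux vectors that maximize an objective like growth.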

  15. Modelling soil carbon movement by erosion over large scales and long time periods

    Science.gov (United States)

    Quinton, John; Davies, Jessica; Tipping, Ed

    2014-05-01

    Agricultural intensification accelerates physical erosion rates and the transport of carbon within the landscape. In order to improve understanding of how past, present and future anthropogenic land-use change has and will influence carbon and nutrient cycling, it is necessary to develop quantitative tools that can predict soil erosion and carbon movement at large temporal and spatial scales that are consistent with the time constants of biogeochemical processes and the spatial scales of land-use change and natural resources. However, representing erosion and its impact on the carbon cycle over large spatial scales and long time periods is challenging. Erosion and sediment transport processes operate at multiple spatial and temporal scales, with splash erosion dominating at the sub-plot scale and occurring within seconds, up to gully formation operating at field-to-catchment scales over days to months. In addition, most erosion observations are made at the experimental plot scale, where fine time scales and detailed processes dominate. This is coupled with complexities associated with carbon detachment and decomposition, and uncertainties surrounding carbon burial rates and stability - all of which occur over widely different temporal and spatial scales. As such, these data cannot be simply scaled to inform erosion and carbon representation at the regional scale, where topography, vegetation cover and landscape organisation become more important controls on sediment fluxes. We have developed a simple energy-based regional-scale method of soil erosion modelling, which is integrated into a hydro-biogeochemical model that will simulate carbon, nitrogen and phosphorus pools and fluxes across the UK from the industrial revolution to the present day. The model is driven by overland flow, dynamic vegetation cover, soil properties, and topographic distributions, and predicts sediment production and yield at the 5 km grid scale. In this paper we will introduce the

  16. Modeling Physical Processes at Galactic Scales and Above

    Energy Technology Data Exchange (ETDEWEB)

    Gnedin, Nickolay Y. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2014-12-16

    What should these lectures be? The subject is so broad that many books can be written about it. I decided to prepare these lectures as if I were teaching my own graduate student. Given my research interests, I selected what the student would need to know to be able to discuss science with me and to work on joint research projects. So, the story presented below is both personal and incomplete, but it does cover several subjects that are poorly represented in the existing textbooks (if at all). Some of the topics I focus on below are closely connected, others are disjoint, and some are just side detours on specific technical questions. There is an overarching theme, however. Our goal is to follow the cosmic gas from large scales, low densities, and (relatively) simple physics to progressively smaller scales, higher densities, closer relation to galaxies, and more complex and uncertain physics. We follow a "yellow brick road" from the gas well beyond any galaxy confines to the actual sites of star formation and stellar feedback. On the way we will stop at some places for a tour and run without looking back through some others. So, the road will be uneven. The organization of the material is as follows: physics of the intergalactic medium, from intergalactic medium to circumgalactic medium, interstellar medium: gas in galaxies, star formation, and stellar feedback.

  17. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real-world social network follows the small-world phenomenon, which indicates that any two social entities can be reached in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, but when large-scale social networks are considered, these algorithms are observed to take considerably longer to run. In this work, with the objective of improving the efficiency of such algorithms, a parallel programming framework, Map-Reduce, has been used to uncover the hidden communities in social networks. The proposed approach has been compared with standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
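
The intuition behind random-walk community detection is that short walks started from nodes in the same community visit similar neighborhoods. A minimal sketch (a toy two-clique graph and a trivial two-seed split, not the paper's Map-Reduce algorithm):

```python
import numpy as np

# Two 4-node cliques (nodes 0-3 and 4-7) joined by one bridge edge.
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0  # bridge

P = A / A.sum(axis=1, keepdims=True)  # random-walk transition matrix
Pt = np.linalg.matrix_power(P, 3)     # row n = 3-step walk distribution from n

def dist(u, v):
    # nodes with similar walk profiles are likely in the same community
    return np.linalg.norm(Pt[u] - Pt[v])

# trivial split seeded at one node from each clique
labels = [0 if dist(n, 0) < dist(n, 7) else 1 for n in range(8)]
print(labels)  # the two cliques separate cleanly
```

Real algorithms (e.g. Walktrap-style agglomeration) replace the fixed seeds with hierarchical merging of walk profiles; the Map-Reduce contribution of the paper is parallelizing this over large graphs.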

  18. Model Reduction Based on a Numerical Length Scale Analysis

    Science.gov (United States)

    Winkler, Niklas; Fuchs, Laszlo

    For the time being, the required computational cost to solve the 3D time dependent flow prevents the use of such methods for internal flows at high Reynolds number in complex geometries. In this work we present a method based on a numerical length scale analysis to get a rational reduction of the full 3D governing equations for turbulent pipe flows. The length scale analysis quantifies the terms of the governing equations after changing the coordinate system into a curvilinear coordinate system with one coordinate aligned with the flow path. By retaining the most important terms or neglecting the (significantly) smallest terms, different reductions may be attained. The results for a double bent pipe, used to illustrate the approach, show that the most significant component of the viscous terms is the normal component. The convective terms are all important. The normal component is significant in the bends of the pipe due to centrifugal forces, while the spanwise component is most significant after the second bend due to a swirling motion.

  19. Nonlinear Synapses for Large-Scale Models: An Efficient Representation Enables Complex Synapse Dynamics Modeling in Large-Scale Simulations

    Directory of Open Access Journals (Sweden)

    Eric eHu

    2015-09-01

    Full Text Available Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spike or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
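
A discrete Volterra-series input-output model can be sketched in a few lines. The kernels below are made-up exponentials (the paper fits kernels to a mechanistic glutamatergic synapse and goes to third order); only the second-order structure is shown:

```python
import numpy as np

M = 20                                   # kernel memory (time steps), assumed
tau = np.arange(M)
k1 = 0.8 * np.exp(-tau / 5.0)                                # 1st-order kernel
k2 = -0.05 * np.exp(-(tau[:, None] + tau[None, :]) / 8.0)    # 2nd-order kernel

def volterra_response(x):
    """y[n] = sum_i k1[i] x[n-i] + sum_{i,j} k2[i,j] x[n-i] x[n-j]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        past = x[max(0, n - M + 1): n + 1][::-1]   # x[n], x[n-1], ...
        past = np.pad(past, (0, M - len(past)))
        y[n] = k1 @ past + past @ k2 @ past
    return y

spikes = np.zeros(100)
spikes[[10, 12, 50]] = 1.0               # input spike train
y = volterra_response(spikes)
# The 2nd-order term makes the response to the 10+12 spike pair differ
# from the sum of the responses to each spike alone (paired-pulse nonlinearity).
```

This is what "capturing nonlinear dynamics beyond simple exponentials" means operationally: cross-terms between past inputs model facilitation or depression that a linear filter cannot.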

  20. Density-temperature scaling of the fragility in a model glass-former

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Sengupta, Shiladitya; Sastry, Srikanth

    2013-01-01

    Dynamical quantities, e.g. diffusivity and relaxation time, for some glass-formers may depend on density and temperature through a specific combination, rather than independently, allowing the representation of data over ranges of density and temperature as a function of a single scaling variable. Such a scaling, referred to as density-temperature (DT) scaling, is exact for liquids with inverse power law (IPL) interactions but has also been found to be approximately valid in many non-IPL liquids. We have analyzed the consequences of DT scaling on the density dependence of the fragility in a model glass-former. We find the density dependence of kinetic fragility to be weak, and show that it can be understood in terms of DT scaling and deviations of DT scaling at low densities. We also show that the Adam-Gibbs relation exhibits DT scaling and the scaling exponent computed from the density dependence…
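
DT scaling says dynamics depend on density rho and temperature T only through the single variable rho^gamma / T. A minimal synthetic illustration (the exponent gamma = 4, i.e. an n = 12 IPL with gamma = n/3, and the functional form below are assumptions, not the paper's data):

```python
import numpy as np

gamma = 4.0  # assumed: gamma = n/3 for an r^(-12) inverse-power-law potential

def relax_time(rho, T):
    # any function of the single variable rho**gamma / T obeys DT scaling
    return np.exp(2.0 * rho**gamma / T)

# Two state points with the same Gamma = rho**gamma / T collapse onto the
# same relaxation time, even though (rho, T) differ:
t1 = relax_time(1.0, 1.0)          # Gamma = 1
t2 = relax_time(1.2, 1.2**gamma)   # Gamma = 1 as well
print(t1, t2)
```

For real non-IPL liquids the collapse is approximate, and testing how well it holds (and where it breaks down at low density) is exactly the fragility analysis the abstract describes.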

  1. Simple neoclassical point model for transport and scaling in EBT

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, C.L.; Jaeger, E.F.; Spong, D.A.; Guest, G.E.; Krall, N.A.; McBride, J.B.; Stuart, G.W.

    1977-04-01

    A simple neoclassical point model is presented for the ELMO Bumpy Torus experiment. Solutions for steady state are derived. Comparison with experimental observations is made and reasonable agreement is obtained.

  2. Multi-length Scale Material Model Development for Armorgrade Composites

    Science.gov (United States)

    2014-05-02

    Enriched Continuum-Level Material Model for Kevlar®-Fiber-Reinforced Polymer-Matrix Composites, Journal of Materials Engineering and Performance, (03...; Fiber-Level Modeling of Dynamic Strength of Kevlar® KM2 Ballistic Fabric, Journal of Materials Engineering and Performance, (07 2011): 0. doi: 10.1007...; high specific-strength, high specific-stiffness p-phenylene terephthalamide (PPTA) polymeric fiber/filament (e.g. Kevlar®, Twaron®, etc.) based

  3. Numerical modeling of aluminium foam on two scales

    Czech Academy of Sciences Publication Activity Database

    Němeček, J.; Denk, F.; Zlámal, Petr

    2015-01-01

    Roč. 267, September (2015), s. 506-516 ISSN 0096-3003 R&D Projects: GA ČR(CZ) GAP105/12/0824 Institutional support: RVO:68378297 Keywords : closed-cell aluminium foam * Alporas * multiscale modeling * homogenization * FFT * finite element modeling Subject RIV: JI - Composite Materials Impact factor: 1.345, year: 2015 http://www.sciencedirect.com/science/article/pii/S0096300315001162

  4. Large scale structures and the cubic galileon model

    CERN Document Server

    Bhattacharya, Sourav; Tomaras, Theodore N

    2015-01-01

    The maximum size of a bound cosmic structure is computed perturbatively as a function of its mass in the framework of the cubic galileon, proposed recently to model the dark energy of our Universe. Comparison of our results with observations constrains the matter-galileon coupling of the model to $0.03 \lesssim \alpha \lesssim 0.17$, thus improving previous bounds based solely on solar system physics.

  5. Modelling expected train passenger delays on large scale railway networks

    DEFF Research Database (Denmark)

    Landex, Alex; Nielsen, Otto Anker

    2006-01-01

    Forecasts of regularity for railway systems have traditionally – if at all – been computed for trains, not for passengers. Relatively recently it has become possible to model and evaluate the actual passenger delays by a passenger regularity model for the operation already carried out. First the paper describes the passenger regularity model used to calculate passenger delays of the Copenhagen suburban rail network the previous day. Secondly, the paper describes how it is possible to estimate future passenger delays by combining the passenger regularity model with railway simulation software and to compare future scenarios. In this way it is possible to estimate the network effects of the passengers and to identify critical stations or sections in the railway network for further investigation or optimization.

  6. Linking Fine-Scale Observations and Model Output with Imagery at Multiple Scales

    Science.gov (United States)

    Sadler, J.; Walthall, C. L.

    2014-12-01

    The development and implementation of a system for seasonal worldwide agricultural yield estimates is underway with the international Group on Earth Observations GeoGLAM project. GeoGLAM includes a research component to continually improve and validate its algorithms. There is a history of field measurement campaigns going back decades to draw upon for ways of linking surface measurements and model results with satellite observations. Ground-based, in-situ measurements collected by interdisciplinary teams include yields, model inputs, and factors affecting scene radiation. Data that are comparable across space and time, with careful attention to calibration, are essential for the development and validation of agricultural applications of remote sensing. Data management to ensure stewardship, availability, and accessibility of the data is best accomplished when considered an integral part of the research. The expense and logistical challenges of field measurement campaigns can be cost-prohibitive, and because of short funding cycles for research, access to consistent, stable study sites can be lost. Using dedicated staff to collect baseline data needed by multiple investigators, and conducting measurement campaigns within existing measurement networks such as the USDA Long Term Agroecosystem Research network, can fulfill these needs and ensure long-term access to study sites.

  7. Advanced modeling to accelerate the scale up of carbon capture technologies

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C.; Sun, Xin; Storlie, Curtis B.; Bhattacharyya, Debangsu

    2015-06-01

    In order to help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models to reduce the time required to develop and scale-up new carbon capture technologies. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.

  8. Simulation of large-scale rule-based models

    Energy Technology Data Exchange (ETDEWEB)

    Hlavacek, William S [Los Alamos National Laboratory; Monnie, Michael I [Los Alamos National Laboratory; Colvin, Joshua [NON LANL; Faseder, James [NON LANL

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine whether a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
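
The null-event idea can be sketched independently of DYNSTOC itself: each step picks a molecule at random and fires its reaction with probability rate/max_rate; otherwise the step is a "null event" and only time advances. A toy unimolecular system (A -> B), purely illustrative of the rejection scheme, not of BNGL rule evaluation:

```python
import random

random.seed(1)
k, max_rate = 0.3, 1.0            # reaction rate per A molecule; rate ceiling
state = {"A": 1000, "B": 0}
n_total = state["A"] + state["B"]  # conserved here
t, dt = 0.0, 1.0 / (max_rate * n_total)  # fixed time increment per step

for _ in range(200000):
    if state["A"] == 0:
        break
    # pick one molecule uniformly at random; only A molecules can react
    if random.random() < state["A"] / n_total:
        if random.random() < k / max_rate:   # fire; else this is a null event
            state["A"] -= 1
            state["B"] += 1
    t += dt

print(state, round(t, 3))
```

The payoff in a rule-based setting is that the per-step test ("does this randomly chosen set of molecules match any rule?") never requires enumerating the full reaction network in advance.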

  9. Drift Scale Modeling: Study of Unsaturated Flow into a Drift Using a Stochastic Continuum Model

    Energy Technology Data Exchange (ETDEWEB)

    Birkholzer, J.T.; Tsang, C.F.; Tsang, Y.W.; Wang, J.S

    1996-09-01

    Unsaturated flow in heterogeneous fractured porous rock was simulated using a stochastic continuum model (SCM). In this model, both the more conductive fractures and the less permeable matrix are generated within the framework of a single-continuum stochastic approach based on non-parametric indicator statistics. High-permeability fracture zones are distinguished from low-permeability matrix zones in that they are assigned a long-range correlation structure in prescribed directions. The SCM was applied to study small-scale flow in the vicinity of an access tunnel currently being drilled in the unsaturated fractured tuff formations at Yucca Mountain, Nevada. Extensive underground testing is underway in this tunnel to investigate the suitability of Yucca Mountain as an underground nuclear waste repository. Different flow scenarios were studied in the present paper, considering the flow conditions before and after tunnel emplacement, and assuming both steady-state net infiltration and episodic pulse infiltration. Although the capability of the stochastic continuum model has not yet been fully explored, it has been demonstrated that the SCM is a good alternative model capable of describing heterogeneous flow processes in unsaturated fractured tuff at Yucca Mountain.

  10. Small scale water recycling systems--risk assessment and modelling.

    Science.gov (United States)

    Diaper, C; Dixon, A; Bulier, D; Fewkes, A; Parsons, S A; Strathern, M; Stephenson, T; Strutt, J

    2001-01-01

    This paper aims to use quantitative risk analysis, risk modelling and simulation modelling tools to assess the performance of a proprietary single house grey water recycling system. A preliminary Hazard and Operability study (HAZOP) identified the main hazards, both health related and economic, associated with installing the recycling system in a domestic environment. The health related consequences of system failure were associated with the presence of increased concentrations of micro-organisms at the point of use, due to failure of the disinfection system and/or the pump. The risk model was used to assess the increase in the probability of infection for a particular genus of micro-organism, Salmonella spp, during disinfection failure. The increase in the number of cases of infection above a base rate rose from 0.001% during normal operation, to 4% for a recycling system with no disinfection. The simulation model was used to examine the possible effects of pump failure. The model indicated that the anaerobic COD release rate in the system storage tank increases over time and dissolved oxygen decreases during this failure mode. These conditions are likely to result in odour problems.

  11. From points to patterns - Transferring point scale groundwater measurements to catchment scale response patterns using time series modeling

    Science.gov (United States)

    Rinderer, M.; McGlynn, B. L.; van Meerveld, I. H. J.

    2015-12-01

    Detailed groundwater measurements across a catchment can provide information on subsurface stormflow generation and hydrologic connectivity of hillslopes to the stream network. However, groundwater dynamics can be highly variable in space and time, especially in steep headwater catchments. Prediction of groundwater response patterns at non-monitored sites requires transferring point scale information to the catchment scale through analysis of continuous groundwater level time series and their relationships to covariates such as topographic indices or landscape position. We applied time series analysis to a 4 year dataset of continuous groundwater level data for 51 wells distributed across a 20 ha pre-alpine headwater catchment in Switzerland to address the following questions: 1) Is the similarity or difference between the groundwater time series related to landscape position? 2) How does the relationship between groundwater dynamics and landscape position change across long (seasonal) and shorter (event) time scales and varying antecedent wetness conditions? 3) How can time series modeling be used to predict groundwater responses at non-monitored sites? We employed hierarchical clustering of the observed groundwater time series using both dynamic time warping and correlation based distance matrices. Based on the common site characteristics of the members of each cluster, the time series models were transferred to all non-monitored sites. This categorical approach provided maps of spatio-temporal groundwater dynamics across the entire catchment. We further developed a continuous approach based on process-based hydrological modeling and water table dynamic similarity. We suggest that continuous measurements at representative points and subsequent time series analysis can shed light into groundwater dynamics at the landscape scale and provide new insights into space-time patterns of hydrologic connectivity and streamflow generation.
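
The clustering step the abstract describes (grouping groundwater-level time series by a correlation-based distance) can be sketched with SciPy's hierarchical clustering. The synthetic sine series below stand in for the 51 observed wells; they are an assumption for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
fast = [np.sin(3 * t) + 0.1 * rng.standard_normal(200) for _ in range(5)]
slow = [np.sin(t) + 0.1 * rng.standard_normal(200) for _ in range(5)]
series = np.array(fast + slow)   # 10 synthetic "wells"

# correlation distance: d = 1 - Pearson r between series
dist = 1.0 - np.corrcoef(series)
# linkage expects the condensed (upper-triangle) distance vector
iu = np.triu_indices(len(series), k=1)
Z = linkage(dist[iu], method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # "fast-responding" wells separate from "slow" ones
```

Each resulting cluster can then be characterized by the shared site attributes of its members (topographic index, landscape position), which is what allows the response type to be transferred to non-monitored sites.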

  12. Scaling between periodic Anderson and Kondo lattice models

    Science.gov (United States)

    Dong, R.; Otsuki, J.; Savrasov, S. Y.

    2013-04-01

    The continuous-time quantum Monte Carlo method combined with dynamical mean field theory is used to calculate both the periodic Anderson model (PAM) and the Kondo lattice model (KLM). Different parameter sets of the two models are connected by the Schrieffer-Wolff transformation. For degeneracy N=2, a special particle-hole symmetric case of the PAM at half filling, which always fixes one electron per impurity site, is compared with the results of the KLM. We find a good mapping between the PAM and the KLM in the limit of large on-site Hubbard interaction U for different properties such as the self-energy, quasiparticle residue and susceptibility. This allows us to extract quasiparticle mass renormalizations for the f electrons directly from the KLM. The method is further applied to a higher-degeneracy case and to the realistic heavy-fermion system CeRhIn5, for which the estimated Sommerfeld coefficient is shown to be close to the experimental value.

  13. Validation of Simulation Model for Full Scale Wave Simulator and Discrete Fluid Power PTO System

    DEFF Research Database (Denmark)

    Hansen, Anders Hedegaard; Pedersen, Henrik C.; Hansen, Rico Hjerm

    2014-01-01

    In controller development for large scale machinery a good simulation model may serve as a time and money saving factor as well as a safety precaution. Having good models enables the developer to design and test control strategies in a safe and possibly less time consuming environment. For applicable control strategies to take form in a simulation environment, the model must represent the real system with reasonable accuracy. The current paper presents a simulation model for a full scale wave simulator and a discrete fluid power Power Take Off (PTO) system. Good correlation is seen between the simulation model and the physical machine. Hence, this model may serve as a solid basis for model based controller development and for scaling the PTO system to a full wave energy converter.

  14. Canonical fitness model for simple scale-free graphs

    OpenAIRE

    Flegel, F.; Sokolov, I. M.

    2012-01-01

    We consider a fitness model assumed to generate simple graphs with a power-law heavy-tailed degree sequence: P(k) \propto k^{-1-\alpha} with 0 < \alpha < 1, in which the corresponding distributions do not possess a mean. We discuss the situations in which the model is used to produce a multigraph and examine what happens if the multiple edges are merged into a single one and thus a simple graph is built. We give the relation between the (normalized) fitness parameter r and the expected degree …

  15. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    Directory of Open Access Journals (Sweden)

    Tiago P. Peixoto

    2015-03-01

    Full Text Available The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the better-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.

  16. Manufacturing and design of the offshore structure Froude scale model related to basin restrictions

    Science.gov (United States)

    Scurtu, I. C.

    2015-11-01

    Manufacturing steps for a modern three-column semi-submersible structure are presented using CFD/CAE software and actual Froude-scaled model testing. The three-column offshore structure is part of the Wind Float Project, already realized as a prototype for wind energy extraction in water depths of more than 40 meters; the present model does not include the wind turbine. The model is fitted with heave plates to reduce heave motion, so that it can be compared with the case without heave plates; the heave plates are part of the Froude-scale model. Using a smaller model produces smaller heave motions, and this affects predictions of the vertical movement of the three-column offshore structure in a real sea. The Froude criterion is used for the time, speed and acceleration scales. The scale model is manufactured from steel and fiberglass, and all parts are analyzed in software in order to obtain the smallest stress in the connections inside the model. The model mass was restricted by the scale dimensions, and the vertical position of the centre of gravity was also controlled during the manufacturing and design of the Froude-scale offshore structure. All conditions must converge in model manufacturing and design in order to obtain results that compare well with real sea states and heave-motion data.
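
The Froude scaling relations referenced in the abstract follow from equating the Froude number of model and prototype: with geometric scale ratio lambda, time and velocity scale as sqrt(lambda), accelerations are unscaled, and mass/force scale as lambda^3 (same fluid and gravity assumed). A quick sketch, with lambda = 50 as an assumed example rather than the paper's value:

```python
import math

lam = 50.0  # assumed geometric scale ratio (full-scale / model)

# Froude similarity: Fr = U / sqrt(g * L) equal at both scales, same g and fluid
scale = {
    "length":       lam,
    "time":         math.sqrt(lam),
    "velocity":     math.sqrt(lam),
    "acceleration": 1.0,          # accelerations are identical at both scales
    "mass":         lam**3,       # same fluid density -> mass ~ volume
    "force":        lam**3,       # F ~ rho * g * L^3
}

# e.g. a 10 s full-scale wave period is reproduced in the basin as:
model_period = 10.0 / scale["time"]
print(f"{model_period:.3f} s")
```

This is why the abstract stresses mass and centre-of-gravity control: both must be reproduced at lambda^3 and lambda respectively for the model's heave response to transfer to full scale.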

  17. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    CERN Document Server

    Silvis, Maurits H; Verstappen, Roel

    2016-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is p...

  18. Catchment-scale hydrological modeling and data assimilation

    NARCIS (Netherlands)

    Troch, P.A.A.; Paniconi, C.; McLaughlin, D.

    2003-01-01

    This special issue of Advances in Water Resources presents recent progress in the application of DA (data assimilation) for distributed hydrological modeling and in the use of in situ and remote sensing datasets for hydrological analysis and parameter estimation. The papers were presented at the De

  19. Energy-aware semantic modeling in large scale infrastructures

    NARCIS (Netherlands)

    Zhu, H.; van der Veldt, K.; Grosso, P.; Zhao, Z.; Liao, X.; de Laat, C.

    2012-01-01

    Including the energy profile of the computing infrastructure in the decision process for scheduling computing tasks and allocating resources is essential to improve the system energy efficiency. However, the lack of an effective model of the infrastructure energy information makes it difficult for

  20. A model of socioemotional flexibility at three time scales

    NARCIS (Netherlands)

    Hollenstein, T.P.; Lichtwarck-Aschoff, A.; Potworowski, G.

    2013-01-01

    The construct of flexibility has been a focus for research and theory for over 100 years. However, flexibility has not been consistently or adequately defined, leading to obstacles in the interpretation of past research and progress toward enhanced theory. We present a model of socioemotional

  1. Dynamic modelling of heavy metals - time scales and target loads

    NARCIS (Netherlands)

    Posch, M.; Vries, de W.

    2009-01-01

    Over the past decade steady-state methods have been developed to assess critical loads of metals avoiding long-term risks in view of food quality and eco-toxicological effects on organisms in soils and surface waters. However, dynamic models are needed to estimate the times involved in attaining a

  2. Large scale semantic 3D modeling of the urban landscape

    NARCIS (Netherlands)

    Esteban Lopez, I.

    2012-01-01

    Modeling and understanding large urban areas is becoming an important topic in a world were everything is being digitized. A semantic and accurate 3D representation of a city can be used in many applications such as event and security planning and management, assisted navigation, autonomous

  3. Complex Automata: Multi-scale Modeling with Coupled Cellular Automata

    NARCIS (Netherlands)

    Hoekstra, A.G.; Caiazzo, A.; Lorenz, E.; Falcone, J.-L.; Chopard, B.; Hoekstra, A.G.; Kroc, J.; Sloot, P.M.A.

    2010-01-01

    Cellular Automata (CA) are generally acknowledged to be a powerful way to describe and model natural phenomena [1-3]. There are even tempting claims that nature itself is one big (quantum) information processing system, e.g. [4], and that CA may actually be nature’s way to do this processing [5-7].

  4. Scale-free random graphs and Potts model

    Indian Academy of Sciences (India)

    real-world networks such as the world-wide web, the Internet, the coauthorship, the protein interaction networks and so on display power-law behaviors in the degree ... in this paper, we study the evolution of SF random graphs from the perspective of equilibrium statistical physics. The formulation in terms of the spin model ...

  5. Massive-scale tree modelling from TLS data

    NARCIS (Netherlands)

    Raumonen, P.; Casella, E.; Calders, K.; Murphy, S.; Åkerblom, M.; Kaasalainen, M.

    2015-01-01

    This paper presents a method for reconstructing automatically the quantitative structure model of every tree in a forest plot from terrestrial laser scanner data. A new feature is the automatic extraction of individual trees from the point cloud. The method is tested with a 30-m diameter English oak

  6. On Spatial Resolution in Habitat Models: Can Small-scale Forest Structure Explain Capercaillie Numbers?

    Directory of Open Access Journals (Sweden)

    Ilse Storch

    2002-06-01

    Full Text Available This paper explores the effects of spatial resolution on the performance and applicability of habitat models in wildlife management and conservation. A Habitat Suitability Index (HSI) model for the Capercaillie (Tetrao urogallus) in the Bavarian Alps, Germany, is presented. The model was exclusively built on non-spatial, small-scale variables of forest structure and without any consideration of landscape patterns. The main goal was to assess whether a HSI model developed from small-scale habitat preferences can explain differences in population abundance at larger scales. To validate the model, habitat variables and indirect sign of Capercaillie use (such as feathers or feces) were mapped in six study areas based on a total of 2901 sample plots of 20 m radius (for habitat variables) and 5 m radius (for Capercaillie sign). First, the model's representation of Capercaillie habitat preferences was assessed. Habitat selection, as expressed by Ivlev's electivity index, was closely related to HSI scores, increased from poor to excellent habitat suitability, and was consistent across all study areas. Then, habitat use was related to HSI scores at different spatial scales. Capercaillie use was best predicted from HSI scores at the small scale. Lowering the spatial resolution of the model stepwise to 36-ha, 100-ha, 400-ha, and 2000-ha areas and relating Capercaillie use to aggregated HSI scores resulted in a deterioration of fit at larger scales. Most importantly, there were pronounced differences in Capercaillie abundance at the scale of study areas, which could not be explained by the HSI model. The results illustrate that even if a habitat model correctly reflects a species' smaller scale habitat preferences, its potential to predict population abundance at larger scales may remain limited.
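The habitat-selection measure used in this record, Ivlev's electivity index, compares the proportional use of a habitat class with its proportional availability. A minimal sketch; the proportions below are hypothetical, not data from the study:

```python
def ivlev_electivity(use, availability):
    """Ivlev's electivity index E = (r - p) / (r + p).

    r: proportional use of a habitat class; p: its proportional availability.
    E ranges from -1 (complete avoidance) to +1 (complete preference).
    """
    if use + availability == 0:
        return 0.0
    return (use - availability) / (use + availability)

# Hypothetical example: a suitability class covers 20% of the area
# but receives 40% of the Capercaillie sign.
e = ivlev_electivity(0.4, 0.2)
print(round(e, 3))  # 0.333 (moderate preference)
```

Positive values across the good-suitability classes, consistent between areas, is the pattern the study reports.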

  7. Experiments to investigate direct containment heating phenomena with scaled models of the Surry Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Blanchat, T.K.; Allen, M.D.; Pilch, M.M. [Sandia National Labs., Albuquerque, NM (United States); Nichols, R.T. [Ktech Corp., Albuquerque, NM (United States)

    1994-06-01

    The Containment Technology Test Facility (CTTF) and the Surtsey Test Facility at Sandia National Laboratories are used to perform scaled experiments that simulate High Pressure Melt Ejection accidents in a nuclear power plant (NPP). These experiments are designed to investigate the effects of direct containment heating (DCH) phenomena on the containment load. High-temperature, chemically reactive melt (thermite) is ejected by high-pressure steam into a scale model of a reactor cavity. Debris is entrained by the steam blowdown into a containment model where specific phenomena, such as the effect of subcompartment structures, prototypic air/steam/hydrogen atmospheres, and hydrogen generation and combustion, can be studied. Four Integral Effects Tests (IETs) have been performed with scale models of the Surry NPP to investigate DCH phenomena. The 1/6th scale Integral Effects Tests (IET-9, IET-10, and IET-11) were conducted in CTTF, which is a 1/6th scale model of the Surry reactor containment building (RCB). The 1/10th scale IET test (IET-12) was performed in the Surtsey vessel, which had been configured as a 1/10th scale Surry RCB. Scale models were constructed in each of the facilities of the Surry structures, including the reactor pressure vessel, reactor support skirt, control rod drive missile shield, biological shield wall, cavity, instrument tunnel, residual heat removal platform and heat exchangers, seal table room and seal table, operating deck, and crane wall. This report describes these experiments and gives the results.

  8. Numerical approach for modelling across scales infusion-based processing of aircraft primary structures

    Science.gov (United States)

    Andriamananjara, K.; Chevalier, L.; Moulin, N.; Bruchon, J.; Liotier, P.-J.; Drapier, S.

    2017-10-01

    This study aims to establish a numerical strategy that takes into account capillary and wetting effects, considered on the macroscopic scale as a discontinuity of pressure at the fluid-gas interface, and as a surface tension force balance at the local scale. The modelling is based on the Brinkman/Darcy and Stokes equations, solved by a stabilized finite element method. Specific numerical methods are implemented to deal with the discontinuity of the pressure field across the flow front. One of the challenges lies in modelling capillary force effects across scales in infusion-based processes, so as to derive scale-up rules for flows at the process scale, because the computational cost of numerical simulations at local scales is not tractable industrially.
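The macroscopic pressure discontinuity at the flow front can be estimated with the Young-Laplace relation for an idealized cylindrical pore. A hedged sketch; the surface tension, contact angle, and pore radius below are illustrative values, not parameters from the study:

```python
import math

def capillary_pressure_Pa(surface_tension_N_per_m, contact_angle_deg, radius_m):
    """Young-Laplace capillary pressure jump across a meniscus in a
    cylindrical pore: dp = 2 * gamma * cos(theta) / r."""
    return (2.0 * surface_tension_N_per_m
            * math.cos(math.radians(contact_angle_deg)) / radius_m)

# Hypothetical resin/fibre values: gamma = 0.035 N/m, theta = 30 deg, r = 10 um
dp = capillary_pressure_Pa(0.035, 30.0, 10e-6)
print(round(dp))  # 6062 Pa
```

Pressure jumps of this order at micrometre pore scales are what make a sharp treatment of the front discontinuity necessary in the macroscopic model.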

  9. Scaling and criticality in a stochastic multi-agent model of a financial market

    Science.gov (United States)

    Lux, Thomas; Marchesi, Michele

    1999-02-01

    Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.
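The mechanism described, interactions generating fat-tailed returns from a well-behaved news process, can be illustrated with a deliberately crude regime-switching sketch. This is not the Lux-Marchesi model itself; the switching probability and regime volatilities below are arbitrary assumptions:

```python
import random
import statistics

def simulate_returns(n=10000, seed=1):
    """Toy sketch (NOT the Lux-Marchesi model): the market switches between
    a calm 'fundamentalist' regime and a volatile 'chartist' regime, and
    returns are Gaussian 'news' shocks amplified by the current regime."""
    rng = random.Random(seed)
    sigma = {"fund": 0.5, "chart": 2.0}
    state = "fund"
    returns = []
    for _ in range(n):
        if rng.random() < 0.05:  # occasional herding-driven regime switch
            state = "chart" if state == "fund" else "fund"
        returns.append(rng.gauss(0.0, sigma[state]))
    return returns

def excess_kurtosis(xs):
    """Excess kurtosis; zero for a Gaussian, positive for fat tails."""
    m = statistics.fmean(xs)
    v = statistics.fmean((x - m) ** 2 for x in xs)
    return statistics.fmean((x - m) ** 4 for x in xs) / v ** 2 - 3.0

r = simulate_returns()
print(excess_kurtosis(r))  # substantially positive despite Gaussian 'news'
```

Mixing two Gaussian volatility regimes already yields clearly positive excess kurtosis; the point of the record above is that in the full model such regimes emerge endogenously from agent interactions rather than being imposed.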

  10. Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model

    DEFF Research Database (Denmark)

    Olivares Hernandez, Roberto

    Based on the stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify the metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae the genome-scale model has been......, translation initiation, translation elongation, translation termination, and mRNA decay. Considering this information from the mechanisms of transcription and translation, we will include these stoichiometric reactions in the genome-scale model for S. cerevisiae to obtain the first...

  11. Modeling of short scale turbulence in the solar wind

    Directory of Open Access Journals (Sweden)

    V. Krishan

    2005-01-01

    Full Text Available The solar wind serves as a laboratory for investigating magnetohydrodynamic turbulence under conditions irreproducible on the terra firma. Here we show that the framework of Hall magnetohydrodynamics (HMHD), which can support three quadratic invariants and allows nonlinear states to depart fundamentally from the Alfvénic, is capable of reproducing in the inertial range the three branches of the observed solar wind magnetic fluctuation spectrum: the Kolmogorov branch f^(-5/3), steepening to f^(-α1) on the high frequency side and flattening to f^(-1) on the low frequency side. These fluctuations are found to be associated with the nonlinear Hall-MHD shear Alfvén waves. The spectrum of the concomitant whistler-type fluctuations is very different from the observed one. Perhaps the relatively stronger damping of the whistler fluctuations may cause their unobservability. The issue of equipartition of energy through the so-called Alfvén ratio acquires a new status through its dependence, now, on the spatial scale.

  12. Neural assembly models derived through nano-scale measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Hongyou; Branda, Catherine; Schiek, Richard Louis; Warrender, Christina E.; Forsythe, James Chris

    2009-09-01

    This report summarizes accomplishments of a three-year project focused on developing technical capabilities for measuring and modeling neuronal processes at the nanoscale. It was successfully demonstrated that nanoprobes could be engineered that were biocompatible, could be biofunctionalized, and responded within the range of voltages typically associated with a neuronal action potential. Furthermore, the Xyce parallel circuit simulator was employed, and models were incorporated for simulating the ion channel and cable properties of neuronal membranes. The ultimate objective of the project had been to employ nanoprobes in vivo, with the nematode C. elegans, and derive a simulation based on the resulting data. Techniques were developed allowing the nanoprobes to be injected into the nematode and the neuronal response recorded. To the authors' knowledge, this is the first occasion on which nanoparticles have been successfully employed as probes for recording neuronal response in an in vivo animal experimental protocol.
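At their simplest, the membrane and cable properties mentioned reduce to a single passive RC compartment. A sketch under assumed round-number parameters (1 nF, 10 MOhm, -65 mV rest); this is a leaky-integrator toy, not the Xyce ion-channel models used in the project:

```python
def passive_membrane(i_inj_nA, t_ms, dt=0.01, cm_nF=1.0, rm_Mohm=10.0,
                     v_rest=-65.0):
    """Forward-Euler integration of one passive compartment:
    Cm * dV/dt = -(V - Vrest)/Rm + I_inj.
    Units: nA * MOhm = mV, and MOhm * nF = ms, so everything stays in mV/ms.
    Returns the membrane potential (mV) after t_ms milliseconds."""
    v = v_rest
    for _ in range(round(t_ms / dt)):
        dv = (-(v - v_rest) / rm_Mohm + i_inj_nA) * dt / cm_nF
        v += dv
    return v

# 1 nA into Rm = 10 MOhm charges the membrane toward Vrest + I*Rm = -55 mV;
# after 5 time constants (tau = Rm*Cm = 10 ms) it is essentially there.
print(round(passive_membrane(1.0, 50.0), 1))  # -55.1
```

The real project models add voltage-dependent ion channels and multi-compartment cable structure on top of this passive core.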

  13. Process-scale modeling of elevated wintertime ozone in Wyoming.

    Energy Technology Data Exchange (ETDEWEB)

    Kotamarthi, V. R.; Holdridge, D. J.; Environmental Science Division

    2007-12-31

    Measurements of meteorological variables and trace gas concentrations, provided by the Wyoming Department of Environmental Quality for Daniel, Jonah, and Boulder Counties in the state of Wyoming, were analyzed for this project. The data indicate that highest ozone concentrations were observed at temperatures of -10 C to 0 C, at low wind speeds of about 5 mph. The median values for nitrogen oxides (NOx) during these episodes ranged between 10 ppbv and 20 ppbv (parts per billion by volume). Measurements of volatile organic compounds (VOCs) during these periods were insufficient for quantitative analysis. The few available VOCs measurements indicated unusually high levels of alkanes and aromatics and low levels of alkenes. In addition, the column ozone concentration during one of the high-ozone episodes was low, on the order of 250 DU (Dobson unit) as compared to a normal column ozone concentration of approximately 300-325 DU during spring for this region. Analysis of this observation was outside the scope of this project. The data analysis reported here was used to establish criteria for making a large number of sensitivity calculations through use of a box photochemical model. Two different VOCs lumping schemes, RACM and SAPRC-98, were used for the calculations. Calculations based on this data analysis indicated that the ozone mixing ratios are sensitive to (a) surface albedo, (b) column ozone, (c) NOx mixing ratios, and (d) available terminal olefins. The RACM model showed a large response to an increase in lumped species containing propane that was not reproduced by the SAPRC scheme, which models propane as a nearly independent species. The rest of the VOCs produced similar changes in ozone in both schemes. In general, if one assumes that measured VOCs are fairly representative of the conditions at these locations, sufficient precursors might be available to produce ozone in the range of 60-80 ppbv under the conditions modeled.

  14. PECASE - Multi-Scale Experiments and Modeling in Wall Turbulence

    Science.gov (United States)

    2014-12-23

    roughness, vibrations, non-alignment of the different sections of the pipe, thermal effects, as well as taking into account the effects not modeled by...flows. In particular, work is ongoing to consider adaptation of the formulation to consider rough-wall, non-Newtonian and compressible flows, and...image velocimetry measurements in turbulent boundary layers. J. Fluid Mech., 541:21-54, 2005. Y. Hwang and C. Cossu. Linear non-normal energy

  15. Improving large-scale groundwater models by considering fossil gradients

    Science.gov (United States)

    Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph

    2017-05-01

    Due to the limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present-day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes from those usually applied in renewable systems. Fossil groundwater is a finite resource and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of these systems show notable hydraulic gradients and a flow towards their discharge areas, even without pumping. As a result, these systems have more discharge than recharge and hence are not in steady state, which makes their modelling, in particular the calibration, very challenging. In this study, we introduce a new calibration approach composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluate the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.
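Step (iii) of the calibration, fitting hydraulic conductivity to a fossil gradient, rests on Darcy's law; in its simplest 1-D form the conductivity follows directly from the discharge and the head drop. The numbers below are illustrative, not UMA calibration values:

```python
def hydraulic_conductivity(discharge_m3_per_d, area_m2, head_drop_m,
                           flow_length_m):
    """Darcy's law Q = K * A * (dh / L), rearranged for the conductivity:
    K = Q * L / (A * dh), in m/d for the units used here."""
    return discharge_m3_per_d * flow_length_m / (area_m2 * head_drop_m)

# Hypothetical aquifer section: 1e5 m3/d of fossil discharge through a
# 2e8 m2 cross-section, with 50 m of head drop over 100 km of flow path.
K = hydraulic_conductivity(1e5, 2e8, 50.0, 1e5)
print(K)  # 1.0 m/d
```

In a pseudo steady-state model this inversion is done by the flow solver rather than by hand, but the information content is the same: observed fossil gradients plus estimated discharge constrain K.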

  16. Synchronization and causality across time-scales of observed and modelled ENSO dynamics

    Science.gov (United States)

    Jajcay, Nikola; Kravtsov, Sergey; Tsonis, Anastasios A.; Paluš, Milan

    2016-04-01

    Phase-phase and phase-amplitude interactions between dynamics on different temporal scales have been observed in ENSO dynamics, captured by the NINO3.4 index, using the approach for identification of cross-scale interactions introduced recently by Paluš [1]. The most pronounced interactions across scales are phase coherence and phase-phase causality, in which the annual cycle influences the dynamics on the quasibiennial scale. The phase of slower phenomena on the scale of 4-6 years influences not only the combination frequencies around the period of one year, but also the phase of the annual cycle and the amplitude of the oscillations in the quasibiennial range. In order to understand these nonlinear phenomena we investigate cross-scale interactions in synthetic, modelled NINO3.4 time series. The models taken into account were a selection of 96 historic runs from the CMIP5 project and two low-dimensional models: the parametric recharge oscillator (PRO) [2], which is a two-dimensional dynamical model, and a data-driven model based on the idea of linear inverse models [3]. The latter is a statistical model, in our setting 25-dimensional. While the two dimensions of the PRO model are not enough to capture all the cross-scale interactions, the results from the data-driven model are more promising and resemble the interactions found in the measured NINO3.4 data set. We believe that a combination of models of different complexity will help to uncover the mechanisms of the cross-scale interactions, which might be the key to better understanding of the irregularities in ENSO dynamics. This study is supported by the Ministry of Education, Youth and Sports of the Czech Republic within the Program KONTAKT II, Project No. LH14001. [1] M. Paluš, Phys. Rev. Lett. 112, 078702 (2014) [2] K. Stein et al., J. Climate, 27, 14 (2014) [3] Kondrashov et al., J. Climate, 18, 21 (2005)

  17. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  18. Monitoring strategies and scale appropriate hydrologic and biogeochemical modelling for natural resource management

    DEFF Research Database (Denmark)

    Bende-Michl, Ulrike; Volk, Martin; Harmel, Daren

    2011-01-01

    This short communication paper presents recommendations for developing scale-appropriate monitoring and modelling strategies to assist decision making in natural resource management (NRM). The ideas presented here were discussed in session (S5), ‘Monitoring strategies and scale......-appropriate hydrologic and biogeochemical modelling for natural resource management’, at the 2008 International Environmental Modelling and Simulation Society conference, Barcelona, Spain. The outcomes of the session and recent international studies exemplify the need for a stronger collaboration...... and communication between researchers and model developers on the one side, and natural resource managers and model users on the other side, to increase knowledge in: 1) the limitations and uncertainties of current monitoring and modelling strategies, 2) scale-dependent linkages between monitoring and modelling...

  19. Core-scale solute transport model selection using Monte Carlo analysis

    CERN Document Server

    Malama, Bwalya; James, Scott C

    2013-01-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...
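Prediction bias, the measure of structural error adopted in this record, is simply the mean residual between observed and modelled breakthrough. A sketch with made-up concentration values, not the Culebra data:

```python
def prediction_bias(observed, predicted):
    """Mean residual (observed - predicted); a value near zero indicates
    little systematic over- or under-prediction, i.e. low structural error."""
    return sum(o - p for o, p in zip(observed, predicted)) / len(observed)

# Hypothetical breakthrough concentrations vs. two candidate models:
obs        = [0.0, 0.2, 0.8, 1.0, 0.6, 0.2]
single_por = [0.0, 0.4, 1.0, 0.9, 0.3, 0.1]  # systematic early/over-prediction
multirate  = [0.0, 0.2, 0.7, 1.0, 0.7, 0.2]  # small, sign-alternating errors
print(abs(prediction_bias(obs, multirate))
      < abs(prediction_bias(obs, single_por)))  # True
```

In the study, this scalar is computed over the null-space Monte Carlo ensemble for each model, so the comparison accounts for parameter uncertainty rather than a single calibrated fit.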

  20. A link between neuroscience and informatics: large-scale modeling of memory processes.

    Science.gov (United States)

    Horwitz, Barry; Smith, Jason F

    2008-04-01

    Utilizing advances in functional neuroimaging and computational neural modeling, neuroscientists have increasingly sought to investigate how distributed networks, composed of functionally defined subregions, combine to produce cognition. Large-scale, biologically realistic neural models, which integrate data from cellular, regional, whole brain, and behavioral sources, delineate specific hypotheses about how these interacting neural populations might carry out high-level cognitive tasks. In this review, we discuss neuroimaging, neural modeling, and the utility of large-scale biologically realistic models using modeling of short-term memory as an example. We present a sketch of the data regarding the neural basis of short-term memory from non-human electrophysiological, computational and neuroimaging perspectives, highlighting the multiple interacting brain regions believed to be involved. Through a review of several efforts, including our own, to combine neural modeling and neuroimaging data, we argue that large scale neural models provide specific advantages in understanding the distributed networks underlying cognition and behavior.

  1. A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation

    Science.gov (United States)

    Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.

    2016-12-01

    Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.

  2. Data constraints on global millennial scale geomagnetic field models

    Science.gov (United States)

    Korte, M. C.; Brown, M. C.; Frank, U.

    2012-12-01

    Global spherical harmonic geomagnetic field models are a powerful tool to investigate past field evolution and geodynamo processes. A number of such models covering the past 3 and 10 millennia have been developed over recent years, e.g., CALS3k.4. Resolution and reliability of this kind of inverse models depend crucially on the available data. The distribution of paleo- and archeomagnetic data which are available for global geomagnetic field reconstructions of past millennia is uneven and strongly biased towards the northern hemisphere and Europe in particular. Features seen in spherical harmonic field models in equatorial and southern hemisphere regions often rely strongly on information from only a few paleomagnetic sedimentary records. The quality of the paleomagnetic signal in such records varies widely depending on a number of factors, e.g., environmental conditions. Dating of records is another critical and difficult issue. We have recently obtained two new inclination and intensity records from Ethiopia, a region of previously sparse data coverage. Moreover, we are working on re-assessing some of the earliest published lake sediment records previously included in global field reconstructions, with an emphasis on improved age models. We use preliminary results of our work to study how modifications to the paleomagnetic data and its distribution in time and space influence CALS3k.4. We show this through two examples. 1) The inclusion of new paleomagnetic data from equatorial African sediments indicates a possible recurrence of a structure similar to the present-day intensity minimum known as the South Atlantic Anomaly. However, equatorial data from more westerly longitudes are necessary to define the temporal evolution of this feature. 2) Several sediment records from Asia to Australia produce a long-lasting undulation of the magnetic equator at the core-mantle boundary under southeast Asia. This is tied to the complex evolution of two flux patches under this

  3. Modelling maximal oxygen uptake in athletes: allometric scaling versus ratio-scaling in relation to body mass.

    Science.gov (United States)

    Chia, Michael; Aziz, Abdul Rashid

    2008-04-01

    Maximal oxygen uptake, VO2 peak, among athletes is an important foundation for all training programmes to enhance competition performance. In Singapore, the VO2 peak of athletes is apparently not widely known. There is also controversy in the modelling or scaling of maximal oxygen uptake for differences in body size - the use of ratio-scaling remains common but allometric scaling is gaining acceptance as the method of choice. One hundred fifty-eight male (age, 21.7 +/- 4.9 years; body mass, 64.8 +/- 8.6 kg) and 28 female (age, 21.9 +/- 7.0 years; body mass, 53.0 +/- 7.0 kg) athletes completed a maximal treadmill run to volitional exhaustion to determine VO2 peak. VO2 peak in L/min of female athletes was 67.8% that of male athletes (2.53 +/- 0.29 vs. 3.73 +/- 0.53 L/min), and VO2 peak in mL/kg BM^1.0/min of female athletes was 83.4% that of male athletes (48.4 +/- 7.2 vs. 58.0 +/- 6.9 mL/kg BM^1.0/min). Ratio-scaling of VO2 peak did not create a size-free variable and was unsuitable as a scaling method. Instead, VO2 peak that was independent of the effect of body mass in male and female athletes was best described using two separate, allometrically derived, sex-specific regression equations: VO2 peak = 2.23 BM^0.67 for male athletes and VO2 peak = 2.23 BM^0.24 for female athletes.
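The allometric model VO2 = a * BM^b used in this record can be fitted by ordinary least squares on log-transformed data. A sketch on synthetic data generated exactly from the reported male equation (exponent 0.67), to show the slope is recovered; the body masses are arbitrary:

```python
import math

def allometric_exponent(masses, vo2):
    """Least-squares slope b of log(VO2) = log(a) + b * log(mass),
    i.e. the exponent of the allometric model VO2 = a * mass**b."""
    lx = [math.log(m) for m in masses]
    ly = [math.log(v) for v in vo2]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    sxy = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    sxx = sum((x - mx) ** 2 for x in lx)
    return sxy / sxx

# Synthetic check: data generated exactly from VO2 = 2.23 * BM**0.67
masses = [50.0, 60.0, 70.0, 80.0]
vo2 = [2.23 * m ** 0.67 for m in masses]
print(round(allometric_exponent(masses, vo2), 2))  # 0.67
```

A fitted exponent well below 1 is precisely why ratio-scaling (dividing by BM^1.0) fails to produce a size-free variable: it over-penalizes heavier athletes.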

  4. Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis

    Science.gov (United States)

    Olevsky, Eugene; German, Randall M.

    2012-01-01

    A multi-scale simulation concept taking into account the impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass-transport is directionally modified in the framework of kinetic Monte-Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite element analysis will enable predictions relevant to space-based processing, including size, shape, and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature and time.

  5. Multi-scale modeling of inter-granular fracture in UO2

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tonks, Michael R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, S. Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    A hierarchical multi-scale approach is pursued in this work to investigate the influence of porosity, pore and grain size on the intergranular brittle fracture in UO2. In this approach, molecular dynamics simulations are performed to obtain the fracture properties for different grain boundary types. A phase-field model is then utilized to perform intergranular fracture simulations of representative microstructures with different porosities, pore and grain sizes. In these simulations the grain boundary fracture properties obtained from molecular dynamics simulations are used. The responses from the phase-field fracture simulations are then fitted with a stress-based brittle fracture model usable at the engineering scale. This approach encapsulates three different length and time scales, and allows the development of microstructurally informed engineering scale model from properties evaluated at the atomistic scale.

  6. Application of a Subfilter-Scale Flux Model over the Ocean Using OHATS Field Data

    DEFF Research Database (Denmark)

    Kelly, Mark C.; Wyngaard, John C.; Sullivan, Peter P.

    2009-01-01

    Simple rate equation models for subfilter-scale scalar and momentum fluxes have previously been developed for application in the so-called “terra incognita” of atmospheric simulations, where the model resolution is comparable to the scale of turbulence. The models performed well over land, but only...... the scalar flux model appeared to perform adequately over the ocean. Analysis of data from the Ocean Horizontal Array Turbulence Study (OHATS) reveals a need to account for the moving ocean–air interface in the subfilter stress model. The authors develop simple parameterizations for the effect of surface......

  7. Catchment scale modelling of pesticide fate and transport using a simple parsimonious process-based model

    Science.gov (United States)

    Pullan, Stephanie; Whelan, Mick; Holman, Ian

    2013-04-01

    Pesticides continue to be detected in surface water resources around the world. In the UK, to ensure the safety of drinking water supplies, water companies are required to create drinking water safety plans, which take a catchment risk management approach. Models can be used to predict peak pesticide concentrations in raw surface water supplies; these predictions can then be used in risk assessments. There is therefore a need to model pesticide fate and transport from agricultural land to surface water resources at the catchment scale. We present a simple soil water balance model linked with a pesticide fate and transport model to predict hydrological response and pesticide exposure at the catchment outlet, intended for use in risk assessment of raw drinking water resources. The model considers two soil water stores (a topsoil store and a subsoil store) for each soil type in the catchment. It employs a daily time-step and simulates changes in soil water content, actual evapotranspiration, overland flow, drainflow, lateral throughflow and potential recharge to a groundwater store which contributes to baseflow. The model is semi-lumped (not spatially explicit): calculations are performed for soil type and crop combinations, which are weighted by their proportion within the catchment. The model utilises soil properties from the national soil database and can therefore be applied to any catchment in England and Wales. The pesticide fate model assumes first-order degradation kinetics, a linear sorption isotherm and leaching at the rate of the unsaturated hydraulic conductivity. Following application, the pesticide is assumed to diffuse into the soil and be evenly distributed in the "non-excluded" pore water (pesticides are assumed to be unable to diffuse into the very small pores). Pesticide concentrations and loads to surface water resources are calculated for rainfall events that generate a hydrological response, assuming that a proportion of the most
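
    The two pesticide-fate assumptions named in the abstract, first-order degradation and a linear sorption isotherm, can be sketched numerically. The following is a minimal illustration, not the authors' model; all parameter values and the function name are hypothetical:

```python
import math

def simulate_dissolved_pesticide(days, applied_mg, dt50_days,
                                 kd_l_per_kg, soil_mass_kg, water_l):
    """Daily first-order degradation plus a linear sorption isotherm
    (S = Kd * Cw): the two pesticide-fate assumptions described above."""
    k = math.log(2) / dt50_days          # first-order rate constant (1/day)
    mass = applied_mg                    # total pesticide mass in the store
    dissolved = []
    for _ in range(days):
        mass *= math.exp(-k)             # one day of degradation
        # mass balance: mass = Cw * V_water + Kd * Cw * M_soil
        cw = mass / (water_l + kd_l_per_kg * soil_mass_kg)   # mg/L
        dissolved.append(cw)
    return dissolved

# hypothetical example: 100 mg applied, DT50 = 10 d, Kd = 1 L/kg
conc = simulate_dissolved_pesticide(30, 100.0, 10.0, 1.0, 10.0, 90.0)
```

    The dissolved concentration halves every DT50 days, since sorption only rescales the degradation curve under a linear isotherm.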

  8. Identification of fine scale and landscape scale drivers of urban aboveground carbon stocks using high-resolution modeling and mapping.

    Science.gov (United States)

    Mitchell, Matthew G E; Johansen, Kasper; Maron, Martine; McAlpine, Clive A; Wu, Dan; Rhodes, Jonathan R

    2018-05-01

    Urban areas are sources of land use change and CO2 emissions that contribute to global climate change. Despite this, assessments of urban vegetation carbon stocks often fail to identify important landscape-scale drivers of variation in urban carbon, especially the potential effects of landscape structure variables at different spatial scales. We combined field measurements with Light Detection And Ranging (LiDAR) data to build high-resolution models of woody plant aboveground carbon across the urban portion of Brisbane, Australia, and then identified landscape-scale drivers of these carbon stocks. First, we used LiDAR data to quantify the extent and vertical structure of vegetation across the city at high resolution (5 × 5 m). Next, we paired these data with aboveground carbon measurements at 219 sites to create boosted regression tree models and map aboveground carbon across the city. We then used these maps to determine how spatial variation in land cover/land use and landscape structure affects these carbon stocks. Foliage densities above 5 m height, tree canopy height, and the presence of ground openings had the strongest relationships with aboveground carbon. Using these fine-scale relationships, we estimate that 2.2 ± 0.4 Tg C are stored aboveground in the urban portion of Brisbane, with mean densities of 32.6 ± 5.8 Mg C ha⁻¹ calculated across the entire urban land area, and 110.9 ± 19.7 Mg C ha⁻¹ calculated within treed areas. Predicted carbon densities within treed areas showed strong positive relationships with the proportion of surrounding tree cover and how clumped that tree cover was at both 1 km² and 1 ha resolutions. Our models predict that even dense urban areas with low tree cover can have high carbon densities at fine scales. We conclude that actions and policies aimed at increasing urban carbon should focus on those areas where urban tree cover is most fragmented. Copyright © 2017 Elsevier B.V. All rights reserved.
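
    Boosted regression tree models of the kind used here fit many shallow trees sequentially, each to the residuals of the ensemble so far. A self-contained sketch with depth-1 trees (stumps) on a single hypothetical predictor, e.g. canopy height, illustrates the idea; the study itself used multiple LiDAR-derived predictors and a full implementation, and all data below are invented:

```python
def fit_stump(x, resid):
    """Best single threshold split on x minimising squared error of resid."""
    best = None
    thresholds = sorted(set(x))
    for i in range(len(thresholds) - 1):
        thr = 0.5 * (thresholds[i] + thresholds[i + 1])
        left = [r for xi, r in zip(x, resid) if xi <= thr]
        right = [r for xi, r in zip(x, resid) if xi > thr]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    return best[1:]

def boost(x, y, rounds=60, lr=0.1):
    """Gradient boosting for squared error: each stump fits the residuals."""
    f0 = sum(y) / len(y)
    pred = [f0] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        thr, lm, rm = fit_stump(x, resid)
        stumps.append((thr, lm, rm))
        pred = [p + lr * (lm if xi <= thr else rm)
                for p, xi in zip(pred, x)]
    return f0, lr, stumps

def predict(model, xi):
    f0, lr, stumps = model
    return f0 + sum(lr * (lm if xi <= thr else rm)
                    for thr, lm, rm in stumps)

# hypothetical training data: carbon density jumps with canopy height
heights = [1, 2, 3, 4, 5, 11, 12, 13, 14, 15]
carbon = [5.0] * 5 + [100.0] * 5
model = boost(heights, carbon)
```

    The shrinkage factor `lr` trades fitting speed for robustness; production libraries add tree depth, subsampling and cross-validated stopping on top of this core loop.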

  9. Modeling Global-Scale Craters in Rubble Piles

    Science.gov (United States)

    Gabriel, Travis; Korycansky, D. G.; Asphaug, E.

    2012-10-01

    We model large craters in rubble piles by initiating a velocity field according to the Maxwell Z-Model in a simulated rubble pile. Open Dynamics Engine (ODE, www.ode.org), as used in similar studies (Korycansky and Asphaug 2009, Korycansky and Plesko 2011), is used here to simulate icosahedra of varied dimensions bound by self-gravity and friction. ODE employs sophisticated collision detection and constraint-force solvers in addition to solving the equations of rigid body motion. The engine has been benchmarked in situations where solutions are readily available through laboratory studies or analytical calculation, such as the angle of repose of polyhedra in an open-walled box and a rectangular box sliding on an inclined plane (Korycansky and Asphaug 2010 LPSC). Using solutions of the Z-Model as an initial condition for crater excavation greatly reduces the number of parameters to study. Dynamical evolution of the velocity field is followed for several gravitational timescales. Our simple study, a precursor to using explicit SPH-derived flow fields, does not consider the time-dependent nature of the radial velocity flow-strength term, α(t), or the Z-Model's velocity-field contribution beyond t = 0. A constant value of Z = 2.7 is used. The simulation serves as a laboratory for investigating the flow-strength term in observed craters such as the Stickney crater on Phobos, and as a testbed for exploring the physical effects of shear bulking and other aspects of granular flow in asteroid and comet collisional evolution.
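
    In the Maxwell Z-Model the excavation flow has a radial speed that falls off as a power law from the impact point, v_r = α / r^Z. A minimal sketch of the t = 0 radial field used to seed such a simulation, ignoring the angular streamline structure of the full model (α value and units hypothetical):

```python
def z_model_velocity(pos, alpha=1.0, Z=2.7):
    """Radial velocity v_r = alpha / r**Z directed away from the source
    at the origin, as in the Maxwell Z-Model initial condition."""
    x, y, z = pos
    r = (x * x + y * y + z * z) ** 0.5
    vr = alpha / r ** Z        # flow-strength term alpha held constant (t = 0)
    return (vr * x / r, vr * y / r, vr * z / r)

v = z_model_velocity((0.0, 0.0, 2.0))
```

    Seeding each rigid body in the pile with this velocity and then letting the dynamics engine evolve the system is what replaces an explicit impact simulation in the setup described above.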

  10. Final Report for Enhancing the MPI Programming Model for PetaScale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, William Douglas [University of Illinois at Urbana-Champaign

    2013-07-22

    This project performed research into enhancing the MPI programming model in two ways: developing improved algorithms and implementation strategies, tested and realized in the MPICH implementation, and exploring extensions to the MPI standard to better support PetaScale and ExaScale systems.

  11. Scaling-up spatially-explicit ecological models using graphics processors

    NARCIS (Netherlands)

    Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis

    2011-01-01

    How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to

  12. A Structural Equation Modelling of the Academic Self-Concept Scale

    Science.gov (United States)

    Matovu, Musa

    2014-01-01

    The study aimed at validating the academic self-concept scale by Liu and Wang (2005) in measuring academic self-concept among university students. Structural equation modelling was used to validate the scale, which was composed of two subscales: academic confidence and academic effort. The study was conducted on university students; males and…

  13. Efficient model predictive control for large-scale urban traffic networks

    NARCIS (Netherlands)

    Lin, S.

    2011-01-01

    Model Predictive Control is applied to control and coordinate large-scale urban traffic networks. However, due to the large scale or the nonlinear, non-convex nature of the on-line optimization problems solved, the MPC controllers become real-time infeasible in practice, even though the problem is

  14. Finite-size scaling of interface free energies in the 3d Ising model

    CERN Document Server

    Pepé, M; Forcrand, Ph. de

    2002-01-01

    We perform a study of the universality of the finite size scaling functions of interface free energies in the 3d Ising model. Close to the hot/cold phase transition, we observe very good agreement with the same scaling functions of the 4d SU(2) Yang--Mills theory at the deconfinement phase transition.

  15. Multi-scale modeling with cellular automata: The complex automata approach

    NARCIS (Netherlands)

    Hoekstra, A.G.; Falcone, J.-L.; Caiazzo, A.; Chopard, B.

    2008-01-01

    Cellular Automata are commonly used to describe complex natural phenomena. In many cases it is required to capture the multi-scale nature of these phenomena. A single Cellular Automata model may not be able to efficiently simulate a wide range of spatial and temporal scales. It is our goal to

  16. A Fuel-Sensitive Reduced-Order Model (ROM) for Piston Engine Scaling Analysis

    Science.gov (United States)

    2017-09-29

    single-cylinder moving piston case near top dead center at diesel-engine conditions. The ROM provides a real-time engineering analytical tool for liquid... length scaling that may be used toward optimizing engine performance. Subject terms: reduced-order model, ROM, engine scaling, spray..., diesel engine. Approved for public release; distribution is unlimited.

  17. Multi-scale climate modelling over Southern Africa using a variable ...

    African Journals Online (AJOL)

    Evidence is provided of the successful application of a single atmospheric model code at time scales ranging from short-range weather forecasting through to projections of future climate change, and at spatial scales that vary from relatively low-resolution global simulations, to ultra-high resolution simulations at the ...

  18. Validation Of Naval Platform Electromagnetic Tools Via Model And Full-Scale Measurements

    NARCIS (Netherlands)

    van der Graaff, Jasper; Leferink, Frank Bernardus Johannes

    2004-01-01

    Reliable EMC predictions are very important in the design of a naval platform's topside. Currently, EMC predictions for a navy ship are verified by scale-model and full-scale measurements. In the near future, the validation of software tools will lead to increased confidence in EMC predictions and

  19. Erosion and sedimentation models in New Zealand: spanning scales, processes and environments

    Science.gov (United States)

    Elliott, Sandy; Oehler, Francois; Derose, Ron

    2010-05-01

    Erosion and sedimentation are of keen interest in New Zealand due to pasture loss in hill areas, damage to infrastructure, loss of stream conveyance, and ecological impacts in estuarine and coastal areas. Management of these impacts requires prediction of the rates, locations, and timing of erosion and transport across a range of scales, and prediction of the response to intervention measures. A range of models has been applied in New Zealand to address these requirements, including: empirical models for the location and probability of occurrence of shallow landslides; empirical national-scale sediment load models with spatial and temporal downscaling; dynamic field-scale sheet erosion models upscaled and linked to estuarine deposition models, including assessment of climate change and effects of urbanisation; detailed (20 m) physically-based distributed dynamic catchment models applied at the catchment scale; and provision of GIS-based decision support tools. Despite these advances, considerable work is required to provide the right information at the right scale. Remaining issues include: linking control measures described at the scale of implementation (parts of hillslopes, reaches) to catchment-scale outcomes, which entails fine spatial resolution and large computational demands; the ability to predict some key processes, such as bank and gully-head erosion; representation of the remobilisation of sediment stores associated with the response to land clearance; the ability to represent episodic or catastrophic erosion processes alongside relatively continuous processes, such as sheet flow, in a single model; and prediction of sediment concentrations and clarity under normal flow conditions. In this presentation we describe a variety of models and their application in New Zealand, summarise the models in terms of scales, complexity and uses, and outline approaches to resolving the remaining difficulties.

  20. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  1. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Science.gov (United States)

    Mann, Richard P; Perna, Andrea; Strömbom, Daniel; Garnett, Roman; Herbert-Read, James E; Sumpter, David J T; Ward, Ashley J W

    2012-01-01

    Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects. © 2012 Mann et al.
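
    Bayesian model selection of this kind weighs goodness of fit against model complexity via the marginal likelihood. A toy sketch using the Bayesian Information Criterion (BIC) as the standard asymptotic approximation; the log-likelihoods and parameter counts below are entirely hypothetical stand-ins for the three model classes discussed:

```python
import math

def bic(log_lik, k, n):
    """BIC approximates -2 * log(marginal likelihood) for large n;
    lower values indicate the preferred model."""
    return k * math.log(n) - 2.0 * log_lik

def select(models, n):
    """models: dict of name -> (maximised log-likelihood, n_parameters)."""
    scores = {name: bic(ll, k, n) for name, (ll, k) in models.items()}
    return min(scores, key=scores.get), scores

# hypothetical fits of three model classes to the same n = 1000 observations
best, scores = select({
    "mean-field":    (-1450.0, 2),
    "markovian-spp": (-1380.0, 4),
    "non-markovian": (-1365.0, 6),
}, n=1000)
```

    The extra parameters of the non-Markovian model are retained only if the likelihood gain outweighs the complexity penalty k·log(n), which is the trade-off the paper's (fuller, simulation-based) marginal-likelihood comparison formalises.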

  2. Analytical model of the statistical properties of contrast of large-scale ionospheric inhomogeneities.

    Science.gov (United States)

    Vsekhsvyatskaya, I. S.; Evstratova, E. A.; Kalinin, Yu. K.; Romanchuk, A. A.

    1989-08-01

    A new analytical model is proposed for the distribution of variations of the relative electron-density contrast of large-scale ionospheric inhomogeneities. The model is characterized by nonzero skewness and kurtosis. It is shown that the model is applicable in the interval of horizontal dimensions of inhomogeneities from hundreds to thousands of kilometers.

  3. Advancing Ecological Models to Compare Scale in Multi-Level Educational Change

    Science.gov (United States)

    Woo, David James

    2016-01-01

    Education systems as units of analysis have been metaphorically likened to ecologies to model change. However, ecological models to date have been ineffective in modelling educational change that is multi-scale and occurs across multiple levels of an education system. Thus, this paper advances two innovative, ecological frameworks that improve on…

  4. CFD MODELING OF FINE SCALE FLOW AND TRANSPORT IN THE HOUSTON METROPOLITAN AREA, TEXAS

    Science.gov (United States)

    Fine-scale modeling of flows and air quality in Houston, Texas is being performed; computational fluid dynamics (CFD) modeling is being applied to investigate the influence of morphologic structures on the within-grid transport and dispersion of sources in grid models ...

  5. MultiMetEval : Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models

    NARCIS (Netherlands)

    Zakrzewski, Piotr; Medema, Marnix H.; Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko; Fong, Stephen S.

    2012-01-01

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the

  6. Multi-scale friction modeling for sheet metal forming: the boundary lubrication regime

    NARCIS (Netherlands)

    Hol, J.D.; Meinders, Vincent T.; de Rooij, Matthias B.; van den Boogaard, Antonius H.

    2015-01-01

    A physical based friction model is presented to describe friction in full-scale forming simulations. The advanced friction model accounts for the change in surface topography and the evolution of friction in the boundary lubrication regime. The implementation of the friction model in FE software

  7. TIGER: Toolbox for integrating genome-scale metabolic models, expression data, and transcriptional regulatory networks

    Directory of Open Access Journals (Sweden)

    Jensen Paul A

    2011-09-01

    Full Text Available Background: Several methods have been developed for analyzing genome-scale models of metabolism and transcriptional regulation. Many of these methods, such as Flux Balance Analysis, use constrained optimization to predict relationships between metabolic flux and the genes that encode and regulate enzyme activity. Recently, mixed integer programming has been used to encode these gene-protein-reaction (GPR) relationships into a single optimization problem, but these techniques are often of limited generality and lack a tool for automating the conversion of rules to a coupled regulatory/metabolic model. Results: We present TIGER, a Toolbox for Integrating Genome-scale Metabolism, Expression, and Regulation. TIGER converts a series of generalized, Boolean or multilevel rules into a set of mixed integer inequalities. The package also includes implementations of existing algorithms to integrate high-throughput expression data with genome-scale models of metabolism and transcriptional regulation. We demonstrate how TIGER automates the coupling of a genome-scale metabolic model with GPR logic and models of transcriptional regulation, thereby serving as a platform for algorithm development and large-scale metabolic analysis. Additionally, we demonstrate how TIGER's algorithms can be used to identify inconsistencies and improve existing models of transcriptional regulation, with examples from the reconstructed transcriptional regulatory network of Saccharomyces cerevisiae. Conclusion: The TIGER package provides a consistent platform for algorithm development and for extending existing genome-scale metabolic models with regulatory networks and high-throughput data.
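
    The core conversion described here, turning Boolean gene-protein-reaction rules into mixed-integer inequalities, can be illustrated with the standard linearisations of AND and OR over binary variables. This is a generic sketch of the technique, not TIGER's actual code; the rule (g1 AND g2) OR g3 is an invented example:

```python
from itertools import product

def and_feasible(y, xs):
    # y = AND(xs):  y <= x_i for all i;  y >= sum(x_i) - (n - 1)
    return all(y <= x for x in xs) and y >= sum(xs) - (len(xs) - 1)

def or_feasible(y, xs):
    # y = OR(xs):  y >= x_i for all i;  y <= sum(x_i)
    return all(y >= x for x in xs) and y <= sum(xs)

def gpr_activity(g1, g2, g3):
    """Enzyme availability under the rule (g1 AND g2) OR g3, recovered
    purely from the inequality encodings by enumerating the binary
    auxiliary variables (a MILP solver would do this implicitly)."""
    for a, r in product((0, 1), repeat=2):
        if and_feasible(a, (g1, g2)) and or_feasible(r, (a, g3)):
            return r
    raise AssertionError("no feasible assignment")
```

    Because each linearisation forces its auxiliary variable to the Boolean value of its inputs, the inequalities can sit alongside flux-balance constraints in one mixed-integer program, which is the coupling the toolbox automates.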

  8. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large- and small-scale effects are essential to predict the role of these processes in the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  9. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted extensively. Among the controlling factors, gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies, and only a few studies considered larger gravitational accelerations by using a centrifuge (an apparatus generating large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down and accelerated deformation driven by density differences, such as salt diapirism, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows scale models with a surface area of up to 70 by 70 cm under a maximum capacity of 240 g-tons. Using the centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of a back-arc basin. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
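
    The rationale for centrifuge modelling is that spinning a 1/N-scale model at N g preserves self-weight stresses (σ = ρ g h), so stress-dependent material behaviour is reproduced at small size. A quick check of that similarity rule with illustrative, hypothetical numbers:

```python
def selfweight_stress(rho, g, depth):
    """Vertical overburden stress, sigma = rho * g * h (Pa)."""
    return rho * g * depth

# prototype: 700 m of crust-analogue material at 1 g (hypothetical values)
rho, g0, H, N = 2000.0, 9.81, 700.0, 1000.0   # kg/m3, m/s2, m, g-level

proto = selfweight_stress(rho, g0, H)          # prototype stress at depth H
model = selfweight_stress(rho, N * g0, H / N)  # 0.7 m model spun at 1000 g
```

    The factor N cancels between the increased acceleration and the reduced depth, so the model reaches prototype stress levels, which is why centrifuge capacity (here up to 240 g-tons) sets the trade-off between model size and attainable g-level.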

  10. Model Scaling Approach for the GOCE End to End Simulator

    Science.gov (United States)

    Catastini, G.; De Sanctis, S.; Dumontel, M.; Parisch, M.

    2007-08-01

    The Gravity field and steady-state Ocean Circulation Explorer (GOCE) is the first core Earth explorer of ESA's Earth observation programme of satellites for research in the Earth sciences. The objective of the mission is to produce high-accuracy, high-resolution, global measurements of the Earth's gravity field, leading to improved geopotential and geoid (the equipotential surface corresponding to the steady-state sea level) models for use in a wide range of geophysical applications. More precisely, the GOCE mission is designed to provide a global reconstruction of the geopotential model and geoid with high spatial resolution (better than 0.1 cm at degree and order l = 50 and better than 1.0 cm at degree and order l = 200). Such a scientific performance scenario requires at least the computation of 200 harmonics of the gravitational field and a simulated time span covering a minimum of 60 days (corresponding to full coverage of the Earth's surface). Thales Alenia Space Italia (TAS-I) is responsible, as Prime Contractor, for the GOCE satellite. The GOCE mission objective is the high-accuracy retrieval of the Earth gravity field. The idea of an End-to-End simulator (E2E) was conceived in the early stages of the GOCE programme as an essential tool for supporting the design and verification activities as well as for assessing the satellite system performance. The simulator in its present form has been developed at TAS-I for ESA since the beginning of Phase B and is currently used for: checking the consistency of spacecraft and payload specifications with the overall system requirements; supporting trade-off, sensitivity and worst-case analyses; supporting design and pre-validation testing of the Drag-Free and Attitude Control (DFAC) laws; preparing and testing the on-ground and in-flight gradiometer calibration concepts; and prototyping the post-processing algorithms, transforming the scientific data from Level 0 (raw telemetry format) to Level 1B (i.e. geo-located gravity

  11. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    Science.gov (United States)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land-surface model instances, rather than passing averaged atmospheric variables to a single instance of a land-surface model, is the logical next step in model development and has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced

  12. Nano-scaled semiconductor devices physics, modelling, characterisation, and societal impact

    CERN Document Server

    Gutiérrez-D, Edmundo A

    2016-01-01

    This book describes methods for the characterisation, modelling, and simulation-based prediction of second-order effects in order to optimise the performance, energy efficiency and new uses of nano-scaled semiconductor devices.

  13. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  14. Submillimeter-Wave Polarimetric Compact Ranges for Scale-Model Radar Measurements

    National Research Council Canada - National Science Library

    Coulombe, Michael J; Waldman, Jerry; Giles, R. H; Gatesman, Andrew J; Goyette, Thomas M; Nixon, William

    2002-01-01

    .... A dielectric material fabrication and characterization capability has also been developed to fabricate custom anechoic materials for the ranges as well as scaled dielectric parts for the models and clutter scenes...

  15. A 160 GHZ Polarimetric Compact Range for Scale Model RCS Measurements

    National Research Council Canada - National Science Library

    Coulombe, Michael J; Horgan, T; Waldman, Jerry; Neilson, J; Carter, S; Nixon, William

    1996-01-01

    ...:16th scale-model targets. The transceiver consists of a fast switching, stepped, continuous wave, X-band synthesizer driving dual X16 transmit multiplier chains and dual X16 local oscillator multiplier chains...

  16. Scaled Model Technology for Flight Research of General Aviation Aircraft Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Our proposed future Phase II activities are aimed at developing a scientifically based "tool box" for flight research using scaled models. These tools will be of...

  17. Fractal Scaling Models of Natural Oscillations in Chain Systems and the Mass Distribution of Particles

    Directory of Open Access Journals (Sweden)

    Müller H.

    2010-04-01

    Full Text Available The paper presents a fractal scaling model of a chain system of quantum harmonic oscillators that reproduces some systematic features in the mass distribution of hadrons, leptons and gauge bosons.

  18. Dual-time scale crystal plasticity FE model for cyclic deformation of Ti alloys

    Science.gov (United States)

    Manchiraju, Sivom; Kirane, Kedar; Ghosh, Somnath

    2007-12-01

    A dual-time-scale finite element model is developed in this paper for simulating cyclic deformation in the titanium alloy Ti-6242. The material is characterized by crystal plasticity constitutive relations. Modeling cyclic deformation with conventional single-time-scale integration algorithms can be prohibitive for crystal plasticity computations: typically, 3D crystal-plasticity-based fatigue simulations found in the literature are limited to about 100 cycles, and results are then extrapolated to thousands of cycles, which can lead to considerable error in fatigue predictions. The dual-time-scale model, by contrast, enables simulations up to the significantly higher numbers of cycles needed to reach local states of damage initiation leading to fatigue crack growth. The formulation decomposes the governing equations into two sets of problems, corresponding to a coarse-time-scale (low-frequency) cycle-averaged problem and a fine-time-scale (high-frequency) oscillatory problem. A statistically equivalent 3D polycrystalline model of Ti-6242 is simulated by the crystal plasticity finite element model to study the evolution of local stresses and strains in the microstructure under cyclic loading. Comparison with the single-time-scale reference solution shows excellent accuracy, while the efficiency gained through time-scale compression can be enormous.
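
    The coarse/fine split described in this record can be illustrated with a toy signal: a slow ratcheting drift plus a fast load-cycle oscillation, separated by averaging over one cycle. This is a generic two-time-scale sketch (the signal, period, and moving-average filter are all invented for illustration), not the paper's crystal-plasticity formulation.

```python
import numpy as np

T = 1.0                                            # assumed load period
t = np.linspace(0.0, 50.0, 5000)
u = 0.01 * t + 0.5 * np.sin(2 * np.pi * t / T)     # drift (ratcheting) + oscillation

# Coarse scale: cycle average via a one-period moving window.
n_per_cycle = int(round(T / (t[1] - t[0])))
kernel = np.ones(n_per_cycle) / n_per_cycle
u_coarse = np.convolve(u, kernel, mode="same")     # cycle-averaged problem
u_fine = u - u_coarse                              # fast oscillatory problem

# Away from the window edges the coarse part tracks the slow drift and the
# fine part averages to ~0 over each cycle.
interior = slice(n_per_cycle, -n_per_cycle)
print(abs(u_fine[interior].mean()))
```

    The coarse problem can then be advanced with large time steps while the fine problem retains the within-cycle oscillation, which is the essence of the time-scale compression claimed above.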

  19. Application of Scaling-Law and CFD Modeling to Hydrodynamics of Circulating Biomass Fluidized Bed Gasifier

    Directory of Open Access Journals (Sweden)

    Mazda Biglari

    2016-06-01

    Full Text Available Two modeling approaches, the scaling-law and CFD (Computational Fluid Dynamics) approaches, are presented in this paper. To save on the experimental cost of the pilot plant, the scaling-law approach was adopted as a low-computational-cost method, and a small-scale column operating under ambient temperature and pressure was built. A series of laboratory tests and computer simulations were carried out to evaluate the hydrodynamic characteristics of a pilot fluidized-bed biomass gasifier. Solids were fluidized in the small-scale column, and the pressure and other hydrodynamic properties were monitored to validate the scaling-law application. In addition to the scaling-law method, 2D CFD models were developed to simulate the gas-particle system and the hydrodynamic regime in the small column. The simulation results were validated against the experimental data from the small column, showing that the CFD model was able to accurately predict the hydrodynamics of the small column. The outcomes of this research present both the scaling law, at lower computational cost, and CFD modeling, as a more robust method, to suit various needs in the design of fluidized-bed gasifiers.
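
    The scaling-law approach rests on matching dimensionless groups between the hot pilot unit and the ambient cold model. A minimal sketch, assuming a simplified Glicksman-type set (Froude number, solid-to-gas density ratio, and excess-velocity ratio) and entirely invented operating numbers:

```python
def scaling_groups(u0, D, rho_s, rho_g, u_mf, g=9.81):
    """Simplified fluidized-bed scaling set (assumed form): two units are
    hydrodynamically similar when these dimensionless groups match."""
    return {
        "Froude": u0**2 / (g * D),        # inertia vs gravity
        "density_ratio": rho_s / rho_g,   # solid / gas density
        "velocity_ratio": u0 / u_mf,      # superficial / minimum fluidization
    }

# Hypothetical hot pilot gasifier vs. a 1:4 cold laboratory column:
pilot = scaling_groups(u0=0.8, D=0.4, rho_s=1300.0, rho_g=0.35, u_mf=0.2)
cold = scaling_groups(u0=0.4, D=0.1, rho_s=4457.0, rho_g=1.2, u_mf=0.1)
print(pilot["Froude"], cold["Froude"])   # matched by construction
```

    Halving the velocity while quartering the bed diameter keeps the Froude number fixed; the cold-model particle density is then chosen to preserve the density ratio with ambient gas.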

  20. Drive Rig Mufflers for Model Scale Engine Acoustic Testing

    Science.gov (United States)

    Stephens, David

    2010-01-01

    Testing of air breathing propulsion systems in the 9x15 foot wind tunnel at NASA Glenn Research Center depends on compressed air turbines for power. The drive rig turbines exhaust directly to the wind tunnel test section, and have been found to produce significant unwanted noise that reduces the quality of the acoustic measurements of the model being tested. In order to mitigate this acoustic contamination, a muffler can be attached downstream of the drive rig turbine. The modern engine designs currently being tested produce much less noise than traditional engines, and consequently a lower noise floor is required of the facility. An acoustic test of a muffler designed to mitigate this extraneous noise is presented, and a noise reduction of 8 dB between 700 Hz and 20 kHz was documented, significantly improving the quality of acoustic measurements in the facility.

  1. Regional-Scale Climate Change: Observations and Model Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, Raymond S; Diaz, Henry F

    2010-12-14

    This collaborative proposal addressed key issues in understanding the Earth's climate system, as highlighted by the U.S. Climate Science Program. The research focused on documenting past climatic changes and on assessing future climatic changes based on suites of global and regional climate models. Geographically, our emphasis was on the mountainous regions of the world, with a particular focus on the Neotropics of Central America and the Hawaiian Islands. Mountain regions are zones where large variations in ecosystems occur due to the strong climate zonation forced by the topography. These areas are particularly susceptible to changes in critical ecological thresholds, and we conducted studies of changes in phenological indicators based on various climatic thresholds.

  2. Numerical modelling of wave current interactions at a local scale

    Science.gov (United States)

    Teles, Maria João; Pires-Silva, António A.; Benoit, Michel

    2013-08-01

    The present work is focused on the evaluation of wave-current interactions through numerical simulations of combined wave and current flows with the Code_Saturne (Archambeau et al., 2004), an advanced CFD solver based on the RANS (Reynolds Averaged Navier-Stokes) equations. The objectives of this paper are twofold. Firstly, changes in the mean horizontal velocity and the horizontal-velocity amplitude profiles are studied when waves are superposed on currents. The influence of various first and second order turbulence closure models is addressed. The results of the numerical simulations are compared to the experimental data of Klopman (1994) and Umeyama (2005). Secondly, a more detailed study of the shear stresses and the turbulence viscosity vertical profile changes is also pursued when waves and currents interact. This analysis is completed using the data from Umeyama (2005). A relationship between a non-dimensional parameter involving the turbulence viscosity and the Ursell number is subsequently proposed.

  3. Predictive spatial modelling for mapping soil salinity at continental scale

    Science.gov (United States)

    Bui, Elisabeth; Wilford, John; de Caritat, Patrice

    2017-04-01

    Soil salinity is a serious limitation to agriculture and one of the main causes of land degradation. Soil is considered saline if its electrical conductivity (EC) is > 4 dS/m. Maps of saline soil distribution are essential for appropriate land development. Previous attempts to map soil salinity over extensive areas have relied on satellite imagery, airborne electromagnetic (EM) and/or proximally sensed EM data; other environmental (climate, topographic, geologic or soil) datasets are generally not used. Having successfully modelled and mapped calcium carbonate distribution over the 0-80 cm depth in Australian soils using machine learning with point samples from the National Geochemical Survey of Australia (NGSA), we took a similar approach to map soil salinity at 90-m resolution over the continent. The input data were the EC1:5 measurements from the NGSA samples. The machine learning software 'Cubist' (www.rulequest.com) was used as the inference engine for the modelling, a 90:10 training:test set data split was used to validate results, and 100 randomly sampled trees were built using the training data. The results were good, with an average internal correlation (r) of 0.88 between predicted and measured logEC1:5 (training data), an average external correlation of 0.48 (test subset), and a Lin's concordance correlation coefficient (which evaluates the 1:1 fit) of 0.61. The rules derived were therefore mapped, and the mean prediction for each 90-m pixel was used for the final logEC1:5 map. This is the most detailed picture of soil salinity over Australia since the 2001 National Land and Water Resources Audit and is generally consistent with it. Our map will be useful as a baseline salinity map circa 2008, when the NGSA samples were collected, for future State of the Environment reports.
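
    The 1:1-fit measure quoted in this record, Lin's concordance correlation coefficient, penalizes both scatter and systematic offset, unlike Pearson's r. A minimal sketch with invented data:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0])
print(lin_ccc(obs, obs))        # perfect 1:1 agreement -> 1.0
print(lin_ccc(obs, obs + 1.0))  # constant bias lowers CCC although Pearson r stays 1
```

    This is why CCC (0.61 here) can sit below the plain correlation on the same predictions: any bias away from the 1:1 line is counted against the model.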

  4. Spatial connections in regional climate model rainfall outputs at different temporal scales: Application of network theory

    Science.gov (United States)

    Naufan, Ihsan; Sivakumar, Bellie; Woldemeskel, Fitsum M.; Raghavan, Srivatsan V.; Vu, Minh Tue; Liong, Shie-Yui

    2018-01-01

    Understanding the spatial and temporal variability of rainfall has always been a great challenge, and the impacts of climate change further complicate this issue. The present study employs the concepts of complex networks to study the spatial connections in rainfall, with emphasis on climate change and rainfall scaling. Rainfall outputs (during 1961-1990) from a regional climate model (i.e. the Weather Research and Forecasting (WRF) model that downscaled the European Centre for Medium-range Weather Forecasts, ECMWF ERA-40 reanalyses) over Southeast Asia are studied, and data corresponding to eight different temporal scales (6-hr, 12-hr, daily, 2-day, 4-day, weekly, biweekly, and monthly) are analyzed. Two network-based methods are applied to examine the connections in rainfall: the clustering coefficient (a measure of the network's local density) and the degree distribution (a measure of the network's spread). The influence of the rainfall correlation threshold (T) on spatial connections is also investigated by considering seven different threshold levels (ranging from 0.5 to 0.8). The results indicate that: (1) rainfall networks corresponding to much coarser temporal scales exhibit properties similar to those of small-world networks, regardless of the threshold; (2) rainfall networks corresponding to much finer temporal scales may be classified as either small-world networks or scale-free networks, depending upon the threshold; and (3) rainfall spatial connections exhibit a transition phase at intermediate temporal scales, especially at high thresholds. These results suggest that the most appropriate model for studying spatial connections may often be different at different temporal scales, and that a combination of small-world and scale-free network models might be more appropriate for rainfall upscaling/downscaling across all scales, in the strict sense of scale-invariance. The results also suggest that spatial connections in the studied rainfall networks in Southeast Asia are ...
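
    The network construction described in this record, thresholding a correlation matrix and computing local clustering coefficients, can be sketched generically. The station series are random stand-ins and the threshold is illustrative, not the study's 0.5-0.8 range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "stations": 10 synthetic time series of 500 steps each.
series = rng.standard_normal((10, 500))
corr = np.corrcoef(series)
T = 0.05                                        # illustrative correlation threshold
adj = (np.abs(corr) > T) & ~np.eye(10, dtype=bool)   # edge where |corr| exceeds T

def clustering(adj, i):
    """Local clustering coefficient: fraction of node i's neighbour pairs
    that are themselves connected."""
    nbrs = np.flatnonzero(adj[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = adj[np.ix_(nbrs, nbrs)].sum() / 2   # edges among the neighbours
    return 2.0 * links / (k * (k - 1))

coeffs = [clustering(adj, i) for i in range(10)]
print(coeffs)
```

    Raising T sparsifies the adjacency matrix, which is how the study probes the sensitivity of small-world versus scale-free behaviour to the threshold.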

  5. Thermodynamic modeling of small scale biomass gasifiers: Development and assessment of the ''Multi-Box'' approach.

    Science.gov (United States)

    Vakalis, Stergios; Patuzzi, Francesco; Baratieri, Marco

    2016-04-01

    Modeling can be a powerful tool for designing and optimizing gasification systems. Modeling applications for small-scale/fixed-bed biomass gasifiers are of particular interest because of their increasing commercial deployment. Fixed-bed gasifiers are characterized by a wide range of operational conditions and are multi-zoned processes. The reactants are distributed in different phases, and the products from each zone influence the following process steps and thus the composition of the final products. The present study aims to improve conventional 'Black-Box' thermodynamic modeling by developing multiple intermediate 'boxes' that calculate two-phase (solid-vapor) equilibria in small-scale gasifiers; the model is therefore named 'Multi-Box'. Experimental data from a small-scale gasifier have been used for the validation of the model. The returned results are significantly closer to the actual case-study measurements than those of single-stage thermodynamic modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Bridging scales through multiscale modeling: a case study on protein kinase A

    OpenAIRE

    Britton W Boras; Sophia P Hirakis; Votapka, Lane W; Malmstrom, Robert D.; Amaro, Rommie E.; McCulloch, Andrew D.

    2015-01-01

    The goal of multiscale modeling in biology is to use structurally based physico-chemical models to integrate across temporal and spatial scales of biology and thereby improve mechanistic understanding of, for example, how a single mutation can alter organism-scale phenotypes. This approach may also inform therapeutic strategies or identify candidate drug targets that might otherwise have been overlooked. However, in many cases, it remains unclear how best to synthesize information obtained fr...

  8. Ares I Scale Model Acoustic Test Above Deck Water Sound Suppression Results

    Science.gov (United States)

    Counter, Douglas D.; Houston, Janice D.

    2011-01-01

    The Ares I Scale Model Acoustic Test (ASMAT) program test matrix was designed to determine the acoustic reduction for the Liftoff acoustics (LOA) environment with an above deck water sound suppression system. The scale model test can be used to quantify the effectiveness of the water suppression system as well as optimize the systems necessary for the LOA noise reduction. Several water flow rates were tested to determine which rate provides the greatest acoustic reductions. Preliminary results are presented.

  9. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied across this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented, in particular results from using it to study precipitating systems and hurricanes/typhoons. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. Also how to ...

  10. Upscaling of Long-Term U(VI) Desorption from Pore Scale Kinetics to Field-Scale Reactive Transport Models

    Energy Technology Data Exchange (ETDEWEB)

    Andy Miller

    2009-01-25

    Environmental systems exhibit a range of complexities which exist at a range of length and mass scales. Within the realm of radionuclide fate and transport, much work has been focused on understanding pore scale processes where complexity can be reduced to a simplified system. In describing larger scale behavior, the results from these simplified systems must be combined to create a theory of the whole. This process can be quite complex, and lead to models which lack transparency. The underlying assumption of this approach is that complex systems will exhibit complex behavior, requiring a complex system of equations to describe behavior. This assumption has never been tested. The goal of the experiments presented is to ask the question: Do increasingly complex systems show increasingly complex behavior? Three experimental tanks at the intermediate scale (Tank 1: 2.4m x 1.2m x 7.6cm, Tank 2: 2.4m x 0.61m x 7.6cm, Tank 3: 2.4m x 0.61m x 0.61m (LxHxW)) have been completed. These tanks were packed with various physical orientations of different particle sizes of a uranium contaminated sediment from a former uranium mill near Naturita, Colorado. Steady state water flow was induced across the tanks using constant head boundaries. Pore water was removed from within the flow domain through sampling ports/wells; effluent samples were also taken. Each sample was analyzed for a variety of analytes relating to the solubility and transport of uranium. Flow fields were characterized using inert tracers and direct measurements of pressure head. The results show that although there is a wide range of chemical variability within the flow domain of the tank, the effluent uranium behavior is simple enough to be described using a variety of conceptual models. Thus, although there is a wide range in variability caused by pore scale behaviors, these behaviors appear to be smoothed out as uranium is transported through the tank. This smoothing of uranium transport behavior transcends ...
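
    One of the "variety of conceptual models" able to describe such smoothed effluent behaviour is a one-dimensional advection-dispersion breakthrough curve (the Ogata-Banks solution for continuous injection into a semi-infinite column). The column length, velocity, and dispersion coefficient below are hypothetical, not the tank values:

```python
import math

def breakthrough(x, t, v, D, R=1.0):
    """Relative concentration C/C0 at distance x and time t for continuous
    injection (Ogata-Banks); v = pore velocity, D = dispersion coefficient,
    R = linear retardation factor."""
    if t <= 0:
        return 0.0
    vr, Dr = v / R, D / R
    denom = 2.0 * math.sqrt(Dr * t)
    return 0.5 * (math.erfc((x - vr * t) / denom)
                  + math.exp(v * x / D) * math.erfc((x + vr * t) / denom))

# Hypothetical 2.4 m column, v = 0.1 m/h, D = 0.01 m2/h: just past one pore
# volume (t = x/v) the effluent concentration is rising through C/C0 ~ 0.5-0.6.
print(breakthrough(x=2.4, t=24.0, v=0.1, D=0.01))
```

    Fitting v, D, and R to the measured effluent curve is the sense in which pore-scale variability can be "smoothed out" into a few macroscopic parameters.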

  11. Modeling Brain Circuitry over a Wide Range of Scales

    Directory of Open Access Journals (Sweden)

    Pascal eFua

    2015-04-01

    Full Text Available If we are ever to unravel the mysteries of brain function at its most fundamental level, we will need a precise understanding of how its component neurons connect to each other. Electron microscopes (EM) can now provide the nanometer resolution that is needed to image synapses, and therefore connections, while light microscopes (LM) see at the micrometer resolution required to model the 3D structure of the dendritic network. Since both the topology and the connection strength are integral parts of the brain's wiring diagram, being able to combine these two modalities is critically important. In fact, these microscopes now routinely produce high-resolution imagery in such large quantities that the bottleneck becomes automated processing and interpretation, which is needed for such data to be exploited to its full potential. In this paper, we briefly review the Computer Vision techniques we have developed at EPFL to address this need. They include delineating dendritic arbors from LM imagery, segmenting organelles from EM, and combining the two into a consistent representation.

  12. INCAS 2.5D mid-scale model

    Directory of Open Access Journals (Sweden)

    Adrian DOBRE

    2010-09-01

    Full Text Available In the design of wing airfoils for transport aircraft, it is necessary to meet different requirements for distinct phases of flight, namely the cruise flight on one side and take-off and landing on the other. The disagreement between the requirements of cruise flight and those of landing, and especially of take-off, can be resolved by using high-lift systems: particular profiles at a certain offset from the main wing. Basically, high-lift configurations consisting of several individual elements can provide the best lift coefficient. Yet such complex systems, when compatible with the cruise profile, produce a large increase in the weight of the wing; in practice the number of elements is therefore not larger than five. In recent years, efforts in high-lift aerodynamics have aimed at reaching similar lift coefficients with less complex systems, and for transport aircraft of all sizes the state of the art is to use only a flap and a slat as high-lift devices. The high-lift model used within this project was designed and optimized as a three-element configuration.

  13. Modeling Small Scale Solar Powered ORC Unit for Standalone Application

    Directory of Open Access Journals (Sweden)

    Enrico Bocci

    2012-01-01

    Full Text Available When electricity from the grid is not available, generating electricity in remote areas is an essential challenge for satisfying important needs. In many developing countries, power generation from Diesel engines is the applied technical solution, but the cost and supply of fuel make the communities strongly dependent on external support. Alternatives to fuel combustion can be found in photovoltaic generators and, under suitable conditions, small wind turbines or micro-hydro plants. The aim of the paper is to simulate the power generation of a unit based on an organic Rankine cycle (ORC) using refrigerant R245fa as the working fluid. The generation unit has thermal solar panels as the heat source and photovoltaic modules for the needs of the auxiliary items (pumps, electronics, etc.). The paper illustrates the modeling of the system on the TRNSYS platform, highlighting standard and 'ad hoc' developed components as well as the global system efficiency. In the future, the results of the simulation will be compared with the data collected from the 3 kW prototype under construction at the Tuscia University in Italy.

  14. Allostery without conformation change: modelling protein dynamics at multiple scales

    Science.gov (United States)

    McLeish, T. C. B.; Rodgers, T. L.; Wilson, M. R.

    2013-10-01

    The original idea of Cooper and Dryden, that allosteric signalling can be induced between distant binding sites on proteins without any change in mean structural conformation, has proved to be a remarkably prescient insight into the rich structure of protein dynamics. It represents an alternative to the celebrated Monod-Wyman-Changeux mechanism and proposes that modulation of the amplitude of thermal fluctuations around a mean structure, rather than shifts in the structure itself, gives rise to allostery in ligand binding. In a complementary approach to experiments on real proteins, here we take a theoretical route to identify the necessary structural components of this mechanism. By reviewing and extending an approach that moves from very coarse-grained to more detailed models, we show that a fundamental requirement for a body supporting fluctuation-induced allostery is a strongly inhomogeneous elastic modulus. This requirement is reflected in many real proteins, where a good approximation of the elastic structure maps strongly coherent domains onto rigid blocks connected by more flexible interface regions.
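
    The role of an inhomogeneous elastic modulus can be sketched with a toy elastic network (not the paper's model): a tethered 1D chain of beads whose thermal fluctuation amplitudes follow the diagonal of the inverse stiffness matrix, with "ligand binding" modelled as stiffening one soft spring. The chain geometry and spring constants are invented:

```python
import numpy as np

def stiffness(ks):
    """Stiffness matrix of a bead chain tethered at both ends; ks lists the
    n+1 spring constants joining wall-bead-...-bead-wall."""
    n = len(ks) - 1
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = ks[i] + ks[i + 1]
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -ks[i + 1]
    return K

# Stiff "domains" (k = 1) linked by soft "interfaces" (k = 0.1).
ks = [1.0, 0.1, 1.0, 0.1, 1.0, 0.1, 1.0]
free = np.diag(np.linalg.inv(stiffness(ks)))       # <x_i^2> in units of kT

# Binding = stiffening the first soft spring; fluctuations change even at
# the far end of the chain, with no shift in mean positions.
ks_bound = ks.copy(); ks_bound[1] = 1.0
bound = np.diag(np.linalg.inv(stiffness(ks_bound)))
print(free[-1], bound[-1])
```

    The entropy of the fluctuations, and hence the binding free energy at the distant site, differs between the two states even though the mean structure is unchanged, which is the Cooper-Dryden picture in miniature.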

  15. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an ...
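
    The drought characteristics and the pooling feature analysed above can be sketched with a generic threshold-level method (this is an illustrative re-implementation, not WATCH code; the series, threshold, and pooling rule are invented):

```python
import numpy as np

def drought_events(series, threshold, pool_gap=2):
    """Threshold-level drought identification: runs below `threshold` are
    events; events separated by fewer than `pool_gap` wet steps are pooled
    into a single longer event."""
    below = series < threshold
    events, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i
        elif not b and start is not None:
            events.append([start, i - 1]); start = None
    if start is not None:
        events.append([start, len(below) - 1])
    pooled = []
    for ev in events:
        if pooled and ev[0] - pooled[-1][1] - 1 < pool_gap:
            pooled[-1][1] = ev[1]      # merge across a short wet spell
        else:
            pooled.append(ev)
    return pooled

flow = np.array([5, 2, 1, 4, 1, 1, 5, 5, 1, 5.0])
print(drought_events(flow, threshold=3))   # [[1, 5], [8, 8]]
```

    Applying the same extraction to precipitation, soil moisture, and runoff series shows the propagation signature the study looks for: fewer, longer events further along the hydrological cycle.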

  16. Multi-scale modelling of bioreactor–separator system for wastewater treatment with two-dimensional activated sludge floc dynamics

    NARCIS (Netherlands)

    Ofiţeru, I.D.; Bellucci, M.; Picioreanu, C.; Lavric, V.; Curtis, T.P.

    2013-01-01

    A simple “first generation” multi-scale computational model of the formation of activated sludge flocs at micro-scale and of reactor performance at macro-scale is proposed. The model couples mass balances for substrates and biomass at reactor scale with an individual-based approach for the floc ...

  17. Multi-scale Eulerian model within the new National Environmental Modeling System

    Science.gov (United States)

    Janjic, Zavisa; Janjic, Tijana; Vasic, Ratko

    2010-05-01

    The unified Non-hydrostatic Multi-scale Model on the Arakawa B grid (NMMB) is being developed at NCEP within the National Environmental Modeling System (NEMS). The finite-volume horizontal differencing employed in the model preserves important properties of differential operators and conserves a variety of basic and derived dynamical and quadratic quantities. Among these, conservation of energy and enstrophy improves the accuracy of the nonlinear dynamics of the model. In further model development, advection schemes of fourth-order formal accuracy have been developed. It is argued that higher-order advection schemes should not be used in the thermodynamic equation, in order to preserve consistency with the second-order scheme used for computation of the pressure gradient force; thus, the fourth-order scheme is applied only to momentum advection. Three sophisticated second-order schemes were considered for the upgrade. Two of them, proposed in Janjic (1984), conserve energy and enstrophy, but with enstrophy calculated differently: one conserves enstrophy as computed by the most accurate second-order Laplacian operating on the stream function, the other as computed from the B-grid velocity. The third scheme (Arakawa 1972) is the arithmetic mean of the former two. It does not conserve enstrophy strictly, but it conserves other quadratic quantities that control the nonlinear energy cascade. Linearization of all three schemes leads to the same second-order linear advection scheme. The second-order term of the truncation error of the linear advection scheme has a special form, so that it can be eliminated by simply preconditioning the advected quantity. Tests with linear advection of a cone confirm the advantage of the fourth-order scheme. However, if a localized, large-amplitude, high-wave-number pattern is present in the initial conditions, the clear advantage of the fourth-order scheme disappears. In real data runs, problems with noisy data may ...
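
    The second- versus fourth-order accuracy claim above can be checked with a generic grid-refinement test on the standard centered first-derivative stencils (these are textbook stencils on a periodic grid, not code from the NMMB): halving the grid spacing should cut the error roughly 4x and 16x respectively.

```python
import numpy as np

def ddx2(f, dx):
    """2nd-order centered first derivative on a periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def ddx4(f, dx):
    """4th-order centered first derivative on a periodic grid."""
    return (8.0 * (np.roll(f, -1) - np.roll(f, 1))
            - (np.roll(f, -2) - np.roll(f, 2))) / (12.0 * dx)

def max_err(deriv, n):
    """Max error of deriv applied to sin(x), whose exact derivative is cos(x)."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    return np.max(np.abs(deriv(np.sin(x), dx) - np.cos(x)))

for scheme in (ddx2, ddx4):
    print(max_err(scheme, 32) / max_err(scheme, 64))   # ~4 and ~16
```

    The same refinement argument is what makes the fourth-order scheme attractive for smooth momentum fields, and why its advantage evaporates for poorly resolved, high-wave-number patterns.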

  18. The Sum of the Parts: Large-scale Modeling in Systems Biology

    DEFF Research Database (Denmark)

    Gross, Fridolin; Green, Sara

    2017-01-01

    biology provides novel ways to recompose these findings in the context of the system as a whole via computational simulations. As an example of computational integration of modules, we analyze the first whole-cell model of the bacterium M. genitalium. Secondly, we examine the attempt to recompose...... processes across different spatial scales via multi-scale cardiac models. Although these models also rely on a number of idealizations and simplifying assumptions, we argue that they provide insight into the limitations of reductionist approaches. Whole-cell models can be used to discover properties arising......

  19. LES Modeling of Lateral Dispersion in the Ocean on Scales of 10 m to 10 km

    Science.gov (United States)

    2015-10-20

    Final Report: LES Modeling of Lateral Dispersion in the Ocean on Scales of 10 m to 10 km. M.-Pascale Lelong (Contract N00014-10-C-0080). The goal is to develop parameterizations of lateral dispersion in the ocean on scales of 0.1-10 km that can be implemented in larger-scale ocean models. These parameterizations will incorporate the effects of local... Distribution approved for public release; distribution is unlimited.

  20. Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate Scale Water Quality Model

    Energy Technology Data Exchange (ETDEWEB)

    Khangaonkar, Tarang; Sackmann, Brandon S.; Long, Wen; Mohamedali, Teizeen; Roberts, Mindy

    2012-10-01

    The Salish Sea, including Puget Sound, is a large estuarine system bounded by over seven thousand miles of complex shorelines; it consists of several subbasins and many large inlets with distinct properties of their own. Pacific Ocean water enters Puget Sound through the Strait of Juan de Fuca at depth over the Admiralty Inlet sill. Ocean water mixed with freshwater discharges from runoff, rivers, and wastewater outfalls exits Puget Sound through the brackish surface outflow layer. Nutrient pollution is considered one of the largest threats to Puget Sound, and there is considerable interest in understanding the effect of nutrient loads on the water quality and ecological health of Puget Sound in particular and the Salish Sea as a whole. The Washington State Department of Ecology (Ecology) contracted with Pacific Northwest National Laboratory (PNNL) to develop a coupled hydrodynamic and water quality model. The water quality model simulates algae growth, dissolved oxygen (DO), and nutrient dynamics in Puget Sound to inform potential Puget Sound-wide nutrient management strategies. Specifically, the project is expected to help determine 1) whether current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale, and 2) what level of nutrient reductions is necessary to reduce or control human impacts to DO levels in the sensitive areas. The project did not include any additional data collection but instead relied on currently available information. This report describes the model development effort conducted during the period 2009 to 2012 under a U.S. Environmental Protection Agency (EPA) cooperative agreement with PNNL, Ecology, and the University of Washington awarded under the National Estuary Program.

  1. Reactive transport in porous media: Pore-network model approach compared to pore-scale model

    Science.gov (United States)

    Varloteaux, Clément; Vu, Minh Tan; Békri, Samir; Adler, Pierre M.

    2013-02-01

    Accurate determination of three macroscopic parameters governing reactive transport in porous media, namely, the apparent solute velocity, the dispersion, and the apparent reaction rate, is of key importance for predicting solute migration through reservoir aquifers. Two methods are proposed to calculate these parameters as functions of the Péclet and the Péclet-Damköhler numbers. In the first method, called the pore-scale model (PSM), the porous medium is discretized by the level set method; the Stokes and convection-diffusion equations with reaction at the wall are solved by a finite-difference scheme. In the second method, called the pore-network model (PNM), the void space of the porous medium is represented by an idealized geometry of pore bodies joined by pore throats; the flow field is computed by solving Kirchhoff's laws, and transport calculations are performed in the asymptotic regime where the solute concentration undergoes an exponential evolution with time. Two synthetic geometries of porous media are addressed by using both numerical codes. The first geometry is constructed in order to validate the hypotheses implemented in PNM. PSM is also used for a better understanding of the various reaction patterns observed in the asymptotic regime. Despite the PNM approximations, a very good agreement between the models is obtained, which shows that PNM is an accurate description of reactive transport. PNM, which can address much larger pore volumes than PSM, is used to evaluate the influence of the concentration distribution on macroscopic properties of a large irregular network reconstructed from microtomography images. The role of the dimensionless numbers and of the location and size of the largest pore bodies is highlighted.
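
    The Kirchhoff-law flow step of a pore-network model can be illustrated with a minimal sketch. Everything below is a toy example, not the authors' PNM code: a four-pore network with hypothetical throat conductances, assembled into a conductance (Laplacian) matrix and solved for nodal pressures and throat flow rates.

    ```python
    import numpy as np

    # Toy pore network: 4 pore bodies connected by throats with hydraulic
    # conductances g (all connectivity and values are illustrative).
    # Kirchhoff's laws: net flow at each interior node is zero, with
    # throat flow q_ij = g_ij * (p_i - p_j).
    edges = {(0, 1): 2.0, (1, 2): 1.0, (1, 3): 1.5, (2, 3): 0.5}

    n = 4
    A = np.zeros((n, n))               # conductance (Laplacian) matrix
    for (i, j), g in edges.items():
        A[i, i] += g; A[j, j] += g
        A[i, j] -= g; A[j, i] -= g

    # Boundary conditions: fixed inlet pressure p0 = 1, outlet p3 = 0.
    b = np.zeros(n)
    for node, p_fixed in [(0, 1.0), (3, 0.0)]:
        A[node, :] = 0.0
        A[node, node] = 1.0
        b[node] = p_fixed

    p = np.linalg.solve(A, b)                # nodal pressures
    q_01 = edges[(0, 1)] * (p[0] - p[1])     # inlet throat flow rate
    ```

    Mass conservation can then be checked at the interior nodes: the flow entering pore 1 equals the sum of the flows leaving through its two downstream throats.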

  2. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    Energy Technology Data Exchange (ETDEWEB)

    Reyes, Luz M., E-mail: luzmarinareyes@gmail.com [Departamento de Matemáticas, Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara (UdG), Av. Revolución 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Moreno, Claudia, E-mail: claudia.moreno@cucei.udg.mx [Departamento de Matemáticas, Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara (UdG), Av. Revolución 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Madriz Aguilar, José Edgar, E-mail: edgar.madriz@red.cucei.udg.mx [Departamento de Matemáticas, Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara (UdG), Av. Revolución 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Bellini, Mauricio, E-mail: mbellini@mdp.edu.ar [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata (UNMdP), Funes 3350, C.P. 7600, Mar del Plata (Argentina); Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR) - Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) (Argentina)

    2012-10-22

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves produced during the early inflationary stage, within the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, can be derived from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates where gravity becomes repulsive in nature.

  3. A model for allometric scaling of mammalian metabolism with ambient heat loss

    KAUST Repository

    Kwak, Ho Sang

    2016-02-02

    Background: Allometric scaling, which describes the dependence of a biological trait or process on body size, is a long-standing subject in biological science. However, no previous study has considered heat loss to the ambient and an insulation layer representing mammalian skin and fur when deriving the scaling law of metabolism. Methods: A simple heat transfer model is proposed to analyze the allometry of mammalian metabolism. The present model extends existing studies by incorporating various external heat transfer parameters and additional insulation layers. The model equations were solved numerically and by an analytic heat balance approach. Results: A general observation is that the present heat transfer model predicted the 2/3 surface scaling law, which is primarily attributed to the dependence of the surface area on the body mass. External heat transfer effects introduced deviations in the scaling law, mainly due to natural convection heat transfer, which becomes more prominent at smaller mass. These deviations resulted in a slight modification of the scaling exponent to a value smaller than 2/3. Conclusion: The finding that additional radiative heat loss and the consideration of an outer insulating fur layer attenuate these deviations and render the scaling law closer to 2/3 provides in silico evidence for a functional impact of the heat transfer mode on the allometric scaling law in mammalian metabolism.
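
    The core heat-balance argument can be sketched numerically. The code below is an illustration under assumed parameter values (heat transfer coefficient, body and ambient temperatures, area prefactor), not the paper's model: it balances metabolic heat production against surface loss with A ∝ M^(2/3) and recovers the 2/3 exponent from a log-log fit.

    ```python
    import numpy as np

    # Steady-state heat balance (illustrative assumptions): metabolic heat
    # production Q equals surface loss h * A * (Tb - Ta), where the surface
    # area scales geometrically with body mass as A = c * M^(2/3).
    h, Tb, Ta = 10.0, 37.0, 20.0        # W/(m^2 K), body / ambient temp (C)
    mass = np.logspace(-2, 3, 50)       # body mass from 10 g to 1000 kg
    area = 0.1 * mass ** (2.0 / 3.0)    # A = c * M^(2/3), prefactor c assumed
    Q = h * area * (Tb - Ta)            # required metabolic rate (W)

    # Recover the allometric exponent from a log-log fit: should be ~2/3.
    slope = np.polyfit(np.log(mass), np.log(Q), 1)[0]
    ```

    Adding a mass-dependent convection coefficient or an insulation layer to the loss term is what shifts this fitted slope away from (or back toward) 2/3 in the paper's analysis.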

  4. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin

    2016-06-01

    In this paper, we develop a two-scale reduced model for simulating the Darcy flow in two-dimensional porous media with conductive fractures. We apply the approach motivated by the embedded fracture model (EFM) to simulate the flow on the coarse scale, and the effect of fractures on each coarse-scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system is resolved on an unstructured grid which represents the fractures accurately, while in the EFM used on the coarse scale, the flux interaction between fractures and matrix is treated as a source term, and the matrix-fracture system can be resolved on a structured grid. The Raviart-Thomas mixed finite element methods are used for the solution of the coupled flows in the matrix and the fractures on both fine and coarse scales. Numerical results are presented to demonstrate the efficiency of the proposed model for simulation of flow in fractured porous media.

  5. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    Science.gov (United States)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves produced during the early inflationary stage, within the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, can be derived from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates where gravity becomes repulsive in nature.

  6. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-22

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a multidimensional scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are sensitive, respectively, to the overall intensity and to the spatial extent of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
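
    The generic MDS step used to map rupture models into a low-dimensional point cloud can be sketched with classical (Torgerson) multidimensional scaling; this is the textbook method, not the authors' implementation, and the distance matrix below is a toy example rather than a real inter-model metric.

    ```python
    import numpy as np

    def classical_mds(D, k=2):
        """Classical multidimensional scaling: embed points in k dimensions
        from a matrix of pairwise distances D (generic textbook method)."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
        B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
        w, V = np.linalg.eigh(B)               # eigenvalues in ascending order
        idx = np.argsort(w)[::-1][:k]          # keep the k largest
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

    # Three hypothetical "models" at mutual distances 1, 1, 2 (collinear):
    # the 1-D embedding should reproduce the pairwise distances exactly.
    D = np.array([[0.0, 1.0, 2.0],
                  [1.0, 0.0, 1.0],
                  [2.0, 1.0, 0.0]])
    X = classical_mds(D, k=1)
    ```

    In the paper's setting, D would hold the normalized squared or grey-scale distances between rupture models, and the spread of the embedded points visualizes intra-event variability.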

  7. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    Full Text Available Inference of the interaction rules of animals moving in groups usually relies on an analysis of large-scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine-scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, ranging from a mean-field model where all prawns interact globally, through a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours, up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large-scale behaviour of the system, but does not capture the observed locality of interactions. Traditional self-propelled particle models fail to capture the fine-scale dynamics of the system. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics, while maintaining a biologically plausible perceptual range. We conclude that prawns' movements are influenced not just by the current direction of nearby conspecifics, but also by those encountered in the recent past. Given the simplicity of prawns as a study system, our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  8. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2016-05-01

    Full Text Available Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, all the more in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected by the 2002 flood of the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
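
    A stage-damage function of the simple kind the abstract contrasts with multi-variable models can be sketched as a piecewise-linear depth-damage curve; the breakpoints, building values, and water depths below are illustrative inventions, not values from the study or from FLEMO.

    ```python
    import numpy as np

    def stage_damage(depth_m):
        """Hypothetical piecewise-linear loss ratio in [0, 1] as a function
        of inundation depth in metres (breakpoints are illustrative)."""
        depths = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
        ratios = np.array([0.0, 0.15, 0.3, 0.55, 1.0])
        return np.interp(depth_m, depths, ratios)

    # Up-scaling sketch: the loss for a land-use unit is the exposure-weighted
    # sum over the buildings (or asset classes) it contains.
    building_values = np.array([200_000.0, 350_000.0, 150_000.0])  # EUR, assumed
    depths = np.array([0.4, 1.2, 2.5])                             # m, assumed
    total_loss = np.sum(building_values * stage_damage(depths))
    ```

    Multi-variable models replace the single depth predictor with several (e.g. building type, precaution, contamination), which is what makes area-wide parameter estimation harder at the meso-scale.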

  9. Meso-Scale Stochastic Model for Flow and Transport in Porous Media

    Science.gov (United States)

    Tartakovsky, A. M.; Tartakovsky, D. M.; Meakin, P.

    2008-12-01

    In a homogeneous porous medium, dispersive mixing is the result of a combination of molecular diffusion (diffusive mixing) and spreading due to variations in the fluid velocity (advective mixing). In traditional Darcy (continuum)-scale models this combination is treated as a Fickian diffusion process with a macro-scale effective diffusion coefficient (the dispersion coefficient). However, dispersive mixing is very different from purely diffusive mixing, and there is ample evidence that the advection-dispersion equations significantly over-predict the extent of reactions in mixing-induced chemical transformations. We have developed a new meso-scale stochastic Lagrangian particle model that treats advective mixing and diffusive mixing separately. We assume that fluid flow in homogeneous porous media is governed by a stochastic Langevin equation that is obtained by adding white-noise fluctuations to the momentum conservation equation. The noise represents the random interactions between the fluid and the disordered porous medium, which force fluid flow paths to deviate from the smooth flow paths predicted by the Darcy-scale continuum flow equations. The molecular diffusion of solutes carried by the fluid is governed by the classical advection-diffusion equation, which becomes stochastic due to random advection. The stochastic meso-scale model and deterministic advection-dispersion theory were used to simulate the reactive mixing of two solutions injected in parallel into a flow domain. In the stochastic model, the transport equations were numerically solved using smoothed particle hydrodynamics (SPH), a Lagrangian particle method that has been previously applied to both deterministic and stochastic transport problems. Comparison of the two solutions revealed that the Langevin model gives better estimates of concentrations than the Darcy-scale advection-dispersion model, and that the Darcy-scale model significantly overestimates the amount of product in mixing-induced chemical transformations.
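
    The separation of advective and diffusive mixing via a Langevin velocity equation can be illustrated with a one-dimensional toy sketch; the relaxation time, noise amplitude, and diffusion coefficient below are assumed values, not the authors' parameters, and the SPH discretization is replaced by simple particle tracking.

    ```python
    import numpy as np

    # 1-D sketch of the meso-scale Langevin idea: each particle's velocity
    # relaxes toward the Darcy mean with white-noise kicks (advective mixing),
    # while molecular diffusion adds independent noise to the position.
    rng = np.random.default_rng(0)
    n, steps, dt = 5000, 200, 0.01
    u_mean, tau, sigma = 1.0, 0.1, 0.5   # mean velocity, relaxation time, noise
    Dm = 1e-3                            # molecular diffusion coefficient

    x = np.zeros(n)
    v = np.full(n, u_mean)
    for _ in range(steps):
        # Ornstein-Uhlenbeck velocity: relaxation plus white-noise forcing
        v += -(v - u_mean) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        # position update: random advection plus molecular diffusion
        x += v * dt + np.sqrt(2 * Dm * dt) * rng.standard_normal(n)

    mean_x = x.mean()   # plume centre, ~ u_mean * t
    spread = x.var()    # spreading beyond pure molecular diffusion
    ```

    The variance of the particle cloud exceeds the purely diffusive value 2*Dm*t, which is the dispersive contribution of the correlated random advection.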

  10. Scale Model Simulation of Enhanced Geothermal Reservoir Creation

    Science.gov (United States)

    Gutierrez, M.; Frash, L.; Hampton, J.

    2012-12-01

    diameter may be drilled into the sample while at reservoir conditions. This allows for simulation of borehole damage as well as injector-producer schemes. Dual 70 MPa syringe pumps set to flow rates between 10 nL/min and 60 mL/min injecting into a partially cased borehole allow for fully contained fracturing treatments. A six sensor acoustic emission (AE) array is used for geometric fracture location estimation during intercept borehole drilling operations. Hydraulic sensors and a thermocouple array allow for additional monitoring and data collection as relevant to computer model validation as well as field test comparisons. The results from preliminary tests inside and outside of the cell demonstrate the functionality of the equipment while also providing some novel data on the propagation and flow characteristics of hydraulic fractures themselves.

  11. A test of the Circumplex Model of Marital and Family Systems using the Clinical Rating Scale.

    Science.gov (United States)

    Thomas, V; Ozechowski, T J

    2000-10-01

    Most studies of the Olson Circumplex Model of Marital and Family Systems have utilized a version of the Family Adaptability and Cohesion Evaluation Scales (FACES). Because FACES does not appear to operationalize the curvilinear dimension of the Circumplex Model, researchers have been pessimistic about the model's validity. However, the Clinical Rating Scale (CRS) has received some support as a curvilinear measure of the Circumplex Model. Therefore, we used the CRS rather than FACES to test the validity of the Circumplex Model hypotheses. Using a structural equation-modeling analytical approach, we found support for the hypotheses pertaining to the effects of cohesion and communication on family functioning. However, we found no support for the hypotheses pertaining to the concept of adaptability. We discuss these results in the context of previous studies of the Circumplex Model using FACES. Based on the collective findings, we propose a preliminary reformulation of the Circumplex Model.

  12. Transport upscaling from pore- to Darcy-scale: Incorporating pore-scale Berea sandstone Lagrangian velocity statistics into a Darcy-scale transport CTRW model

    Science.gov (United States)

    Puyguiraud, Alexandre; Dentz, Marco; Gouze, Philippe

    2017-04-01

    For the past several years, considerable attention has been given to pore-scale flow in order to understand and model transport, mixing and reaction in porous media. Nevertheless, we believe that an accurate study of the spatial and temporal evolution of velocities can bring important additional information for the upscaling from the pore scale to higher scales. To gather this information, we perform Stokes flow simulations on pore-scale digitized images of a Berea sandstone core. First, micro-tomography (XRMT) imaging and segmentation processes allow us to obtain 3D black-and-white images of the sample [1]. We then use an OpenFOAM solver to perform the Stokes flow simulations mentioned above, which gives us the velocities at the interfaces of a cubic mesh. Subsequently, we use a particle streamline reconstruction technique based on the Eulerian velocity field previously obtained. This technique, based on a modified Pollock algorithm [2], enables us to perform particle-tracking simulations on the digitized sample. In order to build a stochastic pore-scale transport model, we analyze the Lagrangian velocity series in two different ways. First, we investigate the velocity evolution by sampling isochronically (t-Lagrangian) and by studying its statistical properties in terms of one- and two-point statistics. Intermittent patterns can be observed; these are due to the persistence of low velocities over a characteristic space length. Other results, such as correlation functions and velocity PDFs, allow us to study this persistence in the velocities more deeply and to compute the correlation times. In the second approach, we perform the same analysis in space by sampling the velocities equidistantly; this removes the intermittency seen in the temporal evolution and enables us to model the velocity series as a Markov process, which renders the stochastic particle dynamics into a CTRW [3]. [1] Gjetvaj, F., A. Russian, P. Gouze, and M. Dentz (2015
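
    The spatial-Markov idea described above, where equidistantly sampled velocities form a Markov chain and the travel time over each space step follows as dt = dx/v, can be sketched with a toy three-state CTRW; the velocity classes and transition matrix below are illustrative inventions, not statistics fitted to the Berea sample.

    ```python
    import numpy as np

    # Toy spatial-Markov CTRW: particle velocities are a Markov chain over
    # equidistant space steps dx; the time to cross each step is dx / v,
    # so persistent low velocities produce long waiting times.
    rng = np.random.default_rng(1)
    v_states = np.array([0.01, 0.1, 1.0])   # slow / medium / fast classes (assumed)
    P = np.array([[0.80, 0.15, 0.05],       # transition matrix between velocity
                  [0.10, 0.80, 0.10],       # classes (rows sum to 1, assumed)
                  [0.05, 0.15, 0.80]])
    dx, n_steps, n_particles = 1.0, 100, 1000

    arrival = np.empty(n_particles)
    for k in range(n_particles):
        s, t = 1, 0.0                       # start in the medium class
        for _ in range(n_steps):
            t += dx / v_states[s]           # time to cross this space step
            s = rng.choice(3, p=P[s])       # Markov transition in space
        arrival[k] = t
    ```

    The strong self-transition probabilities encode the velocity correlation over the characteristic length; weakening them makes the arrival-time distribution collapse toward the uncorrelated (pure random-walk) limit.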

  13. The restricted stochastic user equilibrium with threshold model: Large-scale application and parameter testing

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Nielsen, Otto Anker; Watling, David P.

    2017-01-01

    This paper presents the application and calibration of the recently proposed Restricted Stochastic User Equilibrium with Threshold model (RSUET) to a large-scale case study. The RSUET model avoids the limitations of the well-known Stochastic User Equilibrium model (SUE) and the Deterministic User...... equilibrated set of paths which all are within a threshold relative to the cost on the cheapest path and which do not leave any attractive paths unused. Several variants of a generic RSUET solution algorithm are tested and calibrated on a large-scale case network with 18,708 arcs and about 20 million OD-pairs, and comparisons are performed with respect to a previously proposed RSUE model as well as an existing link-based mixed Multinomial Probit (MNP) SUE model. The results show that the RSUET has very attractive computation times for large-scale applications and demonstrate that the threshold addition to the RSUE...

  14. Accomplishments in genome-scale in silico modeling for industrial and medical biotechnology.

    Science.gov (United States)

    Milne, Caroline B; Kim, Pan-Jun; Eddy, James A; Price, Nathan D

    2009-12-01

    Driven by advancements in high-throughput biological technologies and the growing number of sequenced genomes, the construction of in silico models at the genome scale has provided powerful tools to investigate a vast array of biological systems and applications. Here, we comprehensively review the uses of such models in industrial and medical biotechnology, including biofuel generation, food production, and drug development. While the industrial application of in silico models is still in its early stages, significant initial successes have been achieved. For the cases presented here, genome-scale models predict engineering strategies to enhance properties of interest in an organism or to inhibit harmful mechanisms of pathogens. Going forward, genome-scale in silico models promise to extend their application and analysis scope to become a transformative tool in biotechnology.

  15. Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling

    CERN Document Server

    Harouna, S Kadri

    2016-01-01

    We explore the potential of a formulation of the Navier-Stokes equations incorporating a random description of the small-scale velocity component. This model, established from a version of the Reynolds transport theorem adapted to a stochastic representation of the flow, gives rise to a large-scale description of the flow dynamics in which emerges an anisotropic subgrid tensor, reminiscent of the Reynolds stress tensor, together with a drift correction due to an inhomogeneous turbulence. The corresponding subgrid model, which depends on the small-scale velocity variance, generalizes the Boussinesq eddy viscosity assumption. However, it is no longer obtained from an analogy with molecular dissipation but ensues rigorously from the random modeling of the flow. This principle allows us to propose several subgrid models defined directly on the resolved flow component. We assess and compare numerically those models on a standard Taylor-Green vortex flow at Reynolds number 1600. The numerical simulations, carried out w...

  16. Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process

    DEFF Research Database (Denmark)

    Konttinen, Jukka T.; Johnsson, Jan Erik

    1999-01-01

    Regenerable mixed metal oxide sorbents are prime candidates for the removal of hydrogen sulfide from hot gasifier gas in the simplified integrated gasification combined cycle (IGCC) process. As part of the regenerative sulfur removal process development, reactor models are needed for scale-up. Steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process to be studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400...... model that does not account for bed hydrodynamics. The pilot-scale test run results, obtained in the test runs of the sulfur removal process with real coal gasifier gas, have been used for parameter estimation. The validity of the reactor model for commercial-scale design applications is discussed.

  17. Two-measure approach to breaking scale-invariance in a standard-model extension

    Directory of Open Access Journals (Sweden)

    Eduardo I. Guendelman

    2017-02-01

    Full Text Available We introduce Weyl's scale-invariance as an additional global symmetry in the standard model of electroweak interactions. A natural consequence is the introduction of general relativity coupled to scalar fields à la Dirac, which include the Higgs doublet and a singlet σ-field required for implementing global scale-invariance. We introduce a mechanism for 'spontaneous breaking' of scale-invariance by introducing a coupling of the σ-field to a new metric-independent measure Φ defined in terms of four scalars ϕi (i = 1, 2, 3, 4). Global scale-invariance is regained by combining it with internal diffeomorphisms of these four scalars. We show that once the global scale-invariance is broken, the phenomenon (a) generates Newton's gravitational constant GN and (b) triggers spontaneous symmetry breaking in the normal manner, resulting in masses for the conventional fermions and bosons. In the absence of fine-tuning, the scale at which the scale-symmetry breaks can be of order Planck mass. If right-handed neutrinos are also introduced, their absence at present energy scales is attributed to their mass terms tied to the scale where scale-invariance breaks.

  18. Parallelization and High-Performance Computing Enables Automated Statistical Inference of Multi-scale Models.

    Science.gov (United States)

    Jagiella, Nick; Rickert, Dennis; Theis, Fabian J; Hasenauer, Jan

    2017-02-22

    Mechanistic understanding of multi-scale biological processes, such as cell proliferation in a changing biological tissue, is readily facilitated by computational models. While tools exist to construct and simulate multi-scale models, the statistical inference of the unknown model parameters remains an open problem. Here, we present and benchmark a parallel approximate Bayesian computation sequential Monte Carlo (pABC SMC) algorithm, tailored for high-performance computing clusters. pABC SMC is fully automated and returns reliable parameter estimates and confidence intervals. By running the pABC SMC algorithm for ∼10^6 hr, we parameterize multi-scale models that accurately describe quantitative growth curves and histological data obtained in vivo from individual tumor spheroid growth in media droplets. The models capture the hybrid deterministic-stochastic behaviors of 10^5-10^6 cells growing in a 3D dynamically changing nutrient environment. The pABC SMC algorithm reliably converges to a consistent set of parameters. Our study demonstrates a proof of principle for robust, data-driven modeling of multi-scale biological systems and the feasibility of multi-scale model parameterization through statistical inference.
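
    The ABC principle underlying pABC SMC can be illustrated by plain rejection sampling on a toy growth-curve problem; the paper's algorithm adds the sequential Monte Carlo and parallelization layers, which this sketch omits, and the growth model, prior, and tolerance below are made up for illustration.

    ```python
    import numpy as np

    # Toy ABC rejection sampling: draw parameters from the prior, simulate,
    # and keep draws whose simulated data lie within tolerance eps of the
    # observations. No likelihood evaluation is needed.
    rng = np.random.default_rng(42)

    # "Observed" growth curve generated from a hidden exponential growth rate.
    true_rate = 0.3
    t = np.linspace(0.0, 10.0, 20)
    observed = np.exp(true_rate * t)

    def simulate(rate):
        return np.exp(rate * t)

    def distance(sim, obs):
        return np.sqrt(np.mean((sim - obs) ** 2))

    prior_draws = rng.uniform(0.0, 1.0, size=20000)   # uniform prior on the rate
    eps = 1.0                                         # acceptance tolerance
    accepted = np.array([r for r in prior_draws
                         if distance(simulate(r), observed) < eps])
    posterior_mean = accepted.mean()
    ```

    SMC variants replace the single tolerance with a decreasing schedule and reuse accepted particles between rounds, which is what makes the approach tractable for expensive multi-scale simulators.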

  19. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    Science.gov (United States)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ∼1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  20. Oxidation Kinetics and Spallation Model of Oxide Scale during Cooling Process of Low Carbon Microalloyed Steel

    Science.gov (United States)

    Cao, Guangming; Li, Zhifeng; Tang, Junjian; Sun, Xianzhen; Liu, Zhenyu

    2017-09-01

    The spallation behavior of the oxide scale on the surface of a low carbon microalloyed steel (510L) is investigated during the laminar cooling of hot-rolled strip. The surface and cross-section morphology and the phase composition of the oxide scale at different laminar cooling rates are observed by scanning electron microscopy (SEM) and X-ray diffraction (XRD). Moreover, a mathematical spallation model based on an empirical formula is established to predict the critical thickness of the oxide scale, and high-temperature oxidation kinetics tests at temperatures between 500 °C and 900 °C provide the oxidation rate constants for the model calculation. The results of the heat-treatment tests and model calculations reveal that the laminar cooling rate plays an important role in controlling the thickness of the oxide scale and suppressing spallation.
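
    The oxidation-kinetics ingredient of such a model is commonly a parabolic rate law with an Arrhenius rate constant; the sketch below uses this generic textbook form with an illustrative prefactor and activation energy, not the values fitted in the paper.

    ```python
    import numpy as np

    # Parabolic oxidation kinetics: scale thickness grows as x^2 = kp * t,
    # with the parabolic rate constant following an Arrhenius law.
    # kp0 and Q are illustrative, not the paper's fitted parameters.
    R = 8.314                  # gas constant, J/(mol K)
    kp0, Q = 1.0e-4, 180e3     # prefactor (m^2/s) and activation energy (J/mol)

    def rate_constant(T_celsius):
        T = T_celsius + 273.15
        return kp0 * np.exp(-Q / (R * T))

    def scale_thickness(T_celsius, t_seconds):
        return np.sqrt(rate_constant(T_celsius) * t_seconds)

    # One hour of oxidation at the two ends of the tested temperature range:
    x_500 = scale_thickness(500.0, 3600.0)
    x_900 = scale_thickness(900.0, 3600.0)   # far thicker scale at 900 C
    ```

    In a spallation model of this kind, the predicted thickness is compared against a critical thickness; the strong Arrhenius temperature dependence is why the cooling path controls whether the scale exceeds it.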