WorldWideScience

Sample records for scale collective modeling

  1. Collective dynamics of glass-forming polymers at intermediate length scales

    International Nuclear Information System (INIS)

    Colmenero, J.; Alvarez, F.; Arbe, A.

    2015-01-01

Deep understanding of the complex dynamics taking place in glass-forming systems could potentially be gained by exploiting the information provided by the collective response monitored by coherent neutron scattering. We have revisited the question of the characterization of the collective response of polyisobutylene at intermediate length scales observed by neutron spin echo (NSE) experiments. The model, generalized for sub-linear diffusion - as is the case for glass-forming polymers - has been successfully applied by using the information on the total self-motions available from MD simulations properly validated by direct comparison with experimental results. From the fits of the coherent NSE data, the collective time at Q → 0 has been extracted, which agrees very well with compiled results from different experimental techniques directly accessing such relaxation times. We show that a unique temperature dependence governs both the Q → 0 and Q → ∞ asymptotic characteristic times. The generalized model also accounts for the modulation of the apparent activation energy of the collective times with the static structure factor, which mainly results from changes of the short-range order at inter-molecular length scales.

  2. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    2015-02-04

The collective behaviour of groups of social animals has been an active topic of study across many disciplines, and has a long history of modelling. Classical models have been successful in capturing the large-scale patterns formed by animal aggregations, but fare less well in accounting for details, ...

  3. Multi-scale analysis of collective behavior in 2D self-propelled particle models of swarms: An Advection-Diffusion with Memory Approach

    Science.gov (United States)

    Raghib, Michael; Levin, Simon; Kevrekidis, Ioannis

    2010-05-01

Self-propelled particle models (SPPs) are a class of agent-based simulations that have been successfully used to explore questions related to various flavors of collective motion, including flocking, swarming, and milling. These models typically consist of particle configurations, where each particle moves with constant speed, but changes its orientation in response to local averages of the positions and orientations of its neighbors found within some interaction region. These local averages are based on 'social interactions', which include avoidance of collisions, attraction, and polarization, that are designed to generate configurations that move as a single object. Errors made by the individuals in the estimates of the state of the local configuration are modeled as a random rotation of the updated orientation resulting from the social rules. More recently, SPPs have been introduced in the context of collective decision-making, where the main innovation consists of dividing the population into naïve and 'informed' individuals. Whereas naïve individuals follow the classical collective motion rules, members of the informed sub-population update their orientations according to a weighted average of the social rules and a fixed 'preferred' direction, shared by all the informed individuals. Collective decision-making is then understood in terms of the ability of the informed sub-population to steer the whole group along the preferred direction. Summary statistics of collective decision-making are defined in terms of the stochastic properties of the random walk followed by the centroid of the configuration as the particles move about, in particular the scaling behavior of the mean squared displacement (msd). For the region of parameters where the group remains coherent, we note that there are two characteristic time scales: first, there is an anomalous transient shared by both purely naïve and informed configurations, i.e. the scaling exponent lies between 1 and
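To make the update rule concrete, here is a minimal Python sketch of such an SPP model with an informed sub-population; a single alignment interaction stands in for the full avoidance/attraction/polarization rules, and every parameter value is illustrative rather than taken from the study:

```python
import numpy as np

def spp_step(pos, theta, informed, rng, r=1.0, speed=0.1,
             noise=0.2, omega=0.4, preferred=0.0):
    """One update of a minimal 2D SPP model with informed individuals.

    Each particle aligns with the mean heading of neighbours within
    radius r; informed particles blend this social heading with a fixed
    preferred direction (weight omega); Gaussian noise models the
    individuals' estimation errors.
    """
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        d2 = ((pos - pos[i]) ** 2).sum(axis=1)
        nbr = d2 < r ** 2                        # includes the focal particle
        # circular mean of neighbour headings = the 'social' direction
        social = np.arctan2(np.sin(theta[nbr]).mean(),
                            np.cos(theta[nbr]).mean())
        if informed[i]:
            vx = (1 - omega) * np.cos(social) + omega * np.cos(preferred)
            vy = (1 - omega) * np.sin(social) + omega * np.sin(preferred)
            social = np.arctan2(vy, vx)
        new_theta[i] = social + rng.normal(0.0, noise)
    pos = pos + speed * np.column_stack((np.cos(new_theta),
                                         np.sin(new_theta)))
    return pos, new_theta

rng = np.random.default_rng(0)
n = 100
pos = rng.uniform(0, 5.0, size=(n, 2))
theta = rng.uniform(-np.pi, np.pi, size=n)
informed = np.arange(n) < 10                     # 10% informed individuals
centroid = [pos.mean(axis=0)]
for _ in range(1000):
    pos, theta = spp_step(pos, theta, informed, rng)
    centroid.append(pos.mean(axis=0))
# the msd of this centroid track is the summary statistic discussed above
```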

  4. Collective memory in primate conflict implied by temporal scaling collapse.

    Science.gov (United States)

    Lee, Edward D; Daniels, Bryan C; Krakauer, David C; Flack, Jessica C

    2017-09-01

In biological systems, prolonged conflict is costly, whereas contained conflict permits strategic innovation and refinement. Causes of variation in conflict size and duration are not well understood. We use a well-studied primate society model system to study how conflicts grow. We find conflict duration is a 'first to fight' growth process that scales superlinearly with the number of possible pairwise interactions. This is in contrast with a 'first to fail' process that characterizes peaceful durations. Rescaling conflict distributions reveals a universal curve, showing that the typical time scale of correlated interactions exceeds nearly all individual fights. This temporal correlation implies collective memory across pairwise interactions beyond those assumed in standard models of contagion growth or iterated evolutionary games. By accounting for memory, we make quantitative predictions for interventions that mitigate or enhance the spread of conflict. Managing conflict involves balancing the efficient use of limited resources with an intervention strategy that allows for conflict while keeping it contained and controlled. © 2017 The Author(s).

  5. Algebraic formulation of collective models. I. The mass quadrupole collective model

    International Nuclear Information System (INIS)

    Rosensteel, G.; Rowe, D.J.

    1979-01-01

This paper is the first in a series of three which together present a microscopic formulation of the Bohr-Mottelson (BM) collective model of the nucleus. In this article the mass quadrupole collective (MQC) model is defined and shown to be a generalization of the BM model. The MQC model eliminates the small oscillation assumption of BM and also yields the rotational and CM(3) submodels by holonomic constraints on the MQC configuration space. In addition, the MQC model is demonstrated to be an algebraic model, so that the state space of the MQC model carries an irrep of a Lie algebra of microscopic observables, the MQC algebra. An infinite class of new collective models is then given by the various inequivalent irreps of this algebra. A microscopic embedding of the BM model is achieved by decomposing the representation of the MQC algebra on many-particle state space into its irreducible components. In the second paper this decomposition is studied in detail. The third paper presents the symplectic model, which provides the realization of the collective model in the harmonic oscillator shell model.

  6. Leadership solves collective action problems in small-scale societies

    Science.gov (United States)

    Glowacki, Luke; von Rueden, Chris

    2015-01-01

    Observation of leadership in small-scale societies offers unique insights into the evolution of human collective action and the origins of sociopolitical complexity. Using behavioural data from the Tsimane forager-horticulturalists of Bolivia and Nyangatom nomadic pastoralists of Ethiopia, we evaluate the traits of leaders and the contexts in which leadership becomes more institutional. We find that leaders tend to have more capital, in the form of age-related knowledge, body size or social connections. These attributes can reduce the costs leaders incur and increase the efficacy of leadership. Leadership becomes more institutional in domains of collective action, such as resolution of intragroup conflict, where collective action failure threatens group integrity. Together these data support the hypothesis that leadership is an important means by which collective action problems are overcome in small-scale societies. PMID:26503683

  7. Leadership solves collective action problems in small-scale societies.

    Science.gov (United States)

    Glowacki, Luke; von Rueden, Chris

    2015-12-05

    Observation of leadership in small-scale societies offers unique insights into the evolution of human collective action and the origins of sociopolitical complexity. Using behavioural data from the Tsimane forager-horticulturalists of Bolivia and Nyangatom nomadic pastoralists of Ethiopia, we evaluate the traits of leaders and the contexts in which leadership becomes more institutional. We find that leaders tend to have more capital, in the form of age-related knowledge, body size or social connections. These attributes can reduce the costs leaders incur and increase the efficacy of leadership. Leadership becomes more institutional in domains of collective action, such as resolution of intragroup conflict, where collective action failure threatens group integrity. Together these data support the hypothesis that leadership is an important means by which collective action problems are overcome in small-scale societies. © 2015 The Author(s).

  8. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced not just by the current direction of nearby conspecifics, but also by those encountered in the recent past. Given the simplicity of prawns as a study system, our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.
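The paper's analysis is a full Bayesian comparison of spatially explicit models; purely to illustrate the model-selection logic, the toy sketch below scores a Markovian switching hazard (driven by the current fraction of opposing neighbours) against a non-Markovian one (driven by an exponentially weighted history) using BIC on synthetic data. All variable names, functional forms, and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the prawn data: x[t] is the fraction of nearby
# conspecifics moving opposite to a focal prawn; s[t] records whether it
# switched direction. Ground truth uses an exponentially weighted memory.
T = 5000
x = rng.beta(2, 5, size=T)

def smooth(x, lam):
    m = np.zeros_like(x)
    for t in range(1, len(x)):
        m[t] = lam * m[t - 1] + (1 - lam) * x[t]
    return m

s = rng.random(T) < (0.02 + 0.5 * smooth(x, 0.8))

def fit_nll(covariate):
    """Best Bernoulli negative log-likelihood for hazard a + b*covariate,
    over a small grid (a crude stand-in for a full Bayesian treatment)."""
    best = np.inf
    for a in (0.01, 0.02, 0.05):
        for b in (0.1, 0.25, 0.5, 0.75):
            p = np.clip(a + b * covariate, 1e-9, 1 - 1e-9)
            nll = -(np.log(p[s]).sum() + np.log(1 - p[~s]).sum())
            best = min(best, nll)
    return best

n = float(T)
bic_markov = 2 * np.log(n) + 2 * fit_nll(x)                  # k = 2 params
bic_memory = 3 * np.log(n) + 2 * min(                        # k = 3 (adds lam)
    fit_nll(smooth(x, lam)) for lam in np.linspace(0.5, 0.95, 10))
print(bic_markov, bic_memory)   # the lower BIC should favour memory here
```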

  9. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture the observed locality of interactions. Traditional self-propelled particle models fail to capture the fine scale dynamics of the system. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics, while maintaining a biologically plausible perceptual range. We conclude that prawns' movements are influenced not just by the current direction of nearby conspecifics, but also by those encountered in the recent past. Given the simplicity of prawns as a study system, our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  10. Modeling the efficiency of a magnetic needle for collecting magnetic cells

    International Nuclear Information System (INIS)

    Butler, Kimberly S; Lovato, Debbie M; Larson, Richard S; Adolphi, Natalie L; Bryant, H C; Flynn, Edward R

    2014-01-01

As new magnetic nanoparticle-based technologies are developed and new target cells are identified, there is a critical need to understand the features important for magnetic isolation of specific cells in fluids, an increasingly important tool in disease research and diagnosis. To investigate magnetic cell collection, cell-sized spherical microparticles, coated with superparamagnetic nanoparticles, were suspended in (1) glycerine–water solutions, chosen to approximate the range of viscosities of bone marrow, and (2) water in which 3, 5, 10 and 100% of the total suspended microspheres are coated with magnetic nanoparticles, to model collection of rare magnetic nanoparticle-coated cells from a mixture of cells in a fluid. The magnetic microspheres were collected on a magnetic needle, and we demonstrate that the collection efficiency versus time can be modeled using a simple, heuristically-derived function, with three physically-significant parameters. The function enables experimentally-obtained collection efficiencies to be scaled to extract the effective drag of the suspending medium. The results of this analysis demonstrate that the effective drag scales linearly with fluid viscosity, as expected. Surprisingly, increasing the number of non-magnetic microspheres in the suspending fluid increases the collection of magnetic microspheres, corresponding to a decrease in the effective drag of the medium. (paper)
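The abstract does not reproduce the heuristic function itself, so purely as an illustration of the workflow (extracting a time scale from efficiency-versus-time data and using it to rescale curves), here is a sketch with an assumed stretched-exponential form, whose plateau, time scale, and shape parameter stand in for the paper's three physically significant parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def efficiency(t, e_inf, tau, beta):
    """Hypothetical three-parameter saturating collection curve.

    This stretched-exponential form is an illustrative stand-in, not the
    paper's function: a plateau e_inf, a time scale tau, and a shape beta.
    """
    return e_inf * (1.0 - np.exp(-(t / tau) ** beta))

# synthetic "measurements": collection efficiency versus time
t = np.linspace(0.5, 60.0, 40)                    # minutes (assumed units)
rng = np.random.default_rng(2)
data = efficiency(t, 0.9, 12.0, 1.0) + rng.normal(0, 0.02, t.size)

(e_inf, tau, beta), _ = curve_fit(efficiency, t, data, p0=(1.0, 10.0, 1.0))
# tau plays the role of an effective drag time: rescaling t by tau should
# collapse curves measured at different viscosities onto a master curve
print(f"plateau={e_inf:.2f}, tau={tau:.1f} min, beta={beta:.2f}")
```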

  11. Modeling the efficiency of a magnetic needle for collecting magnetic cells

    Science.gov (United States)

    Butler, Kimberly S.; Adolphi, Natalie L.; Bryant, H. C.; Lovato, Debbie M.; Larson, Richard S.; Flynn, Edward R.

    2014-07-01

As new magnetic nanoparticle-based technologies are developed and new target cells are identified, there is a critical need to understand the features important for magnetic isolation of specific cells in fluids, an increasingly important tool in disease research and diagnosis. To investigate magnetic cell collection, cell-sized spherical microparticles, coated with superparamagnetic nanoparticles, were suspended in (1) glycerine-water solutions, chosen to approximate the range of viscosities of bone marrow, and (2) water in which 3, 5, 10 and 100% of the total suspended microspheres are coated with magnetic nanoparticles, to model collection of rare magnetic nanoparticle-coated cells from a mixture of cells in a fluid. The magnetic microspheres were collected on a magnetic needle, and we demonstrate that the collection efficiency versus time can be modeled using a simple, heuristically-derived function, with three physically-significant parameters. The function enables experimentally-obtained collection efficiencies to be scaled to extract the effective drag of the suspending medium. The results of this analysis demonstrate that the effective drag scales linearly with fluid viscosity, as expected. Surprisingly, increasing the number of non-magnetic microspheres in the suspending fluid increases the collection of magnetic microspheres, corresponding to a decrease in the effective drag of the medium.

  12. Innovative Techniques for Large-Scale Collection, Processing, and Storage of Eelgrass (Zostera marina) Seeds

    National Research Council Canada - National Science Library

    Orth, Robert J; Marion, Scott R

    2007-01-01

Although methods for hand-collecting, processing and storing eelgrass seeds have advanced to match the scale of collections, the number of seeds collected has limited the scale of restoration efforts...

  13. Biointerface dynamics--Multi scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in the generation of resistance stress, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) the decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, over a range of time and space scales, which have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing the generation of polymer matrix resistance stress within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Validating a continental-scale groundwater diffuse pollution model using regional datasets.

    Science.gov (United States)

    Ouedraogo, Issoufou; Defourny, Pierre; Vanclooster, Marnik

    2017-12-11

In this study, we assess the validity of an African-scale groundwater pollution model for nitrates. In a previous study, we identified a statistical continental-scale groundwater pollution model for nitrate. The model was identified using a pan-African meta-analysis of available nitrate groundwater pollution studies. The model was implemented in both Random Forest (RF) and multiple regression formats. For both approaches, we collected as predictors a comprehensive GIS database of 13 spatial attributes, related to land use, soil type, hydrogeology, topography, climatology, region typology, nitrogen fertiliser application rate, and population density. In this paper, we validate the continental-scale model of groundwater contamination by using a nitrate measurement dataset from three African countries. We discuss the issues of data availability, data quality, and scale as challenges in validation. Notwithstanding that the modelling procedure exhibited very good success using a continental-scale dataset (e.g. R² = 0.97 in the RF format using a cross-validation approach), the continental-scale model could not be used without recalibration to predict nitrate pollution at the country scale using regional data. In addition, when recalibrating the model using country-scale datasets, the order of model exploratory factors changes. This suggests that the structure and the parameters of a statistical spatially distributed groundwater degradation model for the African continent are strongly scale dependent.

  15. The algebraic collective model

    International Nuclear Information System (INIS)

    Rowe, D.J.; Turner, P.S.

    2005-01-01

A recently proposed computationally tractable version of the Bohr collective model is developed to the extent that we are now justified in describing it as an algebraic collective model. The model has an SU(1,1)×SO(5) algebraic structure and a continuous set of exactly solvable limits. Moreover, it provides bases for mixed symmetry collective model calculations. However, unlike the standard realization of SU(1,1), used for computing beta wave functions and their matrix elements in a spherical basis, the algebraic collective model makes use of an SU(1,1) algebra that generates wave functions appropriate for deformed nuclei with intrinsic quadrupole moments ranging from zero to any large value. A previous paper focused on the SO(5) wave functions, as SO(5) (hyper-)spherical harmonics, and computation of their matrix elements. This paper gives analytical expressions for the beta matrix elements needed in applications of the model and illustrative results to show the remarkable gain in efficiency that is achieved by using such a basis in collective model calculations for deformed nuclei.

  16. Microscopic collective models of nuclei

    International Nuclear Information System (INIS)

    Lovas, Rezsoe

    1985-01-01

The microscopic Rosensteel-Rowe theory of nuclear collective motion is described. The theoretical insufficiency of the usual microscopic foundation of the collective model is pointed out. The new model, treating the degrees of freedom exactly, separates the coordinates describing the collective motion from the internal coordinates in a consistent way. Group-theoretical methods analyzing the symmetry properties of the total Hamiltonian are used to define the collective subspaces transforming as irreducible representations of the group formed by the collective operators. Recent calculations show that although the results of the usual collective model are approximately correct and similar to those of the new microscopic collective model, the underlying philosophy of the old model is essentially erroneous. (D.Gy.)

  17. Mean-cluster approach indicates cell sorting time scales are determined by collective dynamics

    Science.gov (United States)

    Beatrici, Carine P.; de Almeida, Rita M. C.; Brunnet, Leonardo G.

    2017-03-01

Cell migration is essential to cell segregation, playing a central role in tissue formation, wound healing, and tumor evolution. Considering random mixtures of two cell types, it is still not clear which cell characteristics define clustering time scales. The mass of diffusing clusters merging with one another is expected to grow as t^(d/(d+2)) when the diffusion constant scales with the inverse of the cluster mass. Cell segregation experiments deviate from that behavior. Explanations for that could arise from specific microscopic mechanisms or from collective effects, typical of active matter. Here we consider a power law connecting diffusion constant and cluster mass to propose an analytic approach to model cell segregation where we explicitly take into account finite-size corrections. The results are compared with active matter model simulations and experiments available in the literature. To investigate the role played by different mechanisms we considered different hypotheses describing cell-cell interaction: the differential adhesion hypothesis and the different velocities hypothesis. We find that the simulations yield normal diffusion for long time intervals. Analytic and simulation results show that (i) cluster evolution clearly tends to a scaling regime, disrupted only at finite-size limits; (ii) cluster diffusion is greatly enhanced by cell collective behavior, such that for a high enough tendency to follow the neighbors, cluster diffusion may become independent of cluster size; (iii) the scaling exponent for cluster growth depends only on the mass-diffusion relation, not on the detailed local segregation mechanism. These results apply to active matter systems in general and, in particular, the mechanisms found underlying the increase in cell sorting speed certainly have deep implications in biological evolution as a selection mechanism.
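The quoted growth law follows from a standard coarsening argument, sketched here for completeness (with ρ the cluster number density and, as assumed above, D(M) ∝ 1/M):

```latex
\ell \sim \left(\frac{M}{\rho}\right)^{1/d}, \qquad
t_{\text{merge}} \sim \frac{\ell^{2}}{D(M)} \sim M^{2/d}\cdot M = M^{(d+2)/d}
\quad\Longrightarrow\quad M(t) \sim t^{d/(d+2)} .
```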

  18. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will comprise the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 Gas Hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.

  19. Coherent density fluctuation model as a local-scale limit to ATDHF

    International Nuclear Information System (INIS)

    Antonov, A.N.; Petkov, I.Zh.; Stoitsov, M.V.

    1985-04-01

The local scale transformation method is used for the construction of an Adiabatic Time-Dependent Hartree-Fock approach in terms of the local density distribution. The coherent density fluctuation relations of the model result as a particular case when the 'flucton' local density is connected with the plane-wave determinant model function by means of the local-scale coordinate transformation. The collective potential energy expression is obtained and its relation to the nuclear matter energy saturation curve is revealed. (author)

  20. Wyoming greater sage-grouse habitat prioritization: A collection of multi-scale seasonal models and geographic information systems land management tools

    Science.gov (United States)

    O'Donnell, Michael S.; Aldridge, Cameron L.; Doherty, Kevin E.; Fedy, Bradley C.

    2015-01-01

    With rapidly changing landscape conditions within Wyoming and the potential effects of landscape changes on sage-grouse habitat, land managers and conservation planners, among others, need procedures to assess the location and juxtaposition of important habitats, land-cover, and land-use patterns to balance wildlife requirements with multiple human land uses. Biologists frequently develop habitat-selection studies to identify prioritization efforts for species of conservation concern to increase understanding and help guide habitat-conservation efforts. Recently, the authors undertook a large-scale collaborative effort that developed habitat-selection models for Greater Sage-grouse (Centrocercus urophasianus) across large landscapes in Wyoming, USA and for multiple life-stages (nesting, late brood-rearing, and winter). We developed these habitat models using resource selection functions, based upon sage-grouse telemetry data collected for localized studies and within each life-stage. The models allowed us to characterize and spatially predict seasonal sage-grouse habitat use in Wyoming. Due to the quantity of models, the diversity of model predictors (in the form of geographic information system data) produced by analyses, and the variety of potential applications for these data, we present here a resource that complements our published modeling effort, which will further support land managers.

  1. Agri-Environmental Resource Management by Large-Scale Collective Action: Determining Key Success Factors

    Science.gov (United States)

    Uetake, Tetsuya

    2015-01-01

    Purpose: Large-scale collective action is necessary when managing agricultural natural resources such as biodiversity and water quality. This paper determines the key factors to the success of such action. Design/Methodology/Approach: This paper analyses four large-scale collective actions used to manage agri-environmental resources in Canada and…

  2. Scaling analysis and model estimation of solar corona index

    Science.gov (United States)

    Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik

    2018-04-01

A monthly average solar green coronal index time series for the period from January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), has been analysed in this paper from the perspective of scaling analysis and modelling. Smoothing and de-noising have been carried out using a suitable mother wavelet as a prerequisite. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis and a generalized method have been applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) is used to identify an autoregressive (AR) process, and the partial autocorrelation function (PACF) has been used to determine the order of the AR model. Finally, a best-fit model has been proposed using the Yule-Walker method, with supporting results on goodness of fit and the wavelet spectrum. The results reveal an anti-persistent, Short Range Dependent (SRD), self-similar property with signatures of non-causality, non-stationarity and nonlinearity in the data series. The model shows the best fit to the data under observation.
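For readers who want to reproduce the flavour of this analysis, here is a compact sketch of two of the estimators named in the abstract, R/S scaling and a Yule-Walker AR fit; the white-noise series is only a placeholder for the NOAA coronal index, and the windowing choices are illustrative:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def rs_hurst(x, min_win=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent; H < 0.5
    indicates the anti-persistence reported for the coronal index."""
    x = np.asarray(x, float)
    wins, rs = [], []
    w = min_win
    while w <= len(x) // 2:
        vals = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())
            if seg.std() > 0:
                vals.append((dev.max() - dev.min()) / seg.std())
        wins.append(w)
        rs.append(np.mean(vals))
        w *= 2
    return np.polyfit(np.log(wins), np.log(rs), 1)[0]   # slope = H

def yule_walker(x, order):
    """AR(p) coefficients phi from the Yule-Walker equations R phi = r."""
    x = np.asarray(x, float) - np.mean(x)
    acov = np.array([x[: len(x) - k] @ x[k:]
                     for k in range(order + 1)]) / len(x)
    return solve_toeplitz(acov[:order], acov[1 : order + 1])

rng = np.random.default_rng(3)
series = rng.standard_normal(840)      # 70 years of monthly values, stand-in
print(rs_hurst(series), yule_walker(series, order=3))
```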

  3. Open source large-scale high-resolution environmental modelling with GEMS

    NARCIS (Netherlands)

    Baarsma, R.J.; Alberti, K.; Marra, W.A.; Karssenberg, D.J.

    2016-01-01

Many environmental, topographic and climate data sets are freely available at a global scale, creating opportunities to run environmental models for every location on Earth. Collection of the data necessary to do this and the consequent conversion into a useful format is very demanding, however,

  4. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

Full text. A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales ranging from the nano-microscale to the mesoscale is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. Particularly, a multi-scale model is developed merging two scales, the nano-microscale where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length-scale, and the continuum scale where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band

  5. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model are demonstrated. (UK)

  6. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    International Nuclear Information System (INIS)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-01-01

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will

  7. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-04-19

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will

  8. Finite-size scaling a collection of reprints

    CERN Document Server

    1988-01-01

Over the past few years, finite-size scaling has become an increasingly important tool in studies of critical systems. This is partly due to an increased understanding of finite-size effects by analytical means, and partly due to our ability to treat larger systems with large computers. The aim of this volume was to collect those papers which have been important for this progress and which illustrate novel applications of the method. The emphasis has been placed on relatively recent developments, including the use of the ε-expansion and of conformal methods.

  9. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The correlation between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
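A sketch of the two diagnostics this abstract relies on, an empirical semivariogram and an integral (characteristic) length scale; the point data below are synthetic stand-ins for the remotely sensed fields, and the lag binning is illustrative:

```python
import numpy as np

def semivariogram(values, coords, lags, tol):
    """Empirical semivariance gamma(h) = 0.5 * mean[(z_i - z_j)^2] over
    point pairs whose separation distance falls within tol of each lag h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    dz2 = (values[:, None] - values[None, :]) ** 2
    pair = np.triu(np.ones_like(d, dtype=bool), k=1)     # each pair once
    return np.array([0.5 * dz2[pair & (np.abs(d - h) < tol)].mean()
                     for h in lags])

def integral_length(acf, dx):
    """Characteristic length as the integral of the ACF to its first zero."""
    neg = np.flatnonzero(acf < 0)
    stop = neg[0] if neg.size else len(acf)
    return dx * np.trapz(acf[:stop])

# stand-in for a remotely sensed surface-temperature field sampled at points
rng = np.random.default_rng(4)
coords = rng.uniform(0, 100, size=(300, 2))              # metres
values = np.sin(coords[:, 0] / 15.0) + 0.3 * rng.standard_normal(300)
print(semivariogram(values, coords, lags=np.arange(5, 50, 5), tol=2.5))
```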

  10. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The correlation between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.

  11. When is best-worst best? A comparison of best-worst scaling, numeric estimation, and rating scales for collection of semantic norms.

    Science.gov (United States)

    Hollis, Geoff; Westbury, Chris

    2018-02-01

    Large-scale semantic norms have become both prevalent and influential in recent psycholinguistic research. However, little attention has been directed towards understanding the methodological best practices of such norm collection efforts. We compared the quality of semantic norms obtained through rating scales, numeric estimation, and a less commonly used judgment format called best-worst scaling. We found that best-worst scaling usually produces norms with higher predictive validities than other response formats, and does so requiring less data to be collected overall. We also found evidence that the various response formats may be producing qualitatively, rather than just quantitatively, different data. This raises the issue of potential response format bias, which has not been addressed by previous efforts to collect semantic norms, likely because of previous reliance on a single type of response format for a single type of semantic judgment. We have made available software for creating best-worst stimuli and scoring best-worst data. We also made available new norms for age of acquisition, valence, arousal, and concreteness collected using best-worst scaling. These norms include entries for 1,040 words, of which 1,034 are also contained in the ANEW norms (Bradley & Lang, Affective norms for English words (ANEW): Instruction manual and affective ratings (pp. 1-45). Technical report C-1, the center for research in psychophysiology, University of Florida, 1999).
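As a concrete illustration of the scoring step: the simplest best-worst estimator assigns each item the difference between its best and worst counts, normalised by how often it appeared. A minimal sketch (the tuple layout and word list are invented; the authors' released software also supports model-based scores):

```python
from collections import Counter

def best_worst_scores(trials):
    """Score items from best-worst scaling data.

    Each trial is (tuple_of_items_shown, best_item, worst_item).
    The value of an item is (#best - #worst) / #appearances.
    """
    best, worst, seen = Counter(), Counter(), Counter()
    for shown, b, w in trials:
        seen.update(shown)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

trials = [
    (("happy", "table", "war", "idea"), "happy", "war"),
    (("war", "idea", "love", "table"), "love", "war"),
]
print(best_worst_scores(trials))   # crude valence-like scores
```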

  12. Validation of the Polish version of the Collective Self-Esteem Scale

    Directory of Open Access Journals (Sweden)

    Róża Bazińska

    2015-07-01

Full Text Available Background The aim of this article is to present research on the validity and reliability of the Collective Self-Esteem Scale (CSES) for the Polish population. The CSES is a measure of individual differences in collective self-esteem, understood as the global evaluation of one's own social (collective) identity. Participants and procedure Participants from two samples (n = 466 and n = 1,009) completed a paper-and-pencil set of questionnaires which contained the CSES and the Rosenberg Self-Esteem Scale (RSES), and subsets of participants completed scales related to a sense of belonging, well-being and psychological distress (anxiety and depression). Results Like the original version, the Polish version of the CSES comprises 16 items which form the four dimensions of collective self-esteem: Public collective self-esteem, Private collective self-esteem, Membership esteem and Importance of Identity. The results confirm the four-factor structure of the Polish version of the CSES, support the full Polish version of the CSES as well as its subscales, which show satisfactory reliability and stability, and provide initial evidence of construct validity. Conclusions As the results of the study indicate, the Polish version of the CSES is a valid and reliable self-report measure for assessing the global self-esteem derived from membership of a group and has proved to be useful in the Polish context.

  13. Hydrological Modelling of Small Scale Processes in a Wetland Habitat

    DEFF Research Database (Denmark)

    Johansen, Ole; Jensen, Jacob Birk; Pedersen, Morten Lauge

    2009-01-01

Numerical modelling of the hydrology in a Danish rich fen area has been conducted. By collecting various data in the field, the model has been successfully calibrated, and the flow paths as well as the groundwater discharge distribution have been simulated in detail. The results of this work have shown that distributed numerical models can be applied to local-scale problems and that natural springs, ditches, the geological conditions as well as the local topographic variations have a significant influence on the flow paths in the examined rich fen area.

  14. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical process in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: Enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale Offers engineers and designers a new point of view, liberating creative and inno...

  15. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    Science.gov (United States)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.

  16. Large-scale fabrication of bioinspired fibers for directional water collection.

    Science.gov (United States)

    Bai, Hao; Sun, Ruize; Ju, Jie; Yao, Xi; Zheng, Yongmei; Jiang, Lei

    2011-12-16

    Spider-silk inspired functional fibers with periodic spindle-knots and the ability to collect water in a directional manner are fabricated on a large scale using a fluid coating method. The fabrication process is investigated in detail, considering factors like the fiber-drawing velocity, solution viscosity, and surface tension. These bioinspired fibers are inexpensive and durable, which makes it possible to collect water from fog in a similar manner to a spider's web. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. True and apparent scaling: The proximity of the Markov-switching multifractal model to long-range dependence

    Science.gov (United States)

    Liu, Ruipeng; Di Matteo, T.; Lux, Thomas

    2007-09-01

    In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
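A sketch of how such scaling exponents are typically estimated from q-th order structure functions; the synthetic random walk stands in for the asset series, and H(q) is the generalized Hurst exponent satisfying E|x(t+τ) − x(t)|^q ∼ τ^(qH(q)):

```python
import numpy as np

def generalized_hurst(x, q, taus=range(1, 20)):
    """Estimate H(q) from the scaling of the q-th order structure
    function of increments over a range of time lags tau."""
    x = np.asarray(x, float)
    taus = np.asarray(list(taus))
    sq = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
    slope, _ = np.polyfit(np.log(taus), np.log(sq), 1)
    return slope / q

rng = np.random.default_rng(5)
prices = np.cumsum(rng.standard_normal(5000))   # stand-in log-price series
print(generalized_hurst(prices, 1), generalized_hurst(prices, 2))
# For a Brownian walk both are ~0.5; multifractality shows up as
# H(1) != H(2), the kind of signature compared against the MSM model.
```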

  18. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
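To fix ideas, a deliberately naive greedy grouping sketch; the thresholds, tuple layout, and the quadratic scan are illustrative only, and the paper's contribution is precisely to make this kind of grouping scale far beyond such an approach:

```python
import math

def greedy_group(requests, max_detour_m=500, max_wait_s=300, capacity=4):
    """Greedy grouping of cab requests: each request joins the first open
    group whose seed pickup is nearby in space and time, else it starts
    a new group. Requests are (epoch_s, x_m, y_m) tuples."""
    groups = []
    for t, x, y in sorted(requests):
        for g in groups:
            t0, x0, y0 = g[0]
            if (len(g) < capacity and t - t0 <= max_wait_s
                    and math.hypot(x - x0, y - y0) <= max_detour_m):
                g.append((t, x, y))
                break
        else:
            groups.append([(t, x, y)])
    return groups

reqs = [(0, 0, 0), (60, 120, 80), (90, 2500, 0), (400, 30, 10)]
print([len(g) for g in greedy_group(reqs)])      # -> [2, 1, 1]
```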

  19. A structural equation modelling of the academic self-concept scale

    Directory of Open Access Journals (Sweden)

    Musa Matovu

    2014-03-01

Full Text Available The study aimed at validating the academic self-concept scale by Liu and Wang (2005) for measuring academic self-concept among university students. Structural equation modelling was used to validate the scale, which was composed of two subscales: academic confidence and academic effort. The study was conducted on university students, males and females, from different levels of study and faculties. In this study the influence of academic self-concept on academic achievement was assessed, it was tested whether the hypothesised model fitted the data, the invariance of the path coefficients among the moderating variables was analysed, and it was also highlighted whether academic confidence and academic effort measured academic self-concept. The results from the model revealed that academic self-concept influenced academic achievement and the hypothesised model fitted the data. The results also supported the model, as the causal structure was not sensitive to gender, levels of study, and faculties of students; hence, it is applicable to all the groups taken as moderating variables. It was also noted that academic confidence and academic effort are a measure of academic self-concept. According to the results, the academic self-concept scale by Liu and Wang (2005) was deemed adequate for collecting information about academic self-concept among university students.

  20. Collective excitability in a mesoscopic neuronal model of epileptic activity

    Science.gov (United States)

    Jedynak, Maciej; Pons, Antonio J.; Garcia-Ojalvo, Jordi

    2018-01-01

    At the mesoscopic scale, the brain can be understood as a collection of interacting neuronal oscillators, but the extent to which its sustained activity is due to coupling among brain areas is still unclear. Here we address this issue in a simplified situation by examining the effect of coupling between two cortical columns described via Jansen-Rit neural mass models. Our results show that coupling between the two neuronal populations gives rise to stochastic initiations of sustained collective activity, which can be interpreted as epileptic events. For large enough coupling strengths, termination of these events results mainly from the emergence of synchronization between the columns, and thus it is controlled by coupling instead of noise. Stochastic triggering and noise-independent durations are characteristic of excitable dynamics, and thus we interpret our results in terms of collective excitability.
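A minimal simulation sketch of the setup described above: two Jansen-Rit columns with standard parameter values, where each column's sigmoid output is added to the other's excitatory input with strength K. The coupling scheme and noise choice are a simplified reading of the abstract, not the paper's exact equations:

```python
import numpy as np

# Standard Jansen-Rit constants (Jansen & Rit 1995)
A, B, a, b = 3.25, 22.0, 100.0, 50.0      # gains (mV), rate constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56

def S(v):                                  # population sigmoid (1/s)
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

def simulate(K=10.0, T=2.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    y = np.zeros((2, 6))                   # 2 columns x (y0,y1,y2,dy0,dy1,dy2)
    trace = []
    for _ in range(int(T / dt)):
        y0, y1, y2, d0, d1, d2 = y.T
        v = y1 - y2                        # pyramidal membrane potential
        p = rng.uniform(120.0, 320.0, 2)   # extrinsic pulse density (1/s)
        p = p + K * S(v[::-1])             # input from the other column
        dd0 = A * a * S(v) - 2 * a * d0 - a * a * y0
        dd1 = A * a * (p + C2 * S(C1 * y0)) - 2 * a * d1 - a * a * y1
        dd2 = B * b * C4 * S(C3 * y0) - 2 * b * d2 - b * b * y2
        y = y + dt * np.column_stack((d0, d1, d2, dd0, dd1, dd2))
        trace.append(v.copy())
    return np.array(trace)                 # two EEG-like signals

v = simulate()   # larger K promotes the synchronization discussed above
```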

  1. Bridging scales through multiscale modeling: A case study on Protein Kinase A

    Directory of Open Access Journals (Sweden)

    Sophia P Hirakis

    2015-09-01

Full Text Available The goal of multiscale modeling in biology is to use structurally based physico-chemical models to integrate across temporal and spatial scales of biology and thereby improve mechanistic understanding of, for example, how a single mutation can alter organism-scale phenotypes. This approach may also inform therapeutic strategies or identify candidate drug targets that might otherwise have been overlooked. However, in many cases, it remains unclear how best to synthesize information obtained from various scales and analysis approaches, such as atomistic molecular models, Markov state models (MSMs), subcellular network models, and whole-cell models. In this paper, we use protein kinase A (PKA) activation as a case study to explore how computational methods that model different physical scales can complement each other and integrate into an improved multiscale representation of the biological mechanisms. Using measured crystal structures, we show how molecular dynamics (MD) simulations coupled with atomic-scale MSMs can provide conformations for Brownian dynamics (BD) simulations to feed transitional states and kinetic parameters into protein-scale MSMs. We discuss how milestoning can give reaction probabilities and forward-rate constants of cAMP association events by seamlessly integrating MD and BD simulation scales. These rate constants coupled with MSMs provide a robust representation of the free energy landscape, enabling access to kinetic and thermodynamic parameters unavailable from current experimental data. These approaches have helped to illuminate the cooperative nature of PKA activation in response to distinct cAMP binding events. Collectively, this approach exemplifies a general strategy for multiscale model development that is applicable to a wide range of biological problems.
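As a small illustration of the protein-scale MSM layer mentioned here, a toy sketch that estimates a transition matrix from a discretised trajectory and converts its sub-dominant eigenvalues into implied timescales; the three "conformational states" and the 1 ns frame time are hypothetical:

```python
import numpy as np

def transition_matrix(dtraj, n_states, lag=1):
    """Row-normalised transition-count matrix at a given lag time."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def implied_timescales(T, lag, dt):
    """t_k = -lag*dt / ln(lambda_k) for the sub-dominant eigenvalues."""
    lam = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return -lag * dt / np.log(lam[1:])

# toy 3-state trajectory with one slow process (hypothetical conformations)
rng = np.random.default_rng(6)
P_true = np.array([[0.98, 0.02, 0.00],
                   [0.05, 0.90, 0.05],
                   [0.00, 0.02, 0.98]])
states = [0]
for _ in range(20000):
    states.append(rng.choice(3, p=P_true[states[-1]]))
T = transition_matrix(np.array(states), 3)
print(implied_timescales(T, lag=1, dt=1e-9))   # seconds, assuming 1 ns frames
```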

  2. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale ... Hill-Mandel's energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under a generalized plane strain condition. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic ... to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.

  3. Scaling up Ecological Measurements of Coral Reefs Using Semi-Automated Field Image Collection and Analysis

    Directory of Open Access Journals (Sweden)

    Manuel González-Rivero

    2016-01-01

    Ecological measurements in marine settings are often constrained in space and time, with spatial heterogeneity obscuring broader generalisations. While advances in remote sensing, integrative modelling and meta-analysis enable generalisations from field observations, there is an underlying need for high-resolution, standardised and geo-referenced field data. Here, we evaluate a new approach aimed at optimising data collection and analysis to assess broad-scale patterns of coral reef community composition using automatically annotated underwater imagery, captured along 2 km transects. We validate this approach by investigating its ability to detect spatial (e.g., across regions) and temporal (e.g., over years) change, and by comparing automated annotation errors to those of multiple human annotators. Our results indicate that change of coral reef benthos can be captured at high resolution both spatially and temporally, with an average error below 5% among key benthic groups. Cover estimation errors using automated annotation varied between 2% and 12%, slightly larger than human errors (which varied between 1% and 7%), but small enough to detect significant changes among dominant groups. Overall, this approach allows rapid collection of in-situ observations at larger spatial scales (km) than previously possible, and provides a pathway to link, calibrate, and validate broader analyses across even larger spatial scales (10–10,000 km²).

  4. Multi-scale modelling and numerical simulation of electronic kinetic transport

    International Nuclear Information System (INIS)

    Duclous, R.

    2009-11-01

    This research thesis, at the interface between numerical analysis, plasma physics and applied mathematics, deals with the kinetic modelling and numerical simulation of electron energy transport and deposition in laser-produced plasmas, having in view the processes of fuel assembly to the temperature and density conditions necessary to ignite fusion reactions. After a brief review of the processes at play in the collisional kinetic theory of plasmas, with a focus on basic models and methods to implement, couple and validate them, the author focuses on the collective aspect related to the free-streaming electron transport equation in the non-relativistic limit as well as in the relativistic regime. He discusses the numerical development and analysis of the scheme for the Vlasov-Maxwell system, and the selection of a validation procedure and numerical tests. He then investigates more specific aspects of collective transport: multi-species transport subject to phase-space discontinuities. Dealing with the multi-scale physics of electron transport with collision source terms, he validates the accuracy of a fast Monte Carlo multi-grid solver for the Fokker-Planck-Landau electron-electron collision operator. He reports realistic simulations of kinetic electron transport in the frame of the shock ignition scheme, and the development and validation of a reduced electron transport angular model. He finally explores the relative importance of the processes involving electron-electron collisions at high energy by means of a multi-scale reduced model with relativistic Boltzmann terms.
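
    For flavour, a minimal first-order upwind discretization of the free-streaming part of the kinetic equation, df/dt + v df/dx = 0, is sketched below; the thesis develops far more sophisticated high-order Vlasov-Maxwell schemes, so this is only the simplest member of that family, on an invented grid.

    import numpy as np

    def upwind_step(f, v, dx, dt):
        # First-order upwind update of df/dt + v df/dx = 0, periodic in x
        fn = np.empty_like(f)
        for j, vj in enumerate(v):
            if vj >= 0:
                fn[:, j] = f[:, j] - vj * dt / dx * (f[:, j] - np.roll(f[:, j], 1))
            else:
                fn[:, j] = f[:, j] - vj * dt / dx * (np.roll(f[:, j], -1) - f[:, j])
        return fn

    nx, nv = 128, 32
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    v = np.linspace(-1.0, 1.0, nv)
    f = np.exp(-200.0 * (x[:, None] - 0.5) ** 2) * np.exp(-8.0 * v[None, :] ** 2)
    dx = x[1] - x[0]
    dt = 0.5 * dx / np.abs(v).max()      # CFL condition for stability
    for _ in range(200):
        f = upwind_step(f, v, dx, dt)
    print("total density (conserved on the periodic domain):",
          round(f.sum() * dx * (v[1] - v[0]), 6))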

  5. Spatial scale separation in regional climate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Feser, F.

    2005-07-01

    In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea is that global and regional climate models perform best at different spatial scales, so the regional model should not alter the global model's results at large scales. The concept of large-scale nudging, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the individual simulations compared to the standard-approach ensemble, which occasionally shows large differences between realisations. For climate hindcasts this method yields results that are on average closer to observed states than the standard approach. The analysis of regional climate model simulations can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than the analysis of unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a marked increase in similarity to the operational analyses for medium-scale 2 meter …
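
    The idea of separating scales can be illustrated with a sharp spectral cutoff, as in the sketch below; the thesis uses carefully designed digital filters rather than this naive Fourier filter, and the grid spacing and cutoff wavelength are invented.

    import numpy as np

    def scale_separate(field, dx_km, cutoff_km):
        # Split a 2-D field into large- and small-scale parts with a sharp
        # spectral cutoff at the given wavelength (periodic domain assumed)
        ny, nx = field.shape
        ky = np.fft.fftfreq(ny, d=dx_km)[:, None]
        kx = np.fft.fftfreq(nx, d=dx_km)[None, :]
        k = np.hypot(kx, ky)
        spec = np.fft.fft2(field)
        large = np.fft.ifft2(np.where(k <= 1.0 / cutoff_km, spec, 0.0)).real
        return large, field - large

    # e.g. keep wavelengths longer than 500 km as the "large scales" to be nudged
    rng = np.random.default_rng(2)
    field = rng.standard_normal((256, 256))
    large, regional = scale_separate(field, dx_km=25.0, cutoff_km=500.0)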

  6. Economic Well-Being and Poverty Among the Elderly : An Analysis Based on a Collective Consumption Model

    NARCIS (Netherlands)

    Cherchye, L.J.H.; de Rock, B.; Vermeulen, F.M.P.

    2008-01-01

    We apply the collective consumption model of Browning, Chiappori and Lewbel (2006) to analyse economic well-being and poverty among the elderly. The model focuses on individual preferences, a consumption technology that captures the economies of scale of living in a couple, and a sharing rule that …

  7. Modeling Charge Collection in Detector Arrays

    Science.gov (United States)

    Hardage, Donna (Technical Monitor); Pickel, J. C.

    2003-01-01

    A detector array charge collection model has been developed for use as an engineering tool to aid in the design of optical sensor missions for operation in the space radiation environment. This model is an enhancement of the prototype array charge collection model that was developed for the Next Generation Space Telescope (NGST) program. The primary enhancements were accounting for drift-assisted diffusion by Monte Carlo modeling techniques and implementing the modeling approaches in a Windows-based code. The modeling is concerned with integrated charge collection within discrete pixels in the focal plane array (FPA), with high-fidelity spatial resolution. It is applicable to all detector geometries, including monolithic charge-coupled devices (CCDs), active pixel sensors (APS) and hybrid FPA geometries based on a detector array bump-bonded to a readout integrated circuit (ROIC).

  8. Ground-water solute transport modeling using a three-dimensional scaled model

    International Nuclear Information System (INIS)

    Crider, S.S.

    1987-01-01

    Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport

  9. Mathematical modeling of nitrous oxide (N2O) emissions from full-scale wastewater treatment plants.

    Science.gov (United States)

    Ni, Bing-Jie; Ye, Liu; Law, Yingyu; Byers, Craig; Yuan, Zhiguo

    2013-07-16

    Mathematical modeling of N2O emissions is of great importance toward understanding the whole environmental impact of wastewater treatment systems. However, information on modeling N2O emissions from full-scale wastewater treatment plants (WWTPs) is still sparse. In this work, a mathematical model based on currently known or hypothesized metabolic pathways for N2O production by heterotrophic denitrifiers and ammonia-oxidizing bacteria (AOB) is developed and calibrated to describe N2O emissions from full-scale WWTPs. The model described well the dynamic ammonium, nitrite, nitrate, dissolved oxygen (DO) and N2O data collected from both an open oxidation ditch (OD) system with surface aerators and a sequencing batch reactor (SBR) system with bubbling aeration. The obtained kinetic parameters for N2O production are found to be reasonable, as the 95% confidence regions of the estimates are all small, with mean values approximately at the center. The model is further validated with independent data sets collected from the same two WWTPs. This is the first time that mathematical modeling of N2O emissions has been conducted successfully for full-scale WWTPs. While clearly showing that the NH2OH-related pathways could well explain N2O production and emission in the two full-scale plants studied, the modeling results do not prove the dominance of the NH2OH pathways in these plants, nor rule out the possibility of AOB denitrification being a potentially dominating pathway in other WWTPs that are designed or operated differently.

  10. Scaling laws for modeling nuclear reactor systems

    International Nuclear Information System (INIS)

    Nahavandi, A.N.; Castellana, F.S.; Moradkhanian, E.N.

    1979-01-01

    Scale models are used to predict the behavior of nuclear reactor systems during normal and abnormal operation as well as under accident conditions. Three types of scaling procedures are considered: time-reducing, time-preserving volumetric, and time-preserving idealized model/prototype. The necessary relations between the model and the full-scale unit are developed for each scaling type. Based on these relationships, it is shown that scaling procedures can lead to distortion in certain areas that are discussed. It is advised that, depending on the specific unit to be scaled, a suitable procedure be chosen to minimize model-prototype distortion

  11. Modeling a full-scale primary sedimentation tank using artificial neural networks.

    Science.gov (United States)

    Gamal El-Din, A; Smith, D W

    2002-05-01

    Modeling the performance of full-scale primary sedimentation tanks has been commonly done using regression-based models, which are empirical relationships derived strictly from observed daily average influent and effluent data. Another approach to model a sedimentation tank is using a hydraulic efficiency model that utilizes tracer studies to characterize the performance of model sedimentation tanks based on eddy diffusion. However, the use of hydraulic efficiency models to predict the dynamic behavior of a full-scale sedimentation tank is very difficult, as the development of such models has been done using controlled studies of model tanks. In this paper, another type of model, namely the artificial neural network modeling approach, is used to predict the dynamic response of a full-scale primary sedimentation tank. The neural model consists of two separate networks: one uses flow and influent total suspended solids data to predict the effluent total suspended solids from the tank, and the other predicts the effluent chemical oxygen demand using flow and influent chemical oxygen demand as inputs. An extensive sampling program was conducted in order to collect a data set to be used in training and validating the networks. A systematic approach was used in the model-building process, which allowed the identification of a parsimonious neural model that is able to learn (and not memorize) from past data and generalize very well to unseen data that were used to validate the model. The results seem very promising. The potential of using the model as part of a real-time process control system is also discussed.
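
    A sketch of the two-network structure described above, using scikit-learn and synthetic data; the network sizes, units and data-generating relationships are illustrative, not those of the study.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 500
    flow = rng.uniform(100, 400, n)       # influent flow (illustrative units)
    tss_in = rng.uniform(80, 300, n)      # influent total suspended solids
    cod_in = rng.uniform(150, 600, n)     # influent chemical oxygen demand
    tss_out = 0.4 * tss_in + 0.05 * flow + rng.normal(0, 5, n)   # synthetic targets
    cod_out = 0.6 * cod_in + 0.03 * flow + rng.normal(0, 10, n)

    # one network per effluent variable, each fed by flow plus the matching influent
    for X, y, name in [(np.c_[flow, tss_in], tss_out, "TSS"),
                       (np.c_[flow, cod_in], cod_out, "COD")]:
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        net.fit(Xtr, ytr)
        print(name, "validation R^2:", round(net.score(Xte, yte), 3))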

  12. Supporting SME Collecting Organisations: A Business Model Framework for Digital Heritage Collections

    Directory of Open Access Journals (Sweden)

    Darren Peacock

    2009-08-01

    Increasing numbers of heritage collecting organisations such as archives, galleries, libraries and museums are moving towards the provision of digital content and services based on the collections they hold. The collections sector in Australia is characterised by a diverse range of often very small organisations, many of which are struggling with the transition to digital service delivery. One major reason for this struggle is the lack of suitable underlying business models for these organisations as they attempt to achieve a sustainable digital presence. The diverse characteristics of organisations within the collections sector make it difficult, if not impossible, to identify a single business model suitable for all organisations. We argue in this paper that the development of a flexible e-business model framework is a more useful strategy for achieving this goal. This paper presents a preliminary framework based on the literature, utilising the Core + Complement (C+) Business Model Framework for Content Providers initially developed by Krueger et al. (2003), and outlines how the framework will be refined and investigated empirically in future research within the Australian collections sector.

  13. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  14. Application of soil venting at a large scale: A data and modeling analysis

    Energy Technology Data Exchange (ETDEWEB)

    Walton, J.C.; Baca, R.G.; Sisson, J.B.; Wood, T.R.

    1990-02-27

    Soil venting will be applied at a demonstration scale to a site at the Idaho National Engineering Laboratory which is contaminated with carbon tetrachloride and other organic vapors. The application of soil venting at the site is unique in several aspects, including scale, geology, and data collection. The contaminated portion of the site has a surface area of over 47,000 square meters (12 acres), and the depth to the water table is approximately 180 meters. Migration of contaminants through the entire depth of the vadose zone is evidenced by measured levels of chlorinated solvents in the underlying aquifer. The geology of the site consists of a series of layered basalt flows interspersed with sedimentary interbeds. The depth of the vadose zone, the nature of the fractured basalt flows, and the degree of contamination all tend to make drilling difficult and expensive. Because of the scale of the site, the extent of contamination, and the expense of drilling, a computer model has been developed to simulate the migration of the chlorinated solvents during plume growth and cleanup. The demonstration soil venting operation has been designed to collect pressure-drop and plume migration data to assist with calibration of the transport model. The model will then be used to help design a cost-effective system for site cleanup that minimizes the drilling required. This paper discusses the mathematical models that have been developed to estimate the growth and eventual cleanup of the site. 12 refs., 4 figs.

  15. Original article Validation of the Polish version of the Collective Self-Esteem Scale

    OpenAIRE

    Róża Bazińska

    2015-01-01

    Background: The aim of this article is to present research on the validity and reliability of the Collective Self-Esteem Scale (CSES) for the Polish population. The CSES is a measure of individual differences in collective self-esteem, understood as the global evaluation of one's own social (collective) identity. Participants and procedure: Participants from two samples (n = 466 and n = 1,009) completed a paper-and-pencil set of questionnaires which contained the CSES and the Ro...

  16. Predictive Modelling to Identify Near-Shore, Fine-Scale Seabird Distributions during the Breeding Season.

    Science.gov (United States)

    Warwick-Evans, Victoria C; Atkinson, Philip W; Robinson, Leonie A; Green, Jonathan A

    2016-01-01

    During the breeding season seabirds are constrained to coastal areas and are restricted in their movements, spending much of their time in near-shore waters either loafing or foraging. However, in using these areas they may be threatened by anthropogenic activities such as fishing, watersports and coastal developments, including marine renewable energy installations. Although many studies describe large-scale interactions between seabirds and the environment, the drivers behind near-shore, fine-scale distributions are not well understood. For example, Alderney is an important breeding ground for many species of seabird and has a diversity of human uses of the marine environment, thus providing an ideal location to investigate the near-shore, fine-scale interactions between seabirds and the environment. We used vantage point observations of seabird distribution, collected during the 2013 breeding season, to identify and quantify some of the environmental variables affecting the near-shore, fine-scale distribution of seabirds in Alderney's coastal waters. We validate the models with observation data collected in 2014 and show that water depth, distance to the intertidal zone, and distance to the nearest seabird nest are key predictors of the distribution of Alderney's seabirds. AUC values for each species suggest that these models perform well, although the model for shags performed better than those for auks and gulls. While further unexplained localised variation in the environmental conditions will undoubtedly affect the fine-scale distribution of seabirds in near-shore waters, we demonstrate the potential of this approach in marine planning and decision making.
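
    The flavour of such a model can be sketched as a logistic presence/absence regression on the three key predictors, scored by AUC; the data below are synthetic and the study's actual model family is not reproduced here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    n = 1000
    depth = rng.uniform(0, 50, n)            # water depth (m)
    d_intertidal = rng.uniform(0, 2000, n)   # distance to intertidal zone (m)
    d_nest = rng.uniform(0, 5000, n)         # distance to nearest nest (m)
    X = np.c_[depth, d_intertidal, d_nest]
    # synthetic presence probability decreasing with all three predictors
    logit = 2.0 - 0.05 * depth - 0.001 * d_intertidal - 0.0005 * d_nest
    present = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression().fit(X, present)
    print("AUC:", round(roc_auc_score(present, model.predict_proba(X)[:, 1]), 3))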

  17. Predictive Modelling to Identify Near-Shore, Fine-Scale Seabird Distributions during the Breeding Season.

    Directory of Open Access Journals (Sweden)

    Victoria C Warwick-Evans

    During the breeding season seabirds are constrained to coastal areas and are restricted in their movements, spending much of their time in near-shore waters either loafing or foraging. However, in using these areas they may be threatened by anthropogenic activities such as fishing, watersports and coastal developments, including marine renewable energy installations. Although many studies describe large-scale interactions between seabirds and the environment, the drivers behind near-shore, fine-scale distributions are not well understood. For example, Alderney is an important breeding ground for many species of seabird and has a diversity of human uses of the marine environment, thus providing an ideal location to investigate the near-shore, fine-scale interactions between seabirds and the environment. We used vantage point observations of seabird distribution, collected during the 2013 breeding season, to identify and quantify some of the environmental variables affecting the near-shore, fine-scale distribution of seabirds in Alderney's coastal waters. We validate the models with observation data collected in 2014 and show that water depth, distance to the intertidal zone, and distance to the nearest seabird nest are key predictors of the distribution of Alderney's seabirds. AUC values for each species suggest that these models perform well, although the model for shags performed better than those for auks and gulls. While further unexplained localised variation in the environmental conditions will undoubtedly affect the fine-scale distribution of seabirds in near-shore waters, we demonstrate the potential of this approach in marine planning and decision making.

  18. Adaptive-network models of collective dynamics

    Science.gov (United States)

    Zschaler, G.

    2012-09-01

    Complex systems can often be modelled as networks, in which their basic units are represented by abstract nodes and the interactions among them by abstract links. This network of interactions is the key to understanding emergent collective phenomena in such systems. In most cases, it is an adaptive network, which is defined by a feedback loop between the local dynamics of the individual units and the dynamical changes of the network structure itself. This feedback loop gives rise to many novel phenomena. Adaptive networks are a promising concept for the investigation of collective phenomena in different systems. However, they also present a challenge to existing modelling approaches and analytical descriptions due to the tight coupling between local and topological degrees of freedom. In this work, which is essentially my PhD thesis, I present a simple rule-based framework for the investigation of adaptive networks, with which a wide range of collective phenomena can be modelled and analysed from a common perspective. In this framework, a microscopic model is defined by the local interaction rules of small network motifs, which can be implemented straightforwardly in stochastic simulations. Moreover, an approximate emergent-level description in terms of macroscopic variables can be derived from the microscopic rules, which we use to analyse the system's collective and long-term behaviour by applying tools from dynamical systems theory. We discuss three adaptive-network models for different collective phenomena within our common framework. First, we propose a novel approach to collective motion in insect swarms, in which we consider the insects' adaptive interaction network instead of explicitly tracking their positions and velocities. We capture the experimentally observed onset of collective motion qualitatively in terms of a bifurcation in this non-spatial model. We find that three-body interactions are an essential ingredient for collective motion to emerge.
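
    A classic rule-based adaptive-network model in the spirit described above is the adaptive voter model, sketched below: at each step an endpoint of a disagreeing ("active") link either rewires to a like-minded node or adopts its neighbour's state. The rewiring probability phi and the graph are illustrative choices, not taken from the thesis.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(5)
    G = nx.erdos_renyi_graph(200, 0.04, seed=5)
    state = {v: int(rng.integers(2)) for v in G}   # two opinions, 0 / 1
    phi = 0.3                                      # rewiring probability (assumed)

    for _ in range(20_000):
        u = int(rng.integers(len(G)))
        nbrs = list(G[u])
        if not nbrs:
            continue
        w = nbrs[int(rng.integers(len(nbrs)))]
        if state[u] == state[w]:
            continue                     # only 'active' (disagreeing) links evolve
        if rng.random() < phi:           # rewire to a like-minded non-neighbour
            candidates = [v for v in G if state[v] == state[u]
                          and v != u and not G.has_edge(u, v)]
            if candidates:
                G.remove_edge(u, w)
                G.add_edge(u, candidates[int(rng.integers(len(candidates)))])
        else:                            # or adopt the neighbour's opinion
            state[u] = state[w]

    active = sum(state[i] != state[j] for i, j in G.edges)
    print("remaining active links:", active)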

  19. One-scale supersymmetric inflationary models

    International Nuclear Information System (INIS)

    Bertolami, O.; Ross, G.G.

    1986-01-01

    The reheating phase is studied in a class of supergravity inflationary models involving a two-component hidden sector in which the scale of supersymmetry breaking and the scale generating inflation are related. It is shown that these models have an "entropy crisis" in which there is a large entropy release after nucleosynthesis, leading to unacceptably low nuclear abundances. (orig.)

  20. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate-dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.

  1. A statistical method for model extraction and model selection applied to the temperature scaling of the L–H transition

    International Nuclear Information System (INIS)

    Peluso, E; Gelfusa, M; Gaudio, P; Murari, A

    2014-01-01

    Access to the H mode of confinement in tokamaks is characterized by an abrupt transition, which has been the subject of continuous investigation for decades. Various theoretical models have been developed and multi-machine databases of experimental data have been collected. In this paper, a new methodology is reviewed for the investigation of the scaling laws for the temperature threshold to access the H mode. The approach is based on symbolic regression via genetic programming and allows first the extraction of the most statistically reliable models from the available experimental data. Nonlinear fitting is then applied to the mathematical expressions found by symbolic regression; this second step permits an easy comparison of the quality of the data-driven scalings with the most widely accepted theoretical models. The application of a complete set of statistical indicators shows that the data-driven scaling laws are qualitatively better than the theoretical models. The main limitation of the theoretical models is that they are all expressed as power laws, which are too rigid to fit the available experimental data and to extrapolate to ITER. The proposed method is absolutely general and can be applied to the extraction of scaling laws from any experimental database of sufficient statistical relevance. (paper)
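
    The second step of the methodology, nonlinear fitting followed by statistical model comparison, can be sketched as below, where a rigid power law competes against a more flexible candidate on synthetic threshold data; the functional forms, data and use of AIC are illustrative, not the paper's actual indicators.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(X, c, a, b):
        n, B = X
        return c * n ** a * B ** b

    def saturating(X, c, a, b, s):           # a non-power-law alternative
        n, B = X
        return c * n ** a * B ** b / (1 + n / s)

    rng = np.random.default_rng(6)
    n_e = rng.uniform(1, 10, 80)             # density-like variable (arbitrary units)
    B_t = rng.uniform(1, 5, 80)              # field-like variable
    T_thr = 0.5 * n_e ** 0.7 * B_t ** 0.9 / (1 + n_e / 8) * rng.lognormal(0, 0.05, 80)

    for f, k in [(power_law, 3), (saturating, 4)]:
        p, _ = curve_fit(f, (n_e, B_t), T_thr, p0=[1.0] * k, maxfev=20000)
        rss = np.sum((T_thr - f((n_e, B_t), *p)) ** 2)
        aic = len(T_thr) * np.log(rss / len(T_thr)) + 2 * k
        print(f.__name__, "AIC:", round(aic, 1))   # lower AIC = preferred model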

  2. Modelling collective cell migration of neural crest.

    Science.gov (United States)

    Szabó, András; Mayor, Roberto

    2016-10-01

    Collective cell migration has emerged in the recent decade as an important phenomenon in cell and developmental biology and can be defined as the coordinated and cooperative movement of groups of cells. Most studies concentrate on tightly connected epithelial tissues, even though collective migration does not require a constant physical contact. Movement of mesenchymal cells is more independent, making their emergent collective behaviour less intuitive and therefore lending importance to computational modelling. Here we focus on such modelling efforts that aim to understand the collective migration of neural crest cells, a mesenchymal embryonic population that migrates large distances as a group during early vertebrate development. By comparing different models of neural crest migration, we emphasize the similarity and complementary nature of these approaches and suggest a future direction for the field. The principles derived from neural crest modelling could aid understanding the collective migration of other mesenchymal cell types. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Drift-Scale THC Seepage Model

    International Nuclear Information System (INIS)

    C.R. Bryan

    2005-01-01

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC submodel uses a drift-scale

  4. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Debt Collection Models and Their Using in Practice

    Directory of Open Access Journals (Sweden)

    Anna Wodyńska

    2007-01-01

    An important element of a company's credit policy is its attitude to collecting due receivables. A company tries to establish rules for collecting the said amounts within the standards applied in the company. Depending on the organizational structure of a company, its scope of activity, common practices and, in particular, the credit policy assumed by the company, enterprises use internal, external or mixed debt collection models. The internal debt collection model assumes conducting debt collection activities within the organizational structure of the creditor company. External debt collection consists of ordering debt collection activities from a specialised company that handles debt service (outsourcing), which involves acting on behalf and account of the ordering party, but it may also consist of receivables trading. The choice of a proper debt collection model is not easy due to, among other things, the high costs of the process and the expert knowledge required; moreover, the products offered on the market, although they seem similar, differ substantially from one another. Regardless of the debt collection model, it should be remembered that debt collection shall be run in a manner that consolidates the entrepreneur's good name and market position. The debt collection procedure binding in a company should serve to work out a cooperation model with clients that is based on buyers' reliability.

  6. Modeling and simulation of blood collection systems.

    Science.gov (United States)

    Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier

    2012-03-01

    This paper addresses the modeling and simulation of blood collection systems in France for both fixed-site and mobile blood collection, with walk-in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe the different blood collection processes, donor behaviors, their material/human resource requirements and relevant regulations. The Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times and resource capacity. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk-in donor arrival patterns, appropriate human resource planning and donor appointment strategies.
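
    The paper's Petri net models can be realized in any simulation language; the sketch below uses SimPy as one convenient choice for a fixed-site, walk-in flow with an interview stage and donation beds. Arrival and service rates are invented.

    import simpy
    import numpy as np

    rng = np.random.default_rng(7)
    wait_times = []

    def donor(env, nurses, beds):
        arrive = env.now
        with nurses.request() as req:        # medical interview
            yield req
            yield env.timeout(rng.exponential(10))
        with beds.request() as req:          # whole-blood donation
            yield req
            wait_times.append(env.now - arrive)   # arrival to start of donation
            yield env.timeout(rng.exponential(15))

    def arrivals(env, nurses, beds):
        while True:
            yield env.timeout(rng.exponential(5))   # walk-in inter-arrival (min)
            env.process(donor(env, nurses, beds))

    env = simpy.Environment()
    nurses = simpy.Resource(env, capacity=2)
    beds = simpy.Resource(env, capacity=4)
    env.process(arrivals(env, nurses, beds))
    env.run(until=8 * 60)                    # one 8-hour collection session
    print("mean time to donation start (min):", round(np.mean(wait_times), 1))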

  7. Authentication and Interpretation of Weight Data Collected from Accountability Scales at Global Nuclear Fuels

    International Nuclear Information System (INIS)

    Fitzgerald, Peter; Laughter, Mark D.; Martyn, Rose; Richardson, Dave; Rowe, Nathan C.; Pickett, Chris A.; Younkin, James R.; Shephard, Adam M.

    2010-01-01

    Accountability scale data from the Global Nuclear Fuels (GNF) fuel fabrication facility in Wilmington, NC has been collected and analyzed as a part of the Cylinder Accountability and Tracking System (CATS) field trial in 2009. The purpose of the data collection was to demonstrate an authentication method for safeguards applications, and the use of load cell data in cylinder accountability. The scale data was acquired using a commercial off-the-shelf communication server with authentication and encryption capabilities. The authenticated weight data was then analyzed to determine facility operating activities. The data allowed for the determination of the number of full and empty cylinders weighed and the respective weights along with other operational activities. Data authentication concepts, practices and methods, the details of the GNF weight data authentication implementation and scale data interpretation results will be presented.
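
    The authentication idea can be shown in miniature: each weight reading travels with a keyed message authentication code that the receiving side verifies. The sketch below uses HMAC-SHA256; the message format, key handling and names are illustrative, not GNF's actual implementation.

    import hmac, hashlib, json

    SHARED_KEY = b"replace-with-provisioned-secret"   # illustrative key handling

    def sign_reading(timestamp, scale_id, kg):
        msg = json.dumps({"t": timestamp, "scale": scale_id, "kg": kg},
                         sort_keys=True).encode()
        tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
        return msg, tag

    def verify_reading(msg, tag):
        expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)   # constant-time comparison

    msg, tag = sign_reading("2009-06-01T12:00:00Z", "ACC-1", 2277.4)
    assert verify_reading(msg, tag)          # tampered data would fail this check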

  8. Scale modelling in LMFBR safety

    International Nuclear Information System (INIS)

    Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.

    1979-01-01

    This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratios of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of model material is one with the same density and stress-strain relationship as the prototype at the operating temperature. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain-rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used, to assure that heat transfer between high-temperature loading sources and the coolant simulant and non-equilibrium effects in two-phase sources are accounted for. For the geometry and flow conditions of interest, the …

  9. Global scale groundwater flow model

    Science.gov (United States)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. The aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g., Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated the calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.
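
    A toy version of the steady-state head computation that a code such as MODFLOW performs: a five-point finite-difference Laplace solve with fixed-head boundaries and uniform recharge, iterated with Jacobi sweeps. The grid, properties and units are invented, and the cell spacing is folded into the source term for brevity.

    import numpy as np

    def solve_heads(n, recharge, transmissivity, h_boundary, tol=1e-6):
        # Jacobi iteration for T * laplacian(h) + R = 0 on an n-by-n grid
        # with Dirichlet (fixed-head) boundaries; unit cell spacing assumed.
        h = np.full((n, n), h_boundary, dtype=float)
        src = recharge / transmissivity
        while True:
            h_new = h.copy()
            h_new[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                        h[1:-1, :-2] + h[1:-1, 2:] + src)
            if np.abs(h_new - h).max() < tol:
                return h_new
            h = h_new

    heads = solve_heads(n=50, recharge=1e-4, transmissivity=1e-2, h_boundary=100.0)
    print("max mound above boundary head:", round(heads.max() - 100.0, 2))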

  10. Collective Influence of Multiple Spreaders Evaluated by Tracing Real Information Flow in Large-Scale Social Networks.

    Science.gov (United States)

    Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A

    2016-10-26

    Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called "Collective Influence (CI)" has been put forward for collective influence maximization. In contrast to heuristic methods that evaluate nodes' significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct "virtual" information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce a larger scale of information propagation. Moreover, local measures such as the number of connections or citations are not necessarily deterministic factors of a node's importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community.
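
    The CI heuristic itself is compact: CI_l(i) = (k_i - 1) * sum of (k_j - 1) over the frontier of the ball of radius l around node i, with spreaders chosen adaptively by repeatedly removing the current top node. The sketch below follows that definition on a synthetic scale-free graph; the graph and parameters are illustrative.

    import networkx as nx

    def collective_influence(G, node, ell=2):
        # CI_l(i) = (k_i - 1) * sum over nodes at distance exactly l of (k_j - 1)
        k = G.degree(node)
        if k <= 1:
            return 0
        frontier = nx.descendants_at_distance(G, node, ell)
        return (k - 1) * sum(G.degree(j) - 1 for j in frontier)

    def top_spreaders(G, n_spreaders, ell=2):
        G = G.copy()
        chosen = []
        for _ in range(n_spreaders):     # adaptive: recompute after each removal
            best = max(G.nodes, key=lambda v: collective_influence(G, v, ell))
            chosen.append(best)
            G.remove_node(best)
        return chosen

    G = nx.barabasi_albert_graph(1000, 3, seed=8)
    print(top_spreaders(G, n_spreaders=5))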

  11. On scaling of human body models

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2007-10-01

    There is no single standard human body: individuals differ in anthropometry and mechanical characteristics, so dividing the human population into categories such as the 5th-, 50th- and 95th-percentile is, from the application point of view, not enough. On the other hand, developing a dedicated human body model for every individual is not possible. That is why scaling and morphing algorithms have started to be developed. The current work describes the development of a tool for scaling human models. The idea is to have one (or a couple of) standard model(s) as a base and to create other models from these basic models. One has to choose adequate anthropometric and biomechanical parameters that describe the given group of humans across which the models are to be scaled and morphed.

  12. Strange star candidates revised within a quark model with chiral mass scaling

    Institute of Scientific and Technical Information of China (English)

    Ang Li; Guang-Xiong Peng; Ju-Fu Lu

    2011-01-01

    We calculate the properties of static strange stars using a quark model with chiral mass scaling. The results are characterized by a large maximum mass (~ 1.6 M⊙) and radius (~ 10 km). Together with a broad collection of modern neutron star models, we discuss some recent astrophysical observational data that could shed new light on the possible presence of strange quark matter in compact stars. We conclude that none of the present astrophysical observations can prove or confute the existence of strange stars.

  13. Drift-Scale THC Seepage Model

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Bryan

    2005-02-17

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral

  14. Locust Collective Motion and Its Modeling.

    Directory of Open Access Journals (Sweden)

    Gil Ariel

    2015-12-01

    Over the past decade, technological advances in experimental and animal tracking techniques have motivated a renewed theoretical interest in animal collective motion and, in particular, locust swarming. This review offers a comprehensive biological background followed by a comparative analysis of recent models of locust collective motion, in particular locust marching, their settings, and underlying assumptions. We describe a wide range of recent modeling and simulation approaches, from discrete agent-based models of self-propelled particles to continuous models of integro-differential equations, aimed at describing and analyzing the fascinating phenomenon of locust collective motion. These modeling efforts have a dual role: the first views locusts as a quintessential example of animal collective motion; as such, these models aim at abstraction and coarse-graining, often utilizing the tools of statistical physics. The second, which originates from a more biological perspective, views locust swarming as a scientific problem of its own exceptional merit. The main goal should, thus, be the analysis and prediction of natural swarm dynamics. We discuss the properties of swarm dynamics using the tools of statistical physics, as well as the implications for laboratory experiments and natural swarms. Finally, we stress the importance of a combined interdisciplinary, biological-theoretical effort in successfully confronting the challenges that locusts pose at both the theoretical and practical levels.
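
    One of the classic discrete models reviewed in this literature is the one-dimensional self-propelled particle model of Czirok et al., in which locust-like agents on a ring relax toward the local mean velocity plus noise. The sketch below implements that model with illustrative parameters and reports a global alignment order parameter.

    import numpy as np

    rng = np.random.default_rng(9)
    N, L, R = 100, 50.0, 1.0             # agents, ring length, interaction radius
    eta, dt = 0.4, 0.1                   # noise amplitude, time step
    x = rng.uniform(0, L, N)
    u = rng.choice([-1.0, 1.0], N)       # initial velocities

    def G(v):
        # Propulsion term pushing speed toward +/-1 (Czirok et al. 1999 form)
        return (v + np.sign(v)) / 2

    for _ in range(2000):
        dx = np.abs(x[:, None] - x[None, :])
        near = np.minimum(dx, L - dx) < R        # neighbours on the periodic ring
        u_local = (near * u[None, :]).sum(1) / near.sum(1)
        u = G(u_local) + eta * (rng.random(N) - 0.5)
        x = (x + u * dt) % L

    order = np.abs(u.mean()) / np.abs(u).mean()  # 1 = coherent march, 0 = disorder
    print("alignment order parameter:", round(order, 2))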

  15. Modeling collective cell migration in geometric confinement

    Science.gov (United States)

    Tarle, Victoria; Gauquelin, Estelle; Vedula, S. R. K.; D'Alessandro, Joseph; Lim, C. T.; Ladoux, Benoit; Gov, Nir S.

    2017-06-01

    Monolayer expansion has generated great interest as a model system to study collective cell migration. During such an expansion the culture front often develops ‘fingers’, which we have recently modeled using a proposed feedback between the curvature of the monolayer’s leading edge and the outward motility of the edge cells. We show that this model is able to explain the puzzling observed increase of collective cellular migration speed of a monolayer expanding into thin stripes, as well as describe the behavior within different confining geometries that were recently observed in experiments. These comparisons give support to the model and emphasize the role played by the edge cells and the edge shape during collective cell motion.

  16. Downscaling modelling system for multi-scale air quality forecasting

    Science.gov (United States)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations generally considers only a small part of the urban area in a micro-meteorological model, while urban heterogeneities outside the modelling domain affect the micro-scale processes. Therefore, it is important to build a chain of models of different scales, with nesting of higher-resolution models into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. First is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions: for the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on building-effects parameterisation. Modern methods of computational fluid dynamics allow environmental problems connected with atmospheric transport of pollutants within the urban canopy to be solved in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. a k-ε linear eddy-viscosity model, a k-ε non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding mass-conserving interpolation. For the boundaries a …

  17. Toward Multi-scale Modeling and simulation of conduction in heterogeneous materials

    Energy Technology Data Exchange (ETDEWEB)

    Lechman, Jeremy B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Bolintineanu, Dan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Cooper, Marcia A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Erikson, William W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Foiles, Stephen M. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Kay, Jeffrey J [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Phinney, Leslie M. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Piekos, Edward S. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Specht, Paul Elliott [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Wixom, Ryan R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Yarrington, Cole [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    This report summarizes a project in which the authors sought to develop and deploy: (i) experimental techniques to elucidate the complex, multiscale nature of thermal transport in particle-based materials; and (ii) modeling approaches to address current challenges in predicting performance variability of materials (e.g., identifying and characterizing physical- chemical processes and their couplings across multiple length and time scales, modeling information transfer between scales, and statically and dynamically resolving material structure and its evolution during manufacturing and device performance). Experimentally, several capabilities were successfully advanced. As discussed in Chapter 2 a flash diffusivity capability for measuring homogeneous thermal conductivity of pyrotechnic powders (and beyond) was advanced; leading to enhanced characterization of pyrotechnic materials and properties impacting component development. Chapter 4 describes success for the first time, although preliminary, in resolving thermal fields at speeds and spatial scales relevant to energetic components. Chapter 7 summarizes the first ever (as far as the authors know) application of TDTR to actual pyrotechnic materials. This is the first attempt to actually characterize these materials at the interfacial scale. On the modeling side, new capabilities in image processing of experimental microstructures and direct numerical simulation on complicated structures were advanced (see Chapters 3 and 5). In addition, modeling work described in Chapter 8 led to improved prediction of interface thermal conductance from first principles calculations. Toward the second point, for a model system of packed particles, significant headway was made in implementing numerical algorithms and collecting data to justify the approach in terms of highlighting the phenomena at play and pointing the way forward in developing and informing the kind of modeling approach originally envisioned (see Chapter 6). In

  18. Multi-scale Modeling of Arctic Clouds

    Science.gov (United States)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large scales. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  19. Constructing Multidatabase Collections Using Extended ODMG Object Model

    Directory of Open Access Journals (Sweden)

    Adrian Skehill; Mark Roantree

    1999-11-01

    Collections are an important feature in database systems. They provide the ability to group objects of interest together, and then to manipulate them in the required fashion. The OASIS project is focused on the construction of a multidatabase prototype which uses the ODMG model as a canonical model. As part of this work we have extended the base model to provide a more powerful collection mechanism, and to permit the construction of a federated collection, a collection of heterogeneous objects taken from distributed data sources.

  20. Design of scaled down structural models

    Science.gov (United States)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.

  1. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Experience in operating and developing a large-scale computerized system shows that the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, and then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large-scale models and data bases.

  2. Collective action and technology development: up-scaling of innovation in rice farming communities in Northern Thailand

    NARCIS (Netherlands)

    Limnirankul, B.

    2007-01-01

    Keywords: small-scale rice farmers, collective action, community rice seed, local innovations, green manure crop, contract farming, participatory technology development, up-scaling, technological configuration, grid-group theory

  3. Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    G. Zyvoloski

    2003-01-01

    The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure (AP)-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'', MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County wells lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, a process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca Mountain using FEHM V 2.20 are being developed.

  4. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  5. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    International Nuclear Information System (INIS)

    Y.S. Wu

    2005-01-01

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas

  6. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  7. Collective Influence of Multiple Spreaders Evaluated by Tracing Real Information Flow in Large-Scale Social Networks

    Science.gov (United States)

    Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A.

    2016-01-01

    Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called “Collective Influence (CI)” has been put forward through collective influence maximization. In contrast to heuristic methods that evaluate nodes’ significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI was developed for the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct “virtual” information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce larger-scale information propagation. Moreover, local measures such as the number of connections or citations are not necessarily the decisive factors of nodes’ importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community. PMID:27782207
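
    The CI score itself is compact to state: CI_l(i) = (k_i - 1) multiplied by the sum of (k_j - 1) over the nodes j at distance exactly l from i, with spreaders picked greedily and recomputed after each removal. A small networkx sketch of this definition (ours, not the authors' optimized near-linear-time implementation):

      import networkx as nx

      def collective_influence(G, node, l=2):
          degree = dict(G.degree())
          dist = dict(nx.single_source_shortest_path_length(G, node, cutoff=l))
          sphere = [j for j, d in dist.items() if d == l]   # boundary of the ball of radius l
          return (degree[node] - 1) * sum(degree[j] - 1 for j in sphere)

      def top_spreaders(G, n=10, l=2):
          G = G.copy()
          chosen = []
          for _ in range(n):
              best = max(G.nodes, key=lambda v: collective_influence(G, v, l))
              chosen.append(best)
              G.remove_node(best)          # adaptive: recompute CI after each removal
          return chosen

      print(top_spreaders(nx.barabasi_albert_graph(1000, 3), n=5))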

  8. Critical analysis of algebraic collective models

    International Nuclear Information System (INIS)

    Moshinsky, M.

    1986-01-01

    By algebraic collective models the author understands all those based on specific Lie algebras, whether the latter are suggested by simple shell-model considerations, as in the case of the Interacting Boson Approximation (IBA), or have a detailed microscopic foundation, like the symplectic model. To analyze these models critically, it is convenient to take a simple conceptual example of them in which all steps can be implemented analytically or through elementary numerical analysis. In this note the symplectic model in a two-dimensional space, i.e. based on an sp(4,R) Lie algebra, is taken as an example, and it is shown how its complete discussion gives a clearer understanding of the structure of algebraic collective models of nuclei. In particular, the association of Hamiltonians related to maximal subalgebras of the basic Lie algebra with specific types of spectra is discussed, as well as the connections between spectra and shapes.

  9. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

    This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required, due both to the nature of the materials and to the phenomena involved under irradiation. We will then present the multiple facets of the multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  10. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what ...

  11. Evaluation of a plot-scale methane emission model using eddy covariance observations and footprint modelling

    Directory of Open Access Journals (Sweden)

    A. Budishchev

    2014-09-01

    Full Text Available Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from a low spatial representativeness and a poor temporal resolution. Also, during a chamber-flux measurement the air within a chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on the area-weighted averages of modelled fluxes, and by comparing those to the eddy covariance (EC) flux. This technique is rather inaccurate, as the area of upscaling might be different from the EC tower footprint, thereby introducing a significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r2 = 0.7). In contrast, using the area-weighted average method yielded a low (r2 = 0.14) correlation with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.
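
    In code, the difference between the two upscaling choices is a one-liner. A synthetic numpy illustration on invented data (not the study's code or footprint model):

      import numpy as np

      rng = np.random.default_rng(1)
      flux = rng.gamma(2.0, 10.0, size=(100, 100))   # modelled plot-scale CH4 fluxes per pixel
      y, x = np.ogrid[:100, :100]
      fp = np.exp(-((y - 30) ** 2 + (x - 50) ** 2) / 200.0)
      fp /= fp.sum()                                 # normalised EC footprint weights

      area_weighted = flux.mean()                    # ignores where the tower "looks"
      footprint_weighted = (flux * fp).sum()         # weights fluxes by the EC source area
      print(area_weighted, footprint_weighted)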

  12. A perspective on bridging scales and design of models using low-dimensional manifolds and data-driven model inference

    KAUST Repository

    Tegner, Jesper; Zenil, Hector; Kiani, Narsis A.; Ball, Gordon; Gomez-Cabrero, David

    2016-01-01

    Systems in nature capable of collective behaviour are nonlinear, operating across several scales. Yet our ability to account for their collective dynamics differs in physics, chemistry and biology. Here, we briefly review the similarities and differences between mathematical modelling of adaptive living systems versus physico-chemical systems. We find that physics-based chemistry modelling and computational neuroscience have a shared interest in developing techniques for model reductions aiming at the identification of a reduced subsystem or slow manifold, capturing the effective dynamics. By contrast, as relations and kinetics between biological molecules are less characterized, current quantitative analysis under the umbrella of bioinformatics focuses on signal extraction, correlation, regression and machine-learning analysis. We argue that model reduction analysis and the ensuing identification of manifolds bridges physics and biology. Furthermore, modelling living systems presents deep challenges as how to reconcile rich molecular data with inherent modelling uncertainties (formalism, variables selection and model parameters). We anticipate a new generative data-driven modelling paradigm constrained by identified governing principles extracted from low-dimensional manifold analysis. The rise of a new generation of models will ultimately connect biology to quantitative mechanistic descriptions, thereby setting the stage for investigating the character of the model language and principles driving living systems.

  13. A perspective on bridging scales and design of models using low-dimensional manifolds and data-driven model inference

    KAUST Repository

    Tegner, Jesper

    2016-10-04

    Systems in nature capable of collective behaviour are nonlinear, operating across several scales. Yet our ability to account for their collective dynamics differs in physics, chemistry and biology. Here, we briefly review the similarities and differences between mathematical modelling of adaptive living systems versus physico-chemical systems. We find that physics-based chemistry modelling and computational neuroscience have a shared interest in developing techniques for model reductions aiming at the identification of a reduced subsystem or slow manifold, capturing the effective dynamics. By contrast, as relations and kinetics between biological molecules are less characterized, current quantitative analysis under the umbrella of bioinformatics focuses on signal extraction, correlation, regression and machine-learning analysis. We argue that model reduction analysis and the ensuing identification of manifolds bridges physics and biology. Furthermore, modelling living systems presents deep challenges as how to reconcile rich molecular data with inherent modelling uncertainties (formalism, variables selection and model parameters). We anticipate a new generative data-driven modelling paradigm constrained by identified governing principles extracted from low-dimensional manifold analysis. The rise of a new generation of models will ultimately connect biology to quantitative mechanistic descriptions, thereby setting the stage for investigating the character of the model language and principles driving living systems.

  14. On the random cascading model study of anomalous scaling in multiparticle production with continuously diminishing scale

    International Nuclear Information System (INIS)

    Liu Lianshou; Zhang Yang; Wu Yuanfang

    1996-01-01

    The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently used have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which gives good scaling properties also for a continuously varying scale. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)
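
    For illustration, a toy cascade of this general kind can be simulated in a few lines (our sketch; the binary alpha-model weight choice is an assumption, not the authors' model): cascade weights define bin probabilities, particles are distributed multinomially, and the scaled factorial moments F_q = <n(n-1)...(n-q+1)>/<n>^q rise as the cell size shrinks, which is the anomalous-scaling signature.

      import numpy as np

      rng = np.random.default_rng(2)

      def cascade_probs(levels, alpha=1.5):
          """Binary multiplicative cascade: each bin splits in two, the
          halves receiving weights alpha/2 or (2 - alpha)/2 at random."""
          p = np.ones(1)
          for _ in range(levels):
              w = rng.choice([alpha, 2.0 - alpha], size=2 * p.size)
              p = np.repeat(p, 2) * w / 2.0
          return p / p.sum()

      def factorial_moment(counts, q):
          n = counts.astype(float)
          prod = np.ones_like(n)
          for k in range(q):
              prod *= np.clip(n - k, 0.0, None)
          return prod.mean() / n.mean() ** q

      counts = rng.multinomial(20_000, cascade_probs(levels=10))   # 1024 elementary bins
      for m in (64, 256, 1024):                 # coarse cells -> fine cells
          cells = counts.reshape(m, -1).sum(axis=1)
          print(m, factorial_moment(cells, q=2))   # F_2 grows as the cells shrink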

  15. Analysis of effectiveness of possible queuing models at gas stations using the large-scale queuing theory

    Directory of Open Access Journals (Sweden)

    Slaviša M. Ilić

    2011-10-01

    Full Text Available This paper analyzes the effectiveness of possible models for queuing at gas stations, using a mathematical model of large-scale queuing theory. Based on actual data collected and a statistical analysis of the expected intensity of vehicle arrivals and queuing at gas stations, mathematical modeling of the real queuing process was carried out and certain parameters were quantified, identifying the weaknesses of the existing models and the possible benefits of an automated queuing model.
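
    As a hedged sketch of the kind of computation involved (the paper's actual data and model choices are not reproduced, and the rates below are invented), the steady-state M/M/c formulas give the waiting probability (Erlang C) and the mean wait for a candidate pump configuration:

      from math import factorial

      def mmc_metrics(lam, mu, c):
          """lam: arrival rate, mu: service rate per pump, c: number of pumps."""
          rho = lam / (c * mu)
          assert rho < 1, "queue is unstable"
          a = lam / mu
          p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                      + a**c / (factorial(c) * (1 - rho)))
          wait_prob = a**c / (factorial(c) * (1 - rho)) * p0   # Erlang C formula
          wq = wait_prob / (c * mu - lam)                      # mean wait in the queue
          return wait_prob, wq

      # e.g. 1.5 arrivals/min, 2 min mean service time, 4 pumps
      print(mmc_metrics(lam=1.5, mu=0.5, c=4))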

  16. Multi-scale modeling for sustainable chemical production

    DEFF Research Database (Denmark)

    Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus

    2013-01-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process ...

  17. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  18. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Full Text Available Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  19. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at the macro-scale to simulate fracture, and a gradient-enhanced damage model is used at the meso-scale to simulate ...

  20. The generalized collective model

    International Nuclear Information System (INIS)

    Troltenier, D.

    1992-07-01

    In this thesis a new approach, based on the finite-element method, to the solution of the collective Schroedinger equation in the framework of the Generalized Collective Model was presented. The numerically attainable accuracy is illustrated by comparison with analytically known solutions by means of numerous examples. Furthermore, the potential-energy surfaces of the 182-196 Hg, 242-248 Cm, and 242-246 Pu isotopes were determined by fitting the parameters of the Gneuss-Greiner potential to the experimental data. The Hg isotopes exhibit a shape coexistence of nearly spherical and oblate deformations, while the Cm and Pu isotopes possess an essentially constant prolate deformation. By means of the pseudo-symplectic model the potential-energy surfaces of 24 Mg, 190 Pt, and 238 U were calculated microscopically. Using a deformation-independent kinetic energy, the collective excitation spectra and the electric properties (B(E2), B(E4) values, quadrupole moments) of these nuclei were calculated and compared with experiment. Finally, an analytic relation between the (g R -Z/A) value and the quadrupole moment was derived. The study of the experimental data of the 166-170 Er isotopes shows, within the measurement accuracy, sufficient agreement with this relation. Furthermore, this relation makes it possible to determine the effective magnetic dipole moment without free parameters. (orig./HSI) [de]

  1. Magnetic hysteresis at the domain scale of a multi-scale material model for magneto-elastic behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Vanoost, D., E-mail: dries.vanoost@kuleuven-kulak.be [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); Steentjes, S. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany); Peuteman, J. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Electrical Energy and Computer Architecture, Heverlee B-3001 (Belgium); Gielen, G. [KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); De Gersem, H. [KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); TU Darmstadt, Institut für Theorie Elektromagnetischer Felder, Darmstadt D-64289 (Germany); Pissoort, D. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); Hameyer, K. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany)

    2016-09-15

    This paper proposes a multi-scale energy-based material model for poly-crystalline materials. Describing the behaviour of poly-crystalline materials at three spatial scales of dominating physical mechanisms allows accounting for the heterogeneity and multi-axiality of the material behaviour. The three spatial scales are the poly-crystalline, grain and domain scale. Together with appropriate scale transition rules and models for local magnetic behaviour at each scale, the model is able to describe the magneto-elastic behaviour (magnetostriction and hysteresis) at the macroscale, although the data input is merely based on a set of physical constants. Introducing a new energy density function that describes the demagnetisation field, the anhysteretic multi-scale energy-based material model is extended to the hysteretic case. The hysteresis behaviour is included at the domain scale according to the micro-magnetic domain theory while preserving a valid description for the magneto-elastic coupling. The model is verified using existing measurement data for different mechanical stress levels. - Highlights: • A ferromagnetic hysteretic energy-based multi-scale material model is proposed. • The hysteresis is obtained from a newly proposed hysteresis energy density function. • Tedious parameter identification is avoided.

  2. Nonpointlike-parton model with asymptotic scaling and with scaling violation at moderate Q2 values

    International Nuclear Information System (INIS)

    Chen, C.K.

    1981-01-01

    A nonpointlike-parton model is formulated on the basis of the assumption of energy-independent total cross sections of partons and the current-algebra sum rules. No specific strong-interaction Lagrangian density is introduced in this approach. This model predicts asymptotic scaling for the inelastic structure functions of nucleons on the one hand and scaling violation at moderate Q2 values on the other hand. The predicted scaling-violation patterns at moderate Q2 values are consistent with the observed scaling-violation patterns. A numerical fit of F2 functions is performed in order to demonstrate that the predicted scaling-violation patterns of this model at moderate Q2 values fit the data, and to see how the predicted asymptotic scaling behavior sets in at various x values. Explicit analytic forms of F2 functions are obtained from this numerical fit, and are compared in detail with the analytic forms of F2 functions obtained from the numerical fit of the quantum-chromodynamics (QCD) parton model. This comparison shows that this nonpointlike-parton model fits the data better than the QCD parton model, especially at large and small x values. Nachtman moments are computed from the F2 functions of this model and are shown to agree with data well. It is also shown that the two-dimensional plot of the logarithm of a nonsinglet moment versus the logarithm of another such moment is not a good way to distinguish this nonpointlike-parton model from the QCD parton model.

  3. Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.

    Science.gov (United States)

    Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric

    2016-04-01

    SMAP sensors collect moisture information in top soil at the spatial resolution of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications on human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near real time VIC simulation is intended to provide a source of large scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.

  4. Comments on intermediate-scale models

    International Nuclear Information System (INIS)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-01-01

    Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory. (orig.)

  5. Health Belief Model Scale for Human Papilloma Virus and its Vaccination: Adaptation and Psychometric Testing.

    Science.gov (United States)

    Guvenc, Gulten; Seven, Memnun; Akyuz, Aygul

    2016-06-01

    To adapt and psychometrically test the Health Belief Model Scale for Human Papilloma Virus (HPV) and Its Vaccination (HBMS-HPVV) for use in a Turkish population, and to assess the Human Papilloma Virus Knowledge score (HPV-KS) among female college students. Instrument adaptation and psychometric testing study. The sample consisted of 302 nursing students at a nursing school in Turkey between April and May 2013. Questionnaire-based data were collected from the participants. Information regarding the HBMS-HPVV, HPV knowledge and descriptive characteristics of participants was collected using the translated HBMS-HPVV and the HPV-KS. Test-retest reliability was evaluated, Cronbach α was used to assess internal consistency reliability, and exploratory factor analysis was used to assess construct validity of the HBMS-HPVV. The scale consists of 4 subscales that measure 4 constructs of the Health Belief Model covering the perceived susceptibility and severity of HPV and the benefits and barriers. The final 14-item scale had satisfactory validity and internal consistency. Cronbach α values for the 4 subscales ranged from 0.71 to 0.78. Total HPV-KS ranged from 0 to 8 (scale range, 0-10; 3.80 ± 2.12). The HBMS-HPVV is a valid and reliable instrument for measuring young Turkish women's beliefs and attitudes about HPV and its vaccination. Copyright © 2015 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
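
    The internal-consistency figures quoted (Cronbach α between 0.71 and 0.78) follow from the standard formula α = k/(k-1) · (1 - Σσ_i²/σ_total²). A minimal sketch with invented item scores (not the study's data):

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, n_items) matrix of item scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      rng = np.random.default_rng(3)
      latent = rng.normal(size=(302, 1))            # sample size borrowed from the study
      responses = latent + rng.normal(scale=1.0, size=(302, 4))
      print(cronbach_alpha(responses))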

  6. The collective model of nuclei and its applications

    International Nuclear Information System (INIS)

    Frank H, A.; Castanos G, O.H.

    1975-01-01

    The concepts of collective coordinates and the construction of collective Hamiltonians, either through the liquid-drop model or through symmetry arguments, together with the operators in these variables, are discussed in this study. The passage from the laboratory system to the principal-axis system is discussed thoroughly, together with the symmetries produced by this transformation, considering a drop in two dimensions. It is also observed that deformed nuclei have some properties that can be described through the rotation-vibration and asymmetric-rotor models. The rotation-vibration model concerns nuclei with axially symmetric deformations in the ground state, and its importance is due to the fact that it can predict the nuclear spectrum at low energies. The asymmetric-rotor model assumes the existence of triaxial nuclei and considers their collective movements. This model can be modified by taking into consideration that β vibrations can also appear. Finally, the two models are compared with each other and with the experiment. (author)

  7. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall data at fine time scales, varying from daily down to a 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from the daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
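
    The adjusting step of such schemes can be illustrated with the simple proportional rule (a toy Python sketch only; HyetosMinute itself is an R package and implements more refined adjusting procedures): synthetic fine-scale depths are rescaled so that they aggregate exactly to the observed coarse total.

      import numpy as np

      def proportional_adjust(fine, coarse_total):
          """Rescale synthetic fine-scale depths so they sum to the
          observed coarser-scale total."""
          s = fine.sum()
          if s == 0:
              return fine
          return fine * (coarse_total / s)

      synthetic_5min = np.random.default_rng(4).exponential(0.4, size=288)  # 288 five-minute steps in a day
      daily_obs = 12.3                                                      # observed daily depth, mm
      adjusted = proportional_adjust(synthetic_5min, daily_obs)
      print(adjusted.sum())                                                 # 12.3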

  8. Emergent collective decision-making: Control, model and behavior

    Science.gov (United States)

    Shen, Tian

    In this dissertation we study emergent collective decision-making in social groups with time-varying interactions and heterogeneously informed individuals. First we analyze a nonlinear dynamical systems model motivated by animal collective motion with heterogeneously informed subpopulations, to examine the role of uninformed individuals. We find through formal analysis that adding uninformed individuals in a group increases the likelihood of a collective decision. Secondly, we propose a model for human shared decision-making with continuous-time feedback and where individuals have little information about the true preferences of other group members. We study model equilibria using bifurcation analysis to understand how the model predicts decisions based on the critical threshold parameters that represent an individual's tradeoff between social and environmental influences. Thirdly, we analyze continuous-time data of pairs of human subjects performing an experimental shared tracking task using our second proposed model in order to understand transient behavior and the decision-making process. We fit the model to data and show that it reproduces a wide range of human behaviors surprisingly well, suggesting that the model may have captured the mechanisms of observed behaviors. Finally, we study human behavior from a game-theoretic perspective by modeling the aforementioned tracking task as a repeated game with incomplete information. We show that the majority of the players are able to converge to playing Nash equilibrium strategies. We then suggest with simulations that the mean field evolution of strategies in the population resemble replicator dynamics, indicating that the individual strategies may be myopic. Decisions form the basis of control and problems involving deciding collectively between alternatives are ubiquitous in nature and in engineering. Understanding how multi-agent systems make decisions among alternatives also provides insight for designing

  9. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    Science.gov (United States)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks-over-threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks-over-threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales, when the extremes are modelled with the GPD, is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in the scaling exponent is quantified. A quantile-based modification of the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on the uncertainty in scaled parameters and return levels of shorter durations.
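
    A minimal peaks-over-threshold fit of the kind underlying the study, sketched with scipy on invented data (the study's Bayesian estimation and duration-dependent thresholds are not reproduced here):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      rain = rng.gamma(0.2, 8.0, size=20_000)        # synthetic daily totals, mm
      u = np.quantile(rain[rain > 0], 0.95)          # threshold from a quantile of wet days
      excess = rain[rain > u] - u

      shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)
      # quantile of the fitted excess distribution added back to the threshold
      # (a simplification that ignores the exceedance-rate parameter)
      print(u + stats.genpareto.ppf(1 - 1 / 100, shape, loc=0.0, scale=scale))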

  10. TDA's validity to study 18O collectivity in terms of collective pair model

    International Nuclear Information System (INIS)

    Gao Yuanyi; Vitturi, A.; Catara, F.; Sambataro, M.

    1991-01-01

    It is shown that if the 18 O collective spectra are calculated in terms of the Collective Pair Model, the positive-parity low-lying levels of 18 O, which are of particle-particle pair character, are obtained independently of hole excitations within the closed shell. The low-lying 1 - levels are of non-collective 3-particle-1-hole character; the fourth 1 - level is of collective 3-particle-1-hole character; the low-lying 3 - levels are of collective 3-particle-1-hole character. The low-lying 1 - and 3 - levels agree very well with the experimental data. Hence the TDA is sufficient for the calculation of the low-lying collective 1 - and 3 - levels of 18 O.

  11. Collective synchronization of self/non-self discrimination in T cell activation, across multiple spatio-temporal scales

    Science.gov (United States)

    Altan-Bonnet, Gregoire

    The immune system is a collection of cells whose function is to eradicate pathogenic infections and malignant tumors while protecting healthy tissues. Recent work has delineated key molecular and cellular mechanisms associated with the ability to discriminate self from non-self agents. For example, structural studies have quantified the biophysical characteristics of antigenic molecules (those prone to trigger lymphocyte activation and a subsequent immune response). However, such molecular mechanisms were found to be highly unreliable at the individual cellular level. We will present recent efforts to build experimentally validated computational models of immune responses at the collective cell level. Such models have become critical to delineate how higher-level integration through nonlinear amplification in signal transduction, dynamic feedback in lymphocyte differentiation and cell-to-cell communication allows the immune system to enforce reliable self/non-self discrimination at the organism level. In particular, we will present recent results demonstrating how T cells tune their antigen discrimination according to cytokine cues, and how competition for cytokines within polyclonal populations of cells shapes the repertoire of responding clones. Additionally, we will present recent theoretical and experimental results demonstrating how competition between diffusion and consumption of cytokines determines the range of cell-cell communication within lymphoid organs. Finally, we will discuss how biochemically explicit models, combined with quantitative experimental validation, unravel the relevance of new feedbacks for immune regulation across multiple spatial and temporal scales.

  12. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is proposed. First the SDG model is built and qualitative trends are added to the model. Then complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness is proved by carrying out validation for a reactor model.
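
    A toy of the positive-inference step on a signed directed graph (node names and edges below are invented): a qualitative trend (+1/-1) imposed at a source propagates along edge signs to predict the trends every other variable should show, and these expected trends can then be checked against the simulation outputs.

      import networkx as nx

      G = nx.DiGraph()
      G.add_edge("coolant_flow", "reactor_T", sign=-1)   # more flow -> lower temperature
      G.add_edge("reactor_T", "pressure", sign=+1)
      G.add_edge("pressure", "relief_valve", sign=+1)

      def propagate(G, source, trend):
          """Breadth-first propagation of a qualitative trend (+1 or -1)."""
          trends = {source: trend}
          for u, v in nx.bfs_edges(G, source):
              trends[v] = trends[u] * G[u][v]["sign"]
          return trends

      print(propagate(G, "coolant_flow", +1))
      # {'coolant_flow': 1, 'reactor_T': -1, 'pressure': -1, 'relief_valve': -1}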

  13. Quadrupole collective dynamics from energy density functionals: Collective Hamiltonian and the interacting boson model

    International Nuclear Information System (INIS)

    Nomura, K.; Vretenar, D.; Niksic, T.; Otsuka, T.; Shimizu, N.

    2011-01-01

    Microscopic energy density functionals have become a standard tool for nuclear structure calculations, providing an accurate global description of nuclear ground states and collective excitations. For spectroscopic applications, this framework has to be extended to account for collective correlations related to restoration of symmetries broken by the static mean field, and for fluctuations of collective variables. In this paper, we compare two approaches to five-dimensional quadrupole dynamics: the collective Hamiltonian for quadrupole vibrations and rotations and the interacting boson model (IBM). The two models are compared in a study of the evolution of nonaxial shapes in Pt isotopes. Starting from the binding energy surfaces of 192,194,196 Pt, calculated with a microscopic energy density functional, we analyze the resulting low-energy collective spectra obtained from the collective Hamiltonian, and the corresponding IBM Hamiltonian. The calculated excitation spectra and transition probabilities for the ground-state bands and the γ-vibration bands are compared to the corresponding sequences of experimental states.

  14. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    FéRal, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, JoëL.; Cornet, FréDéRic; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
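
    The large-scale step lends itself to a compact sketch (ours, not the authors' code; the paper's covariance is anisotropic and calibrated on ARAMIS radar data, whereas this toy is isotropic): a correlated Gaussian field is thresholded so that the raining fraction matches a prescribed occupation rate.

      import numpy as np

      rng = np.random.default_rng(6)
      white = rng.standard_normal((256, 256))

      # correlate the field by low-pass filtering in Fourier space
      kx = np.fft.fftfreq(256)[:, None]
      ky = np.fft.fftfreq(256)[None, :]
      gauss = np.real(np.fft.ifft2(np.fft.fft2(white) * np.exp(-(kx**2 + ky**2) * 400.0)))

      occupation = 0.3                          # prescribed raining fraction of the area
      thresh = np.quantile(gauss, 1 - occupation)
      rain_mask = gauss > thresh                # binary large-scale raining locations
      print(rain_mask.mean())                   # ~0.3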

  15. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    Science.gov (United States)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model (the logical next step in model development), has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced ...

  16. Mob control models of threshold collective behavior

    CERN Document Server

    Breer, Vladimir V; Rogatkin, Andrey D

    2017-01-01

    This book presents mathematical models of mob control with threshold (conformity) collective decision-making of the agents. Based on the results of analysis of the interconnection between the micro- and macromodels of active network structures, it considers the static (deterministic, stochastic and game-theoretic) and dynamic (discrete- and continuous-time) models of mob control, and highlights models of informational confrontation. Many of the results are applicable not only to mob control problems, but also to control problems arising in social groups, online social networks, etc. Aimed at researchers and practitioners, it is also a valuable resource for undergraduate and postgraduate students as well as doctoral candidates specializing in the field of collective behavior modeling.
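
    The threshold micromodel at the heart of such books can be stated in a few lines (our sketch, with an invented threshold distribution): agent i becomes active once the active fraction exceeds its personal threshold theta_i, and iterating the best response x <- F(x), with F the threshold distribution function, finds the equilibria. Small and large initial "sparks" can land on very different collective outcomes.

      import numpy as np

      def equilibrium_active_fraction(thresholds, x0, iters=100):
          x = x0
          for _ in range(iters):
              x = np.mean(thresholds <= x)    # best-response (conformity) dynamics
          return x

      rng = np.random.default_rng(7)
      thetas = np.clip(rng.normal(0.5, 0.15, size=10_000), 0.0, 1.0)
      print(equilibrium_active_fraction(thetas, x0=0.2))   # collapses to almost no activity
      print(equilibrium_active_fraction(thetas, x0=0.8))   # cascades to near-complete activity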

  17. Transdisciplinary application of the cross-scale resilience model

    Science.gov (United States)

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  18. Comments on intermediate-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-04-23

    Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory.

  19. Development of inspection data collection and evaluation system for large scale MOX fuel fabrication plant safeguards (3)

    International Nuclear Information System (INIS)

    Kumakura, Shinichi; Masuda, Shoichiro; Iso, Shoko; Hisamatsu, Yoshinori; Kurobe, Hiroko; Nakajima, Shinji

    2015-01-01

    The Inspection Data Collection and Evaluation System is a system that stores inspection data and operator declaration data collected from the various measurement equipment installed in the fuel fabrication processes of a large-scale MOX fuel fabrication plant, and that performs safeguards evaluation based on Near Real Time Accountancy (NRTA) using these data. The Nuclear Material Control Center developed a simulator of the fuel fabrication process, the in-process material inventory/flow data and the measurement data, and the adequacy of, and impact on, the uncertainty of the material balance has been reviewed using the simulation results, such as the facility operation and the operational status. Following the 34th INMM Japan chapter presentation, a model close to the real nuclear material accountancy during the fuel fabrication process was simulated, and the nuclear material accountancy and its uncertainty (sigma MUF) have been reviewed. Some findings have been obtained, for example regarding evaluation-related indicators for verification under a more realistic accountancy which could be applied by the operator. (author)

  20. Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications

    DEFF Research Database (Denmark)

    Liu, Yubao; Warner, Tom; Liu, Yuewei

    2011-01-01

    This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center for Atmospheric Research ...

  1. Establishing a coherent and replicable measurement model of the Edinburgh Postnatal Depression Scale.

    Science.gov (United States)

    Martin, Colin R; Redshaw, Maggie

    2018-06-01

    The 10-item Edinburgh Postnatal Depression Scale (EPDS) is an established screening tool for postnatal depression. Inconsistent findings in factor structure and replication difficulties have limited the scope of development of the measure as a multi-dimensional tool. The current investigation sought to robustly determine the underlying factor structure of the EPDS and the replicability and stability of the most plausible model identified. A between-subjects design was used. EPDS data were collected postpartum from two independent cohorts using identical data capture methods. Datasets were examined with confirmatory factor analysis, model invariance testing and systematic evaluation of relational and internal aspects of the measure. Participants were two samples of postpartum women in England assessed at three months (n = 245) and six months (n = 217). The findings showed that a three-factor, seven-item model of the EPDS offered an excellent fit to the data, and was observed to be replicable in both datasets and invariant as a function of the time point of assessment. Some EPDS sub-scale scores were significantly higher at six months. The EPDS is multi-dimensional, and a robust measurement model comprises three factors that are replicable. The potential utility of the sub-scale components identified requires further research to identify a role in contemporary screening practice. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  2. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

    At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high frequency weather to low frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts the weather is a high frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two-parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts, and these show some skill even at decadal scales. We also compare ...
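
    The memory-exploiting idea can be sketched compactly (our illustration; the Hurst exponent, memory length and lead time below are invented values, and the paper's innovations method is not reproduced): the fGn autocovariance gamma(k) = 0.5(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}) fills a Toeplitz system whose solution gives the optimal linear predictor weights over the past values.

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def fgn_acov(k, H):
          """Autocovariance of unit-variance fractional Gaussian noise."""
          k = np.abs(k).astype(float)
          return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

      H, n, h = 0.9, 200, 1                  # Hurst exponent, memory length, lead time
      r = fgn_acov(np.arange(n), H)          # gamma(0) .. gamma(n-1): Toeplitz column
      b = fgn_acov(np.arange(h, n + h), H)   # covariances with the future value
      w = solve_toeplitz(r, b)               # optimal linear predictor weights

      past = np.random.default_rng(8).standard_normal(n)   # stand-in for observed anomalies
      forecast = w @ past[::-1]              # recent values carry the largest weights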

  3. Analysis of chromosome aberration data by hybrid-scale models

    International Nuclear Information System (INIS)

    Indrawati, Iwiq; Kumazawa, Shigeru

    2000-02-01

    This paper presents a new methodology for analyzing data on chromosome aberrations, which is useful for understanding the characteristics of dose-response relationships and for constructing calibration curves for biological dosimetry. The hybrid scale of linear and logarithmic scales yields a particular plotting paper, in which the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, conveniently called hybrid-scale models. One can systematically select the best-fit model among the nine models by examining the conditions under which the data points fall on a straight line. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid-scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of an increased number of model parameters. We showed that the hybrid-hybrid model (both the dose and response variables using the hybrid scale) provides the best-fit straight lines to be used as reliable and readable calibration curves of chromosome aberrations. (author)

  4. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sunk into the voids between the stones on the crest. For low overtopping scale effects...

  5. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

    Full Text Available Multi-scale climate modelling over Southern Africa using a variable-resolution global model. FA Engelbrecht, WA Landman, CJ Engelbrecht, S Landman, MM Bopape, B Roux, JL McGregor and M Thatcher (e-mail: fengelbrecht@csir.co.za). Keywords: multi-scale climate modelling, variable-resolution atmospheric model. Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales.

  6. Modeling and simulation of large scale stirred tank

    Science.gov (United States)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top and bottom regions of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion to internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the

  7. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation between predictions of P loss from the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
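
    The record compares predicted and measured P loss without naming its skill metrics, so the sketch below uses two common ones (Nash-Sutcliffe efficiency and percent bias) on invented numbers, purely to illustrate the comparison.

```python
# Hedged sketch of a model-vs-measurement comparison for two P-loss models.
# The arrays are placeholders, not data from the study.
import numpy as np

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def percent_bias(obs, sim):
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

obs  = np.array([0.8, 1.5, 0.3, 2.2, 1.1])   # measured P loss (kg/ha), invented
aple = np.array([1.0, 1.9, 0.4, 2.0, 1.4])   # annual-model predictions
tbet = np.array([0.6, 1.2, 0.7, 2.8, 0.9])   # daily-model predictions

for name, sim in (("APLE", aple), ("TBET", tbet)):
    print(f"{name}: NSE={nash_sutcliffe(obs, sim):.2f} "
          f"PBIAS={percent_bias(obs, sim):+.1f}%")
```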

  8. Model of Collective Fish Behavior with Hydrodynamic Interactions

    Science.gov (United States)

    Filella, Audrey; Nadal, François; Sire, Clément; Kanso, Eva; Eloy, Christophe

    2018-05-01

    Fish schooling is often modeled with self-propelled particles subject to phenomenological behavioral rules. Although fish are known to sense and exploit flow features, these models usually neglect hydrodynamics. Here, we propose a novel model that couples behavioral rules with far-field hydrodynamic interactions. We show that (1) a new "collective turning" phase emerges, (2) on average, individuals swim faster thanks to the fluid, and (3) the flow enhances behavioral noise. The results of this model suggest that hydrodynamic effects should be considered to fully understand the collective dynamics of fish.
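
    A rough sketch of the model class described above: Vicsek-style alignment with advection by a far-field dipolar flow. The interaction kernel, regularisation, and all parameter values are illustrative guesses, not the authors' formulation.

```python
# Self-propelled particles with alignment, noise, and a 2D dipolar
# (stresslet-like) far-field flow advecting the neighbours.
import numpy as np

N, L, v0, dt = 100, 20.0, 0.5, 0.1     # swimmers, box size, speed, time step
eta, kappa, R = 0.2, 0.05, 2.0         # noise, dipole strength, interaction radius
rng = np.random.default_rng(2)
pos = rng.uniform(0, L, (N, 2))
ang = rng.uniform(-np.pi, np.pi, N)

def dipole_flow(d, r, theta_i):
    """Leading-order 2D dipolar field of one swimmer: ~ (2cos^2(phi)-1)/r."""
    phi = np.arctan2(d[:, 1], d[:, 0]) - theta_i
    reg = np.maximum(r, 0.5)                      # crude near-field cutoff
    mag = kappa * (2 * np.cos(phi)**2 - 1) / reg
    return mag[:, None] * d / reg[:, None]

for _ in range(100):
    new_ang, flow = np.empty(N), np.zeros((N, 2))
    for i in range(N):
        d = pos - pos[i]
        d -= L * np.round(d / L)                  # periodic box
        r = np.hypot(d[:, 0], d[:, 1])
        nbr = (r < R) & (r > 1e-9)
        # Vicsek alignment: average headings (self included), add noise
        sx = np.cos(ang[nbr]).sum() + np.cos(ang[i])
        sy = np.sin(ang[nbr]).sum() + np.sin(ang[i])
        new_ang[i] = np.arctan2(sy, sx) + eta * rng.uniform(-np.pi, np.pi)
        flow[nbr] += dipole_flow(d[nbr], r[nbr], ang[i])  # advect the others
    ang = new_ang
    pos = (pos + dt * (v0 * np.c_[np.cos(ang), np.sin(ang)] + flow)) % L
```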

  9. Quantification of structural uncertainties in multi-scale models; case study of the Lublin Basin, Poland

    Science.gov (United States)

    Małolepszy, Zbigniew; Szynkaruk, Ewa

    2015-04-01

    The multi-scale static modelling of the regional structure of the Lublin Basin is carried out at the Polish Geological Institute, in accordance with the principles of integrated 3D geological modelling. The model is based on all available geospatial data from Polish digital databases and analogue archives. The mapped regional structure covers an area of 260 x 80 km located between Warsaw and the Polish-Ukrainian border, along the NW-SE-trending margin of the East European Craton. Within the basin, the Paleozoic beds, with coal-bearing Carboniferous and older formations containing hydrocarbons and unconventional prospects, are covered unconformably by Permo-Mesozoic and younger rocks. The vertical extent of the regional model is set from the topographic surface to 6000 m ssl and at the bottom includes some Proterozoic crystalline formations of the craton. The project focuses on the internal consistency of the models built at different scales - from basin (small) scale to field (large) scale. The models, nested in a common structural framework, are being constructed with regional geological knowledge, ensuring a smooth transition in 3D model resolution and amount of geological detail. A major challenge of the multi-scale approach to subsurface modelling is the assessment and consistent quantification of the various types of geological uncertainties tied to the various scale sub-models. The decreasing amount of information with depth and, particularly, the very limited data collected below exploration targets, as well as the accuracy and quality of data, have the most critical impact on the modelled structure. In the deeper levels of the Lublin Basin model, seismic interpretation of 2D surveys is sparsely tied to well data. Therefore, time-to-depth conversion carries one of the major uncertainties in the modelling of structures, especially below 3000 m ssl. Furthermore, as all models at different scales are based on the same dataset, we must deal with different levels of generalization of geological structures. The

  10. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small-scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; especially variables capturing small-scale neighbourhood conditions are hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated, and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model, estimate the model, and interpret the estimates of the summary measures of impacts. The analysis shows that this model structure makes it possible to model and find small-scale neighbourhood effects when we know that they exist but lack proper variables to measure them.
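
    For reference, the spatial Durbin model (SDM) motivated in the abstract has the standard form below; the symbols are the usual ones and not specific to the Helsinki application.

```latex
% Spatial Durbin model: y = house prices, W = spatial weight matrix,
% X = observed covariates; WX carries the omitted-neighbourhood effects.
\begin{equation}
  y = \rho W y + X\beta + W X \theta + \varepsilon,
  \qquad \varepsilon \sim N(0, \sigma^2 I).
\end{equation}
% Direct and indirect (spillover) impact measures follow from the reduced
% form y = (I - \rho W)^{-1} (X\beta + W X \theta + \varepsilon).
```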

  11. Genome-scale biological models for industrial microbial systems.

    Science.gov (United States)

    Xu, Nan; Ye, Chao; Liu, Liming

    2018-04-01

    The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, describing the formation of and interactions among genetic material, enzymes, and metabolites, constitute a systematic and comprehensive platform to analyze and optimize microbial growth and the production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their pertinent application in improving industrial microbial fermentation of biological products.
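
    A toy flux-balance sketch of the genome-scale idea described above: maximise a biomass flux subject to steady-state mass balance and capacity bounds. The three-reaction network is invented for illustration, not a real organism model.

```python
# Flux balance analysis (FBA) as a linear programme: S v = 0, bounds on v,
# maximise the biomass reaction (here v3).
import numpy as np
from scipy.optimize import linprog

# metabolite-by-reaction stoichiometry: uptake -> conversion -> biomass
S = np.array([[ 1, -1,  0],    # metabolite A: made by R1, consumed by R2
              [ 0,  1, -1]])   # metabolite B: made by R2, consumed by R3
bounds = [(0, 10), (0, 10), (0, 10)]   # flux capacities (e.g. mmol/gDW/h)
c = np.array([0, 0, -1.0])             # maximise v3  <=>  minimise -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", res.x[2])   # limited here by the uptake bound
```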

  12. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
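
    The similitude arithmetic implied by the record's 1:37.6 scale can be sketched as below, assuming Froude scaling with the same fluid; only the length scale and the 1240 m prototype chain length come from the abstract.

```python
# Froude-type scale ratios for a gravity-dominated mooring experiment.
lam = 37.6                                # prototype length / model length
print(f"length  ratio: 1:{lam}")
print(f"time    ratio: 1:{lam**0.5:.2f}")   # t_p = sqrt(lam) * t_m
print(f"velocity ratio: 1:{lam**0.5:.2f}")  # v_p = sqrt(lam) * v_m
print(f"force   ratio: 1:{lam**3:.0f}")     # F_p = lam^3 * F_m (same density)

# the quoted chain: a 76 mm stud chain, 1240 m long at prototype scale
print("model chain length:", 1240 / lam, "m")   # ~33 m, matching the abstract
```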

  13. U(6)-phonon model of nuclear collective motion

    International Nuclear Information System (INIS)

    Ganev, H.G.

    2015-01-01

    The U(6)-phonon model of nuclear collective motion with the semi-direct product structure [HW(21)]U(6) is obtained as a hydrodynamic (macroscopic) limit of the fully microscopic proton–neutron symplectic model (PNSM) with Sp(12, R) dynamical group. The phonon structure of the [HW(21)]U(6) model enables it to simultaneously include the giant monopole and quadrupole, as well as dipole, resonances and their coupling to the low-lying collective states. The U(6) intrinsic structure of the [HW(21)]U(6) model, on the other hand, gives a framework for the simultaneous shell-model interpretation of the ground state band and the other excited low-lying collective bands. It then follows that the states of the whole nuclear Hilbert space can be put into one-to-one correspondence with those of a 21-dimensional oscillator with an intrinsic (base) U(6) structure. The latter can be determined in such a way that it is compatible with the proton–neutron structure of the nucleus. The macroscopic limit of the Sp(12, R) algebra, therefore, provides a rigorous mechanism for implementing the unified-model ideas of coupling the valence particles to the core collective degrees of freedom within a fully microscopic framework, without introducing redundant variables or violating the Pauli principle. (author)

  14. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Science.gov (United States)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also reveal additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.

  15. Collective vs atomic models of the hadrons

    International Nuclear Information System (INIS)

    Stokar, S.

    1983-02-01

    We examine the relationship between heavy and light quark systems. Using a Bogoliubov-Valatin transformation we show how to interpolate continuously between heavy quark atomic models and light quark collective models of the hadrons. (author)

  16. New phenomena in the standard no-scale supergravity model

    CERN Document Server

    Kelley, S; Nanopoulos, Dimitri V; Zichichi, Antonino; Kelley, S; Lopez, J L; Nanopoulos, D V; Zichichi, A

    1994-01-01

    We revisit the no-scale mechanism in the context of the simplest no-scale supergravity extension of the Standard Model. This model has the usual five-dimensional parameter space plus an additional parameter \xi_{3/2} \equiv m_{3/2}/m_{1/2}. We show how predictions of the model may be extracted over the whole parameter space. A necessary condition for the potential to be stable is {\rm Str}\,{\cal M}^4 > 0, which is satisfied if m_{3/2} \lesssim 2 m_{\tilde q}. Order-of-magnitude calculations reveal a no-lose theorem guaranteeing interesting and potentially observable new phenomena in the neutral scalar sector of the theory which would constitute a ``smoking gun'' of the no-scale mechanism. This new phenomenology is model-independent and divides into three scenarios, depending on the ratio of the weak scale to the vev at the minimum of the no-scale direction. We also calculate the residual vacuum energy at the unification scale (C_0\, m^4_{3/2}), and find that in typical models one must require C_0 > 10. Such constrai...

  17. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  18. Memoised Garbage Collection for Software Model Checking

    NARCIS (Netherlands)

    Nguyen, V.Y.; Ruys, T.C.; Kowalewski, S.; Philippou, A.

    Virtual machine based software model checkers like JPF and MoonWalker spend up to half of their verification time on garbage collection. This is no surprise, as after nearly each transition the heap has to be cleaned of garbage. To improve this, this paper presents the Memoised Garbage Collection

  19. The ScaLIng Macroweather Model (SLIMM): using scaling to forecast global-scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-09-01

    On scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; thus, in GCM (general circulation model) macroweather forecasts, the weather is a high-frequency noise. However, neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two-parameter model can perform as well as GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the large stochastic memories that we quantify. Since macroweather temporal (but not spatial) intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the ScaLIng Macroweather Model (SLIMM). SLIMM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the linear inverse modelling - LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes that there is no low-frequency memory, SLIMM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful stochastic forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had disappointing results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent

  20. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2016-05-01

    Full Text Available Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also reveal additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.

  1. Site-scale groundwater flow modelling of Aberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States)]; Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)]

    1998-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the

  2. Site-scale groundwater flow modelling of Aberg

    International Nuclear Information System (INIS)

    Walker, D.; Gylling, B.

    1998-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the

  3. PERSEUS-HUB: Interactive and Collective Exploration of Large-Scale Graphs

    Directory of Open Access Journals (Sweden)

    Di Jin

    2017-07-01

    Full Text Available Graphs emerge naturally in many domains, such as social science, neuroscience, transportation engineering, and more. In many cases, such graphs have millions or billions of nodes and edges, and their sizes increase daily at a fast pace. How can researchers from various domains explore large graphs interactively and efficiently to find out what is ‘important’? How can multiple researchers explore a new graph dataset collectively and “help” each other with their findings? In this article, we present Perseus-Hub, a large-scale graph mining tool that computes a set of graph properties in a distributed manner, performs ensemble, multi-view anomaly detection to highlight regions that are worth investigating, and provides users with uncluttered visualization and easy interaction with complex graph statistics. Perseus-Hub uses a Spark cluster to calculate various statistics of large-scale graphs efficiently, and aggregates the results in a summary on the master node to support interactive user exploration. In Perseus-Hub, the visualized distributions of graph statistics provide preliminary analysis to understand a graph. To perform a deeper analysis, users with little prior knowledge can leverage patterns (e.g., spikes in the power-law degree distribution) marked by other users or experts. Moreover, Perseus-Hub guides users to regions of interest by highlighting anomalous nodes and helps users establish a more comprehensive understanding about the graph at hand. We demonstrate our system through the case study on real, large-scale networks.
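
    A single-machine sketch of one statistic of the kind Perseus-Hub computes (the real system distributes such computations on a Spark cluster): a degree distribution with a crude power-law fit and spike flagging. The toy graph and the 2-sigma threshold are illustrative choices.

```python
# Compute a degree distribution, fit a power law in log-log space, and flag
# degrees whose counts sit far above the fitted line ("spikes").
import numpy as np
from collections import Counter

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4), (4, 5)]  # toy graph
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

counts = Counter(deg.values())                 # degree -> number of nodes
k = np.array(sorted(counts))
n_k = np.array([counts[x] for x in k], float)

# least-squares power-law fit: log n_k = a + b log k
b, a = np.polyfit(np.log(k), np.log(n_k), 1)
resid = np.log(n_k) - (a + b * np.log(k))
spikes = k[resid > 2 * resid.std()] if resid.std() > 0 else []
print("power-law slope:", b, "spike degrees:", list(spikes))
```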

  4. The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y

    2002-01-01

    The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of them1 hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated to a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes and (2) a fully-coupled TH model of the repository which includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six ''submodels'' which are combined in a manner to reduce the complexity of modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach, first, the execution of submodels, and second, the assembly of submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages including a mountain-scale influence

  5. Scaled Experimental Modeling of VHTR Plenum Flows

    Energy Technology Data Exchange (ETDEWEB)

    ICONE 15

    2007-04-01

    The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Power (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions free electricity and hydrogen by 2015. Various scaled heated gas and water flow facilities were investigated for modeling VHTR upper and lower plenum flows during the decay heat portion of a pressurized conduction-cooldown scenario and for modeling thermal mixing and stratification (“thermal striping”) in the lower plenum during normal operation. It was concluded, based on phenomena scaling and instrumentation and other practical considerations, that a heated water flow scale model facility is preferable to a heated gas flow facility and to unheated facilities which use fluids with ranges of density to simulate the density effect of heating. For a heated water flow lower plenum model, both the Richardson numbers and Reynolds numbers may be approximately matched for conduction-cooldown natural circulation conditions. Thermal mixing during normal operation may be simulated, but at lower, though still fully turbulent, Reynolds numbers than in the prototype. Natural circulation flows in the upper plenum may also be simulated in a separate heated water flow facility that uses the same plumbing as the lower plenum model. However, Reynolds number scaling distortions will occur at matching Richardson numbers, due primarily to the necessity of using a smaller number of channels connected to the plenum than in the prototype (which has approximately 11,000 core channels connected to the upper plenum) in an otherwise geometrically scaled model. Experiments conducted in either or both facilities will meet the objectives of providing benchmark data for the validation of codes proposed for NGNP designs and safety studies, as well as providing a better understanding of the complex flow phenomena in the plenums.
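
    The scaling trade-off described above can be made concrete with back-of-envelope numbers: match the Richardson number of a heated-water model to the prototype and observe the Reynolds-number distortion. All property values below are illustrative stand-ins, not the study's.

```python
# Match Ri between prototype and a reduced-scale water model, then compare Re.
def richardson(g, beta, dT, L, U):
    return g * beta * dT * L / U**2

def reynolds(U, L, nu):
    return U * L / nu

g = 9.81
# prototype: hot-gas plenum flow (illustrative numbers only)
Ri_p = richardson(g, beta=1.0e-3, dT=100.0, L=10.0, U=1.0)
Re_p = reynolds(U=1.0, L=10.0, nu=1.0e-5)

# water model at 1/4 scale: choose U so that Ri_model = Ri_prototype
L_m, beta_w, dT_m, nu_w = 2.5, 2.1e-4, 40.0, 1.0e-6
U_m = (g * beta_w * dT_m * L_m / Ri_p) ** 0.5

print(f"matched-Ri model velocity: {U_m:.3f} m/s")
print(f"Re prototype = {Re_p:.2e}, Re model = {reynolds(U_m, L_m, nu_w):.2e}")
```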

  6. Forecasting rain events - Meteorological models or collective intelligence?

    Science.gov (United States)

    Arazy, Ofer; Halfon, Noam; Malkinson, Dan

    2015-04-01

    Collective intelligence is shared (or group) intelligence that emerges from the collective efforts of many individuals. Collective intelligence is the aggregate of individual contributions: from simple collective decision making to more sophisticated aggregations such as in crowdsourcing and peer-production systems. In particular, collective intelligence could be used in making predictions about future events, for example by using prediction markets to forecast election results, stock prices, or the outcomes of sport events. To date, there is little research regarding the use of collective intelligence for weather forecasting. The objective of this study is to investigate the extent to which collective intelligence could be utilized to accurately predict weather events, and in particular rainfall. Our analyses employ metrics of group intelligence, as well as compare the accuracy of groups' predictions against the predictions of the standard model used by the National Meteorological Services. We report on preliminary results from a study conducted over the 2013-2014 and 2014-2015 winters. We have built a web site that allows people to make predictions on precipitation levels at certain locations. During each competition participants were allowed to enter their precipitation forecasts (i.e. 'bets') at three locations, and these locations changed between competitions. A precipitation competition was defined as a 48-96 hour period (depending on the expected weather conditions), bets were open 24-48 hours prior to the competition, and during the betting period participants were allowed to change their bets with no limitation. In order to explore the effect of transparency, betting mechanisms varied across the study's sites: full transparency (participants able to see each other's bets); partial transparency (participants see the group's average bet); and no transparency (no information of others' bets is made available). Several interesting findings emerged from

  7. The restricted stochastic user equilibrium with threshold model: Large-scale application and parameter testing

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Nielsen, Otto Anker; Watling, David P.

    2017-01-01

    Equilibrium model (DUE), by combining the strengths of the Boundedly Rational User Equilibrium model and the Restricted Stochastic User Equilibrium model (RSUE). Thereby, the RSUET model reaches an equilibrated solution in which the flow is distributed according to Random Utility Theory among a consistently … model improves the behavioural realism, especially for high congestion cases. Also, fast and well-behaved convergence to equilibrated solutions among non-universal choice sets is observed across different congestion levels, choice model scale parameters, and algorithm step sizes. Clearly, the results … highlight that the RSUET outperforms the MNP SUE in terms of convergence, calculation time and behavioural realism. The choice set composition is validated by using 16,618 observed route choices collected by GPS devices in the same network and observing their reproduction within the equilibrated choice sets...

  8. Post Audit of a Field Scale Reactive Transport Model of Uranium at a Former Mill Site

    Science.gov (United States)

    Curtis, G. P.

    2015-12-01

    Reactive transport of hexavalent uranium (U(VI)) in a shallow alluvial aquifer at a former uranium mill tailings site near Naturita, CO has been monitored for nearly 30 years by the US Department of Energy and the US Geological Survey. Groundwater at the site has high concentrations of chloride, alkalinity and U(VI) owing to ore processing at the site from 1941 to 1974. We previously calibrated a multicomponent reactive transport model to data collected at the site from 1986 to 2001. A two-dimensional nonreactive transport model used a uniform hydraulic conductivity which was estimated from observed chloride concentrations and tritium-helium age dates. A reactive transport model for the 2 km long site was developed by including an equilibrium U(VI) surface complexation model calibrated to laboratory data and calcite equilibrium. The calibrated model reproduced both the nonreactive tracers and the observed U(VI), pH and alkalinity. Forward simulations for the period 2002-2015 conducted with the calibrated model predict significantly faster natural attenuation of U(VI) concentrations than has been observed, as evidenced by the persistent high U(VI) concentrations at the site. Alternative modeling approaches are being evaluated using recent data to determine if the persistence can be explained by multirate mass transfer models developed from experimental observations at the column scale (~0.2 m), the laboratory tank scale (~2 m), the field tracer test scale (~1-4 m) or the geophysical observation scale (~1-5 m). Results of this comparison should provide insight into the persistence of U(VI) plumes and improved management options.
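
    The multirate mass-transfer models mentioned at the end generalise the familiar single-rate mobile-immobile formulation sketched below; the symbols are generic textbook notation, not the Naturita calibration.

```latex
% Mobile concentration c_m exchanges linearly with an immobile concentration
% c_im at rate alpha; theta_m, theta_im are the mobile/immobile porosities,
% D the dispersion tensor and q the Darcy flux.
\begin{align}
  \theta_m \frac{\partial c_m}{\partial t}
    + \theta_{im} \frac{\partial c_{im}}{\partial t}
    &= \nabla \cdot \left( \theta_m D \nabla c_m \right)
     - \nabla \cdot \left( q\, c_m \right), \\
  \theta_{im} \frac{\partial c_{im}}{\partial t}
    &= \alpha \left( c_m - c_{im} \right).
\end{align}
% The multirate generalisation replaces the single alpha by a distribution of
% rates, which can reproduce slow, persistent release of sorbed U(VI).
```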

  9. Biology meets Physics: Reductionism and Multi-scale Modeling of Morphogenesis

    DEFF Research Database (Denmark)

    Green, Sara; Batterman, Robert

    2017-01-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale … modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent.

  10. Comparative growth models of big-scale sand smelt (Atherina boyeri Risso, 1810) sampled from Hirfanlı Dam Lake, Kırşehir, Ankara, Turkey

    Directory of Open Access Journals (Sweden)

    S. Benzer

    2017-06-01

    Full Text Available In this publication, the growth characteristics of big-scale sand smelt were compared using artificial neural network and length-weight relationship models of population dynamics. This study aims to determine the optimal growth model of big-scale sand smelt, by artificial neural networks and length-weight relationships, at Hirfanlı Dam Lake, Kırşehir, Turkey. A total of 1449 samples were collected from Hirfanlı Dam Lake between May 2015 and May 2016. Both model results were compared with each other, and the results were also evaluated with MAPE (mean absolute percentage error), MSE (mean squared error) and r2 (correlation coefficient) data as performance criteria. The results of the current study show that artificial neural networks are a superior estimation tool compared to length-weight relationship models of big-scale sand smelt in Hirfanlı Dam Lake.
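
    The length-weight relationship side of the comparison, with the error metrics the record names (MAPE, MSE, r2), can be sketched as follows; the sample data are invented placeholders, not the Hirfanlı records.

```python
# Fit W = a * L^b in log-log space and report MAPE, MSE and r^2.
import numpy as np

L_cm = np.array([5.1, 6.3, 7.0, 7.8, 8.5, 9.2])   # total length (cm), invented
W_g  = np.array([0.9, 1.8, 2.6, 3.5, 4.7, 6.0])   # weight (g), invented

b, log_a = np.polyfit(np.log(L_cm), np.log(W_g), 1)  # log W = log a + b log L
W_hat = np.exp(log_a) * L_cm**b

mape = 100 * np.mean(np.abs((W_g - W_hat) / W_g))
mse  = np.mean((W_g - W_hat)**2)
r2   = np.corrcoef(W_g, W_hat)[0, 1]**2
print(f"a={np.exp(log_a):.4f} b={b:.3f}  MAPE={mape:.1f}%  MSE={mse:.4f}  r2={r2:.3f}")
```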

  11. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  12. Energy spectrum scaling in an agent-based model for bacterial turbulence

    Science.gov (United States)

    Mikel-Stites, Maxwell; Staples, Anne

    2017-11-01

    Numerous models have been developed to examine the behavior of dense bacterial swarms and to explore the visually striking phenomena of bacterial turbulence. Most models directly impose fluid dynamics physics, either by modeling the active matter as a fluid or by including interactions between the bacteria and a fluid. In this work, however, the `turbulence' is solely an emergent property of the collective behavior of the bacterial population, rather than a consequence of imposed fluid dynamics physical modeling. The system is simulated using a two-dimensional Vicsek-style model, with the addition of individual repulsion to simulate bacterial collisions and physical interactions, and without the common flocking or sensing behaviors. Initial results indicate the presence of k^-1 scaling in a portion of the kinetic energy spectrum that can be considered analogous to the inertial subrange in turbulent energy spectra. This result suggests that the interaction of large numbers of individual active bacteria may also be a contributing factor in the emergence of fluid dynamics phenomena, in addition to the physical interactions between bacteria and their fluid environment.
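
    The spectral claim can be checked in simulation by radially binning a 2D FFT of the velocity field; the sketch below shows the bookkeeping on a synthetic random field (so its slope is not k^-1), with grid size and the fitting band as illustrative choices.

```python
# Radially binned kinetic-energy spectrum E(k) from a 2D velocity field.
import numpy as np

n = 128
rng = np.random.default_rng(3)
u, v = rng.standard_normal((2, n, n))        # velocity components on a grid

uk, vk = np.fft.fft2(u), np.fft.fft2(v)
E2d = 0.5 * (np.abs(uk)**2 + np.abs(vk)**2) / n**4

kx = np.fft.fftfreq(n) * n
KX, KY = np.meshgrid(kx, kx, indexing="ij")
K = np.sqrt(KX**2 + KY**2).astype(int)       # integer radial wavenumber

E = np.bincount(K.ravel(), weights=E2d.ravel())[: n // 2]
k = np.arange(len(E))

# slope of log E(k) vs log k over an "inertial" band, cf. the k^-1 claim
band = (k > 4) & (k < 40)
slope = np.polyfit(np.log(k[band]), np.log(E[band] + 1e-30), 1)[0]
print("spectral slope over band:", slope)
```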

  13. Models of Small-Scale Patchiness

    Science.gov (United States)

    McGillicuddy, D. J.

    2001-01-01

    Patchiness is perhaps the most salient characteristic of plankton populations in the ocean. The scale of this heterogeneity spans many orders of magnitude in its spatial extent, ranging from planetary down to microscale. It has been argued that patchiness plays a fundamental role in the functioning of marine ecosystems, insofar as the mean conditions may not reflect the environment to which organisms are adapted. Understanding the nature of this patchiness is thus one of the major challenges of oceanographic ecology. The patchiness problem is fundamentally one of physical-biological-chemical interactions. This interconnection arises from three basic sources: (1) ocean currents continually redistribute dissolved and suspended constituents by advection; (2) space-time fluctuations in the flows themselves impact biological and chemical processes, and (3) organisms are capable of directed motion through the water. This tripartite linkage poses a difficult challenge to understanding oceanic ecosystems: differentiation between the three sources of variability requires accurate assessment of property distributions in space and time, in addition to detailed knowledge of organismal repertoires and the processes by which ambient conditions control the rates of biological and chemical reactions. Various methods of observing the ocean tend to lie parallel to the axes of the space/time domain in which these physical-biological-chemical interactions take place. Given that a purely observational approach to the patchiness problem is not tractable with finite resources, the coupling of models with observations offers an alternative which provides a context for synthesis of sparse data with articulations of fundamental principles assumed to govern functionality of the system. In a sense, models can be used to fill the gaps in the space/time domain, yielding a framework for exploring the controls on spatially and temporally intermittent processes. The following discussion highlights

  14. Properties of Brownian Image Models in Scale-Space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup

    2003-01-01

    In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f^2 power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix …

  15. Spatiotemporal exploratory models for broad-scale survey data.

    Science.gov (United States)

    Fink, Daniel; Hochachka, Wesley M; Zuckerberg, Benjamin; Winkler, David W; Shaby, Ben; Munson, M Arthur; Hooker, Giles; Riedewald, Mirek; Sheldon, Daniel; Kelling, Steve

    2010-12-01

    The distributions of animal populations change and evolve through time. Migratory species exploit different habitats at different times of the year. Biotic and abiotic features that determine where a species lives vary due to natural and anthropogenic factors. This spatiotemporal variation needs to be accounted for in any modeling of species' distributions. In this paper we introduce a semiparametric model that provides a flexible framework for analyzing dynamic patterns of species occurrence and abundance from broad-scale survey data. The spatiotemporal exploratory model (STEM) adds essential spatiotemporal structure to existing techniques for developing species distribution models through a simple parametric structure without requiring a detailed understanding of the underlying dynamic processes. STEMs use a multi-scale strategy to differentiate between local and global-scale spatiotemporal structure. A user-specified species distribution model accounts for spatial and temporal patterning at the local level. These local patterns are then allowed to "scale up" via ensemble averaging to larger scales. This makes STEMs especially well suited for exploring distributional dynamics arising from a variety of processes. Using data from eBird, an online citizen science bird-monitoring project, we demonstrate that monthly changes in distribution of a migratory species, the Tree Swallow (Tachycineta bicolor), can be more accurately described with a STEM than a conventional bagged decision tree model in which spatiotemporal structure has not been imposed. We also demonstrate that there is no loss of model predictive power when a STEM is used to describe a spatiotemporal distribution with very little spatiotemporal variation: the distribution of a nonmigratory species, the Northern Cardinal (Cardinalis cardinalis).

  16. Collective models of transition nuclei Pt. 2

    International Nuclear Information System (INIS)

    Dombradi, Zs.

    1982-01-01

    The models describing the even-odd and odd-odd transition nuclei (nuclei of moderate ground state deformation) are reviewed. The nuclear core is described by models of even-even nuclei, and the interaction of a single particle and the core is added. Different models of particle-core coupling (phenomenological models, collective models, nuclear field theory, interacting boson-fermion model, vibration nucleon cluster model) and their results are discussed. New developments like dynamical supersymmetry and new research trends are summarized. (D.Gy.)

  17. Logarithmic corrections to scaling in the XY2-model

    International Nuclear Information System (INIS)

    Kenna, R.; Irving, A.C.

    1995-01-01

    We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form for the density of these zeroes. Assuming that finite-size scaling holds, we show that there have to exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae and we identify them numerically. The method presented here can be used to check the compatibility of scaling behaviour of odd and even thermodynamic functions in other models too. ((orig.))
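
    Schematically, the logarithmic corrections discussed above enter the finite-size-scaling forms as multiplicative factors; a generic ansatz is given below. The exponents x and x-hat are quantity-specific, and their values for this model come from the paper's zero analysis, not from this note.

```latex
% Finite-size scaling of a thermodynamic quantity P with a multiplicative
% logarithmic correction (x: leading exponent, \hat{x}: logarithmic exponent).
\begin{equation}
  P(L) \sim L^{x} \, (\ln L)^{\hat{x}}, \qquad L \to \infty .
\end{equation}
% e.g. the first (Yang--Lee) zero would scale as
% r_1(L) \sim L^{-\lambda} (\ln L)^{-\hat{\lambda}}; compatibility of odd and
% even thermodynamic functions then constrains the exponent pairs.
```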

  18. Multi-scale Modeling of Plasticity in Tantalum.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)

    2015-12-01

    In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, along with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, and compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformations at the grain scale and to engineering-scale applications. Furthermore, direct
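
    Kink-pair constitutive equations of the kind referred to above are commonly written in the thermally activated flow-rule form below; the specific parameters calibrated in the report are not reproduced here, and the symbols are generic.

```latex
% Thermally activated slip rate with a kink-pair activation enthalpy:
% Delta H_0 = activation enthalpy at zero stress, tau_eff = effective resolved
% shear stress, tau_0 = Peierls stress, p and q = barrier-shape exponents.
\begin{equation}
  \dot{\gamma} = \dot{\gamma}_0 \,
  \exp\!\left[ -\frac{\Delta H_0}{k_B T}
  \left\{ 1 - \left( \frac{\tau_{\mathrm{eff}}}{\tau_0} \right)^{p} \right\}^{q}
  \right],
  \qquad 0 < p \le 1, \; 1 \le q \le 2 .
\end{equation}
% This is how temperature and strain-rate dependence enter a BCC CP-FEM model.
```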

  19. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; and the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility to formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real
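
    A minimal sketch of the response-surface idea behind approaches like DYCORS and SOIM: fit a radial basis function (RBF) surrogate to a handful of expensive model runs, then search the cheap surrogate. The "expensive model" below is a stand-in function, not GSFLOW.

```python
# RBF surrogate: fit to a few expensive evaluations, then evaluate many
# candidate points on the surrogate instead of the real model.
import numpy as np

def expensive_model(x):                  # placeholder for an IHM evaluation
    return np.sum((x - 0.3)**2)

def rbf_fit(X, y, eps=1.0):
    r = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    Phi = np.exp(-(eps * r)**2)          # Gaussian RBF kernel matrix
    return np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), y)

def rbf_eval(X, w, x, eps=1.0):
    r = np.linalg.norm(X - x, axis=-1)
    return np.exp(-(eps * r)**2) @ w

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (20, 2))                  # 20 expensive evaluations
y = np.array([expensive_model(x) for x in X])
w = rbf_fit(X, y)

cand = rng.uniform(0, 1, (1000, 2))             # 1000 cheap surrogate calls
best = cand[np.argmin([rbf_eval(X, w, c) for c in cand])]
print("surrogate-suggested optimum:", best)     # near (0.3, 0.3)
```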

  20. Probabilistic, meso-scale flood loss modelling

    Science.gov (United States)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging-decision-tree-based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken, on the one hand, via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models; on the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
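
    To make the bagging-decision-tree idea concrete, here is a minimal sketch in which the spread across ensemble members provides the loss distribution; the predictors, their ranges, and the synthetic loss relation are invented for illustration and are not the BT-FLEMO variables.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
# Hypothetical predictors per land-use unit: water depth (m), inundation
# duration (d), asset value (k EUR); the real BT-FLEMO variables differ.
X = rng.uniform([0.0, 0.0, 50.0], [3.0, 14.0, 500.0], size=(400, 3))
loss = 0.08*X[:, 0]*X[:, 2] + 2.0*X[:, 1] + rng.normal(0.0, 5.0, 400)

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                         random_state=0).fit(X, loss)

# Each tree in the bag yields one estimate; the spread across the ensemble
# gives a loss distribution instead of a single deterministic value.
unit = np.array([[1.5, 3.0, 200.0]])
draws = np.array([tree.predict(unit)[0] for tree in model.estimators_])
lo, med, hi = np.percentile(draws, [5, 50, 95])
print(f"loss median {med:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
```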

  1. Two-dimensional divertor modeling and scaling laws

    International Nuclear Information System (INIS)

    Catto, P.J.; Connor, J.W.; Knoll, D.A.

    1996-01-01

    Two-dimensional numerical models of divertors contain large numbers of dimensionless parameters that must be varied to investigate all operating regimes of interest. To simplify the task and gain insight into divertor operation, we employ similarity techniques to investigate whether model systems of equations plus boundary conditions in the steady state admit scaling transformations that lead to useful divertor similarity scaling laws. A short mean free path neutral-plasma model of the divertor region below the x-point is adopted in which all perpendicular transport is due to the neutrals. We illustrate how the results can be used to benchmark large computer simulations by employing a modified version of UEDGE which contains a neutral fluid model. (orig.)

  2. Evaluation of collective doses on the European scale arising from atmospheric discharges

    International Nuclear Information System (INIS)

    Despres, A.; Le Grand, J.; Bouville, A.; Guezengar, J.-M.

    1980-01-01

    The aim of this work is the calculation of annual collective doses received by the population of the European Community as a result of routine atmospheric releases from a nuclear plant. The annual release is broken down into 12-hour steps and the calculation carried out for each of these steps. Summing the contributions from each step allows one to calculate the time-integrated annual atmospheric concentration at each point of a grid covering Western Europe. The collective doses due to external irradiation and to inhalation are then obtained by superimposing the population distribution over the same area. The computer model comprises the following three steps: calculation of the trajectories followed by the pollutant, derived from the meteorological data (the individual trajectories do not follow a straight line, as they are corrected every 6 hours); calculation of the atmospheric concentrations associated with those trajectories; and calculation of the collective doses from external irradiation and from inhalation, using the population grid. This computer model is applied to hypothetical discharges of 85Kr, 131I and 239Pu from the Centre d'Etudes Nucleaires de Saclay for the years 1975 and 1976. The comparison of the results obtained for the three radionuclides allows one to assess the influence of the radioactive half-life and of dry deposition effects on the collective doses. The results were also compared to those obtained using the usual model in which the pollutant trajectory is a straight line. Finally, the releases were classified according to the wind direction at the point of emission in order to study the variation of the collective dose as a function of that parameter. (H.K.)

  3. 3-3-1 models at electroweak scale

    International Nuclear Information System (INIS)

    Dias, Alex G.; Montero, J.C.; Pleitez, V.

    2006-01-01

    We show that in 3-3-1 models there exists a natural relation among the SU(3)_L coupling constant g, the electroweak mixing angle θ_W, the mass of the W, and one of the vacuum expectation values, which implies that those models can be realized at low energy scales and, in particular, even at the electroweak scale. Thus, if those symmetries are realized in Nature, new physics may really be just around the corner.

  4. Programmatic access to logical models in the Cell Collective modeling environment via a REST API.

    Science.gov (United States)

    Kowal, Bryan M; Schreier, Travis R; Dauer, Joseph T; Helikar, Tomáš

    2016-01-01

    Cell Collective (www.cellcollective.org) is a web-based interactive environment for constructing, simulating and analyzing logical models of biological systems. Herein, we present a Web service to access models, annotations, and simulation data in the Cell Collective platform through a Representational State Transfer (REST) Application Programming Interface (API). The REST API provides a convenient method for obtaining Cell Collective data from almost any programming language. To ensure easy processing of the retrieved data, the request output from the API is available in a standard JSON format. The Cell Collective REST API is freely available at http://thecellcollective.org/tccapi. All public models in Cell Collective are available through the REST API. Users interested in creating and accessing their own models through the REST API first need to create an account in Cell Collective (http://thecellcollective.org). Contact: thelikar2@unl.edu. Technical user documentation: https://goo.gl/U52GWo. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
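
    A minimal sketch of programmatic access via the REST API; the `/models` route and the `id`/`name` response fields shown here are assumptions made for illustration (the real routes and schema are specified in the technical documentation linked above).

```python
import requests

BASE = "http://thecellcollective.org/tccapi"   # API root given in the abstract

# The '/models' route and the 'id'/'name' fields below are assumptions made
# for illustration; consult the technical documentation for the real routes.
resp = requests.get(f"{BASE}/models", timeout=30)
resp.raise_for_status()

for model in resp.json():                      # output is standard JSON
    print(model.get("id"), model.get("name"))
```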

  5. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
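
    The practical payoff can be sketched as follows: for ecological diffusion, the homogenized large-scale coefficient is, as we read the cited procedure, a harmonic-type average of the small-scale motilities, so slow habitat dominates. The habitat classes and motility values below are invented for illustration.

```python
import numpy as np

# Hypothetical small-scale (10-100 m) habitat map: a motility value mu per
# habitat class (units and values are illustrative, not from the paper).
motility = {"forest": 5.0, "meadow": 50.0, "road": 500.0}
cells = np.random.default_rng(2).choice(list(motility), size=(100, 100))
mu = np.vectorize(motility.get)(cells)

# For ecological diffusion u_t = Laplacian(mu(x) * u), homogenization gives a
# large-scale coefficient of harmonic-mean type: slow habitat dominates,
# since residence time scales like 1/mu.
mu_eff = 1.0 / np.mean(1.0 / mu)
print("harmonic (homogenized):", mu_eff, "vs arithmetic:", mu.mean())
```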

  6. Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges

    Science.gov (United States)

    Bouchaud, Jean-Philippe

    2013-05-01

    Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision-making. We argue that the so-called Random Field Ising Model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of decision rules that violate so-called "detailed balance" is needed to decide whether conclusions based on current models (which all assume detailed balance) are indeed robust and generic.

  7. Learning and knowing collectively

    International Nuclear Information System (INIS)

    Norgaard, Richard B.

    2004-01-01

    Scholars from multiple epistemic communities using a variety of models and approaches are working together to understand climate change, biodiversity loss, and other large-scale phenomena stemming from how people interact with the environment. No single model is adequate, no single mind can grasp the multiple models and their numerous implications. Yet collectively, scientists are putting the parts together and reaching a shared understanding. How is this happening, how can it be done better, and what are the implications for ecological economics?

  8. Analysis, scale modeling, and full-scale tests of low-level nuclear-waste-drum response to accident environments

    International Nuclear Information System (INIS)

    Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.

    1983-01-01

    This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables

  9. Scale changes in air quality modelling and assessment of associated uncertainties

    International Nuclear Information System (INIS)

    Korsakissok, Irene

    2009-01-01

    After an introduction of issues related to a scale change in the field of air quality (existing scales for emissions, transport, turbulence and loss processes, hierarchy of data and models, methods of scale change), the author first presents Gaussian models which have been implemented within the Polyphemus modelling platform. These models are assessed by comparison with experimental observations and with other commonly used Gaussian models. The second part reports the coupling of the puff-based Gaussian model with the Eulerian Polair3D model for the sub-mesh processing of point sources. This coupling is assessed at the continental scale for a passive tracer, and at the regional scale for photochemistry. Different statistical methods are assessed

  10. Lichen elemental content bioindicators for air quality in upper Midwest, USA: A model for large-scale monitoring

    Science.gov (United States)

    Susan Will-Wolf; Sarah Jovan; Michael C. Amacher

    2017-01-01

    Our development of lichen elemental bioindicators for a United States of America (USA) national monitoring program is a useful model for other large-scale programs. Concentrations of 20 elements were measured, validated, and analyzed for 203 samples of five common lichen species. Collections were made by trained non-specialists near 75 permanent plots and an expert...

  11. Scaling of musculoskeletal models from static and dynamic trials

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    2015-01-01

    Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale ‘generic’ musculoskeletal models to match segment lengths and joint...... three scaling methods to an inverse dynamics-based musculoskeletal model and compared predicted knee joint contact forces to those measured with an instrumented prosthesis during gait. Additionally, a Monte Carlo study was used to investigate the sensitivity of the knee joint contact force to random...

  12. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as the power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that apply randomization and replication to the finest relationships between network nodes, or modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
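
    As a concrete example of the Kronecker product modeling mentioned above, the sketch below samples a network from the k-th Kronecker power of a small initiator matrix of edge probabilities; the 2x2 initiator values are illustrative.

```python
import numpy as np

def kronecker_graph(seed_probs, k, rng):
    """Sample an adjacency matrix from the k-th Kronecker power of a small
    initiator matrix of edge probabilities (stochastic Kronecker model)."""
    P = seed_probs.copy()
    for _ in range(k - 1):
        P = np.kron(P, seed_probs)       # edge probabilities at the next scale
    return (rng.random(P.shape) < P).astype(int)

rng = np.random.default_rng(3)
seed = np.array([[0.9, 0.5],
                 [0.5, 0.1]])            # illustrative 2x2 initiator
A = kronecker_graph(seed, 8, rng)        # 2**8 = 256-node network
deg = A.sum(axis=0)
print(A.shape, "edges:", A.sum(), "max/min degree:", deg.max(), deg.min())
```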

  13. Holographic models with anisotropic scaling

    Science.gov (United States)

    Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.

    2013-12-01

    We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.

  14. Use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio

    Directory of Open Access Journals (Sweden)

    Fidel Ernesto Castro Morales

    2016-03-01

    Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed, and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, while allowing for the incorporation of variables and a priori information on the parameters used. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.
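
    A minimal sketch of the kind of model described: a log-log (allometric) regression with weakly informative priors, sampled with a hand-rolled random-walk Metropolis MCMC. The data are synthetic stand-ins for the 26 pregnancies, and the exact likelihood, priors, and hierarchical structure of the paper's model will differ.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic stand-in for the 26 pregnancies: fetal weight F (g) and placental
# weight P (g), generated with a "true" allometric exponent of 0.75.
F = rng.uniform(2500, 4200, 26)
P = np.exp(-1.0 + 0.75*np.log(F) + rng.normal(0, 0.1, 26))

x = np.log(F) - np.log(F).mean()     # centering improves Metropolis mixing
y = np.log(P)

def log_post(a, b, s):
    """Log-posterior of log P = a + b*log F with weakly informative priors."""
    if s <= 0:
        return -np.inf
    resid = y - (a + b*x)
    return (-0.5*np.sum(resid**2)/s**2 - len(y)*np.log(s)
            - a**2/200 - b**2/200 - s**2/2)

theta = np.array([0.0, 1.0, 0.5])    # init: intercept, exponent, sigma
chain = []
for _ in range(20000):               # random-walk Metropolis
    prop = theta + rng.normal(0.0, [0.1, 0.05, 0.02])
    if np.log(rng.random()) < log_post(*prop) - log_post(*theta):
        theta = prop
    chain.append(theta.copy())

b_draws = np.array(chain)[5000:, 1]  # discard burn-in
print("exponent: mean %.2f, 95%% CI (%.2f, %.2f)"
      % (b_draws.mean(), *np.percentile(b_draws, [2.5, 97.5])))
```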

  15. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Science.gov (United States)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.

  16. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Directory of Open Access Journals (Sweden)

    Nawalany Marek

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.

  17. Allometric Scaling and Resource Limitations Model of Total Aboveground Biomass in Forest Stands: Site-scale Test of Model

    Science.gov (United States)

    CHOI, S.; Shi, Y.; Ni, X.; Simard, M.; Myneni, R. B.

    2013-12-01

    Sparseness in in-situ observations has precluded the spatially explicit and accurate mapping of forest biomass. The need for large-scale maps has prompted various approaches linking forest biomass to geospatial predictors such as climate, forest type, soil properties, and topography. Despite improved modeling techniques (e.g., machine learning and spatial statistics), a common limitation is that the biophysical mechanisms governing tree growth are neglected in these black-box type models. The absence of a priori knowledge may lead to false interpretation of modeled results or unexplainable shifts in outputs due to inconsistent training samples or study sites. Here, we present a gray-box approach combining known biophysical processes and geospatial predictors through parametric optimizations (inversion of reference measures). Total aboveground biomass in forest stands is estimated by incorporating the Forest Inventory and Analysis (FIA) and Parameter-elevation Regressions on Independent Slopes Model (PRISM) data. Two main premises of this research are: (a) the Allometric Scaling and Resource Limitations (ASRL) theory can provide a relationship between tree geometry and local resource availability constrained by environmental conditions; and (b) the zeroth-order theory (size-frequency distribution) can expand individual tree allometry into total aboveground biomass at the forest stand level. In addition to the FIA estimates, two reference maps from the National Biomass and Carbon Dataset (NBCD) and the U.S. Forest Service (USFS) were produced to evaluate the model. This research focuses on a site-scale test of the biomass model to explore the robustness of predictors, and to potentially improve models using additional geospatial predictors such as climatic variables, vegetation indices, soil properties, and lidar-/radar-derived altimetry products (or existing forest canopy height maps). As a result, the optimized ASRL estimates satisfactorily

  18. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    Science.gov (United States)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has improved substantially over the last several decades. Complex urban dynamics processes are most effectively modeled not at the macro-scale, but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic with environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential dynamics behaviors are autonomously implemented by household agents. These two levels of simulation interact and jointly promote the urbanization process in an urban area of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank, and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the patterns of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can give full play to the potential of the ABM to understand the driving

  19. Quantum critical scaling of fidelity in BCS-like model

    International Nuclear Information System (INIS)

    Adamski, Mariusz; Jedrzejewski, Janusz; Krokhmalskii, Taras

    2013-01-01

    We study scaling of the ground-state fidelity in neighborhoods of quantum critical points in a model of interacting spinful fermions—a BCS-like model. Due to the exact diagonalizability of the model, in one and higher dimensions, scaling of the ground-state fidelity can be analyzed numerically with great accuracy, not only for small systems but also for macroscopic ones, together with the crossover region between them. Additionally, in the one-dimensional case we have been able to derive a number of analytical formulas for fidelity and show that they accurately fit our numerical results; these results are reported in the paper. Besides regular critical points and their neighborhoods, where well-known scaling laws are obeyed, there is the multicritical point and critical points in its proximity where anomalous scaling behavior is found. We also consider scaling of fidelity in neighborhoods of critical points where fidelity oscillates strongly as the system size or the chemical potential is varied. Our results for a one-dimensional version of a BCS-like model are compared with those obtained recently by Rams and Damski in similar studies of a quantum spin chain—an anisotropic XY model in a transverse magnetic field. (paper)

  20. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are reported from tests of the material resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm, with defects of depths between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  1. Modelling solute dispersion in periodic heterogeneous porous media: Model benchmarking against intermediate scale experiments

    Science.gov (United States)

    Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham

    2018-06-01

    This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates, which allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple-region advection-dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit ballistic behaviour for small times, while tending to Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.

  2. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  3. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King's College London, WC2R 2LS London (United Kingdom); Garcia, Marcos A.G.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V., E-mail: john.ellis@cern.ch, E-mail: garciagarcia@physics.umn.edu, E-mail: dimitri@physics.tamu.edu, E-mail: olive@physics.umn.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, 77843 Texas (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m_0 = B_0 = A_0 = 0, of the CMSSM type with universal A_0 and m_0 ≠ 0 at a high scale, and of the mSUGRA type with A_0 = B_0 + m_0 boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m_{1/2} ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.

  4. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King's College London, WC2R 2LS London (United Kingdom); Theory Division, CERN, CH-1211 Geneva 23 (Switzerland); Garcia, Marcos A.G. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, 77843 Texas (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), Mitchell Campus, Woodlands, 77381 Texas (United States); Academy of Athens, Division of Natural Sciences, 28 Panepistimiou Avenue, 10679 Athens (Greece); Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m_0 = B_0 = A_0 = 0, of the CMSSM type with universal A_0 and m_0 ≠ 0 at a high scale, and of the mSUGRA type with A_0 = B_0 + m_0 boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m_{1/2} ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.

  5. Remarks on the microscopic derivation of the collective model

    International Nuclear Information System (INIS)

    Toyoda, T.; Wildermuth, K.

    1984-01-01

    The rotational part of the phenomenological collective model of Bohr, Mottelson and others is derived microscopically, starting from the Schrödinger equation written in projection form and introducing a new set of 'relative Euler angles'. In order to derive the local Schrödinger equation of the collective model, it is assumed that the intrinsic wave functions give strong peaking properties to the overlap kernels

  6. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    Science.gov (United States)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2018-01-01

    Hydrological models are extensively used in urban water management, development and evaluation of future scenarios, and research activities. There is a growing interest in the development of fully distributed, grid-based models. However, some complex questions related to scale effects are not yet fully understood and remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within the distributed data input into urban hydrological models. Then an intensive multi-scale modelling exercise is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data: patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, model numerical instabilities, and computation time requirements. The main findings of this paper support replacing traditional methods of model calibration with innovative methods of model resolution alteration based on the spatial variability of data and the scaling of flows in urban hydrology.
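
    As an example of the fractal tools used in the first step, here is a box-counting estimate of the fractal dimension of a binary raster input; a random field stands in for a real land-cover layer, and the raster side is assumed to be a power of two.

```python
import numpy as np

def fractal_dimension(mask):
    """Box-counting estimate of the fractal dimension of a square binary
    raster (e.g., an impervious-surface map input to an urban model)."""
    n = mask.shape[0]                  # side length, assumed a power of two
    sizes, counts = [], []
    size = n
    while size >= 1:
        # count boxes of side `size` containing at least one active pixel
        boxes = mask.reshape(n//size, size, n//size, size).any(axis=(1, 3))
        sizes.append(size)
        counts.append(boxes.sum())
        size //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope                      # dimension is minus the log-log slope

rng = np.random.default_rng(7)
mask = rng.random((256, 256)) < 0.3    # stand-in for a land-cover layer
print(fractal_dimension(mask))
```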

  7. Scaling considerations for modeling the in situ vitrification process

    International Nuclear Information System (INIS)

    Langerman, M.A.; MacKinnon, R.J.

    1990-09-01

    Scaling relationships for modeling the in situ vitrification waste remediation process are documented based upon similarity considerations derived from fundamental principles. Requirements for maintaining temperature and electric potential field similarity between the model and the prototype are determined as well as requirements for maintaining similarity in off-gas generation rates. A scaling rationale for designing reduced-scale experiments is presented and the results are assessed numerically. 9 refs., 6 figs

  8. The use of TOUGH2 for the LBL/USGS 3-dimensional site-scale model of Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Bodvarsson, G.; Chen, G.; Haukwa, C.; Kwicklis, E.

    1995-01-01

    The three-dimensional site-scale numerical model of the unsaturated zone at Yucca Mountain is under continuous development and calibration through a collaborative effort between Lawrence Berkeley Laboratory (LBL) and the United States Geological Survey (USGS). The site-scale model covers an area of about 30 km² and is bounded by major fault zones to the west (Solitario Canyon Fault), east (Bow Ridge Fault) and perhaps to the north by an unconfirmed fault (Yucca Wash Fault). The model consists of about 5,000 grid blocks (elements) with nearly 20,000 connections between them; the grid was designed to represent the most prevalent geological and hydro-geological features of the site, including major faults and the layering and bedding of the hydro-geological units. Submodels are used to investigate specific hypotheses and their importance before incorporation into the three-dimensional site-scale model. The primary objectives of the three-dimensional site-scale model are to: (1) quantify moisture, gas and heat flows under ambient conditions at Yucca Mountain; (2) help guide the site-characterization effort (primarily by the USGS) in terms of additional data needs, and identify regions of the mountain where sufficient data have been collected; and (3) provide a reliable model of Yucca Mountain that is validated by repeated predictions of conditions in new boreholes and the ESF, and therefore has the confidence of the public and the scientific community. The computer code TOUGH2, developed by K. Pruess at LBL, was used along with the three-dimensional site-scale model to generate these results. In this paper, we also describe the three-dimensional site-scale model, emphasizing the numerical grid development, and then show some results in terms of moisture, gas and heat flow.

  9. Nucleon electric dipole moments in high-scale supersymmetric models

    International Nuclear Information System (INIS)

    Hisano, Junji; Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi

    2015-01-01

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. It is therefore important to compare the nucleon EDMs with the electron EDM in order to discriminate among high-scale SUSY models.

  10. Nucleon electric dipole moments in high-scale supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),Nagoya University,Nagoya 464-8602 (Japan); Department of Physics, Nagoya University,Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University,Nagoya 464-8602 (Japan)

    2015-11-12

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. It is therefore important to compare the nucleon EDMs with the electron EDM in order to discriminate among high-scale SUSY models.

  11. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesosca...... of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.......This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale...

  12. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    Science.gov (United States)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time-scales spanning from the geological to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with a total time of millions of years. This technique makes it possible to follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations, and demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge, whose viscosity varies strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. We will also present results of the modeling of deformation of the
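
    The adaptive time-step logic can be sketched schematically as follows; the 40 s and 5 yr limits come from the abstract, while the instability threshold and the ramp factor are assumptions for illustration.

```python
DT_MIN = 40.0                    # s, minimum step during an earthquake (abstract)
DT_MAX = 5.0 * 3.15e7            # s, maximum step of ~5 yr (abstract)
GROWTH = 1.5                     # assumed ramp factor between the two extremes

def next_dt(dt, max_slip_rate, unstable_rate=1e-3):
    """Schematic adaptive stepping: collapse the time step when any fault
    node approaches instability (coseismic phase), then let it recover as
    postseismic slip rates decay. Threshold and ramp are assumptions."""
    if max_slip_rate > unstable_rate:     # instability recognized
        return DT_MIN
    return min(dt * GROWTH, DT_MAX)       # gradual recovery toward 5 yr

# Inside a solver loop one would call, e.g.:
#   dt = next_dt(dt, solver.max_slip_rate())   # `solver` is hypothetical
```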

  13. Scaling, soil moisture and evapotranspiration in runoff models

    Science.gov (United States)

    Wood, Eric F.

    1993-01-01

    The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many of the climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions under which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a 2nd-order linearization scheme. The performance of the algorithm is evaluated.
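
    The 2nd-order linearization can be illustrated directly: for a nonlinear evaporation law f(s) of soil moisture s, the areal mean is approximated by E[f(s)] ≈ f(s̄) + ½ f''(s̄) Var(s). The sketch below compares this correction with the naive 'effective parameter' estimate f(s̄); the flux curve and moisture field are invented for illustration.

```python
import numpy as np

def areal_average(f, s_values, h=1e-4):
    """Three estimates of the grid-average of a nonlinear flux law f(s):
    the naive 'effective parameter' value f(mean), the 2nd-order correction
    f(mean) + 0.5*f''(mean)*var, and the exact average of f over the field."""
    s_bar, var = s_values.mean(), s_values.var()
    f2 = (f(s_bar + h) - 2.0*f(s_bar) + f(s_bar - h)) / h**2   # central diff.
    return f(s_bar), f(s_bar) + 0.5*f2*var, f(s_values).mean()

# Illustrative nonlinear evaporation-efficiency curve (not the paper's form):
beta = lambda s: np.minimum(s, 0.7)**2 / 0.49
s = np.random.default_rng(5).beta(2.0, 4.0, 10000)   # sub-grid soil moisture
print(areal_average(beta, s))
```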

  14. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  15. Groundwater development stress: Global-scale indices compared to regional modeling

    Science.gov (United States)

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.

  16. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  17. [Unfolding item response model using best-worst scaling].

    Science.gov (United States)

    Ikehara, Kazuya

    2015-02-01

    In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated in terms of the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimuli among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square error of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimulus estimates generated with the proposed models and those of Usami (2011).
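
    A sketch of one plausible formulation of the best-worst unfolding response rule: preference declines with squared person-stimulus distance, and the (best, worst) pair is drawn with probability proportional to the preference ratio. This is an illustrative construction, not necessarily the exact likelihood of the paper.

```python
import numpy as np

def bw_probabilities(theta, beta, subset):
    """Choice probabilities for one best-worst task under an unfolding rule:
    the preference v_i for stimulus i decays with the squared distance between
    the person location theta and the stimulus location beta[i], and the
    (best, worst) pair (i, k) is chosen with probability proportional to
    v_i / v_k (a maxdiff-style rule, used here for illustration)."""
    v = np.exp(-(theta - beta[subset])**2)
    probs = {}
    for a, i in enumerate(subset):
        for b, k in enumerate(subset):
            if i != k:
                probs[(i, k)] = v[a] / v[b]
    z = sum(probs.values())
    return {pair: p / z for pair, p in probs.items()}

beta = np.array([-2.0, -0.5, 0.8, 2.5])   # stimulus locations on the scale
print(bw_probabilities(theta=0.3, beta=beta, subset=[0, 1, 2]))
```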

  18. Accounting for small scale heterogeneity in ecohydrologic watershed models

    Science.gov (United States)

    Burke, W.; Tague, C.

    2017-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and by comparison it results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach

  19. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  20. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  1. Global fits of GUT-scale SUSY models with GAMBIT

    Energy Technology Data Exchange (ETDEWEB)

    Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration

    2017-12-15

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)

  2. Modeling the intersections of Food, Energy, and Water in climate-vulnerable Ethiopia with an application to small-scale irrigation

    Science.gov (United States)

    Zhang, Y.; Sankaranarayanan, S.; Zaitchik, B. F.; Siddiqui, S.

    2017-12-01

    Ethiopia, but that also intersects with energy and water consumption. Here, we focus on the energy usage for small-scale irrigation and the collective impact on crop production and water resources across zones in the MME model.

  3. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects in order to become a relevant and sufficiently general model for large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate the failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
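    The core contrast between parallel and sequential updating can be seen in a few lines. The sketch below steps a small 1-D two-state totalistic CA once synchronously and once with a fixed left-to-right sequential sweep; the rule and initial configuration are invented for illustration and are not the restricted rule class analyzed in the paper (Python):

    def rule(total):
        """Totalistic rule: a node becomes 1 iff its neighborhood sum is 2."""
        return 1 if total == 2 else 0

    def step_sync(cells):
        n = len(cells)
        return [rule(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n])
                for i in range(n)]

    def step_seq(cells):
        n = len(cells)
        cells = cells[:]
        for i in range(n):          # fixed left-to-right sweep: one interleaving
            cells[i] = rule(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n])
        return cells

    start = [0, 1, 1, 0, 1, 0, 0, 1]
    print(step_sync(start))   # [1, 1, 1, 1, 0, 0, 0, 0]
    print(step_seq(start))    # [1, 0, 0, 0, 0, 0, 0, 1]: later nodes see updated neighbors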

  4. Site-scale groundwater flow modelling of Beberg

    International Nuclear Information System (INIS)

    Gylling, B.; Walker, D.; Hartley, L.

    1999-08-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant head boundary conditions from a modified version of the deterministic regional scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10^-4, and a flow-wetted surface of a_r = 1.0 m^2/(m^3 rock) suggest the following statistics for the Base Case: The median travel time is 56 years. The median canister flux is 1.2 x 10^-3 m/year. The median F-ratio is 5.6 x 10^5 year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. Variability within realisations indicates that the change in hydraulic gradient

  5. Multi-scale modeling of dispersed gas-liquid two-phase flow

    NARCIS (Netherlands)

    Deen, N.G.; Sint Annaland, van M.; Kuipers, J.A.M.

    2004-01-01

    In this work the concept of multi-scale modeling is demonstrated. The idea of this approach is to use different levels of modeling, each developed to study phenomena at a certain length scale. Information obtained at the level of small length scales can be used to provide closure information at the

  6. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep-bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient: the model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with the experimental data. Satisfactory turbulence characteristics are observed through flow visualization
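    For reference, the dynamic procedure is commonly written in Lilly's least-squares form, reproduced below as the standard statement (sign and averaging conventions vary between implementations; this is not a transcription of the GUST code). Overbars denote grid filtering, hats test filtering, α the test-to-grid filter-width ratio, and ⟨·⟩ the averaging used to stabilize the coefficient (LaTeX):

        L_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j,
        \qquad
        M_{ij} = 2\Delta^2 \left( \widehat{|\bar{S}|\,\bar{S}_{ij}} - \alpha^2 |\hat{\bar{S}}|\,\hat{\bar{S}}_{ij} \right),
        \qquad
        C_s^2 = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}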

  7. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    The collective behaviour of groups of social animals has been an active topic of study ... Models have been successful at reproducing qualitative features of ... quantitative and detailed empirical results for a range of animal systems. ... standard method [23], the redundant information recorded by the cameras can be used to.

  8. Ozone flux of an urban orange grove: multiple scaled measurements and model comparisons

    Science.gov (United States)

    Alstad, K. P.; Grulke, N. E.; Jenerette, D. G.; Schilling, S.; Marrett, K.

    2009-12-01

    There is significant uncertainty about the ozone sink properties of the phytosphere due to a complexity of interactions and feedbacks with biotic and abiotic factors. Improved understanding of the controls on ozone fluxes is critical to estimating and regulating the total ozone budget. Ozone exchanges of an orange orchard within the city of Riverside, CA were examined using a multiple-scaled approach. We assess the carbon, water, and energy budgets at the stand- to leaf-level to elucidate the mechanisms controlling the variability in ozone fluxes of this agro-ecosystem. The two initial goals of the study were: 1. to consider variations in and controls on the ozone fluxes within the canopy; and 2. to examine different modeling and scaling approaches for totaling the ozone fluxes of this orchard. Current understanding of the total ozone flux between the near-ground atmosphere and the phytosphere (F-total) includes consideration of a fraction which is absorbed by vegetation through stomatal uptake (F-absorb), and fractional components of deposition on external, non-stomatal surfaces of the vegetation (F-external) and soil (F-soil). Multiplicative stomatal-conductance models have commonly been used to estimate F-absorb, since this flux cannot be measured directly. We approach F-absorb estimates for this orange orchard using chamber measurements of leaf stomatal conductance, as well as non-chamber sap-conductance measurements collected on branches of varied aspect and sun/shade conditions within the canopy. We use two approaches to measure the F-total of this stand. Gradient flux profiles were measured using slow-response ozone sensors collecting within and above the canopy (4.6 m), and at the top of the tower (8.5 m). In addition, an eddy-covariance system fitted with a high-frequency chemiluminescence ozone system will be deployed (8.5 m). Preliminary ozone gradient flux profiles demonstrate a substantial ozone sink strength of this orchard, with diurnal concentration differentials

  9. Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model

    DEFF Research Database (Denmark)

    Olivares Hernandez, Roberto

    Based on stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify the metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae the genome-scale model has been constructed

  10. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in quantitative remote sensing. The scale effect can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals across an entire series of scales; meanwhile, they face serious parameter-correction issues because of the variation of imaging parameters between different sensors, such as geometrical correction, spectral correction, etc. Utilizing a single-sensor image, fractal methodology was employed to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; and (2) the fractal method is suitable for the validation of NDVI. All of this proves that fractal methodology is effective for studying the scaling of quantitative remote sensing.
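    A minimal numerical sketch of continuous-scaling analysis: aggregate an NDVI field over a series of block sizes and fit a power law to a scale-dependent statistic in log-log space. The synthetic field and the choice of statistic are assumptions for illustration, not the paper's fractal model (Python):

    import numpy as np

    def block_mean(field, k):
        """Upscale a square 2-D field by k x k block averaging."""
        n = (field.shape[0] // k) * k
        f = field[:n, :n]
        return f.reshape(n // k, k, n // k, k).mean(axis=(1, 3))

    rng = np.random.default_rng(0)
    ndvi = rng.uniform(0.1, 0.9, size=(256, 256))   # stand-in for an ETM+ NDVI scene

    scales = [1, 2, 4, 8, 16, 32]
    sigmas = [block_mean(ndvi, k).std() for k in scales]

    # Fit log(sigma) = log(c) + H * log(k): a power-law description of how
    # the statistic changes continuously with aggregation scale.
    H, logc = np.polyfit(np.log(scales), np.log(sigmas), 1)
    print(f"scaling exponent H = {H:.3f}")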

  11. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary-layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  12. Multi-Scale Modelling of Deformation and Fracture in a Biomimetic Apatite-Protein Composite: Molecular-Scale Processes Lead to Resilience at the μm-Scale.

    Directory of Open Access Journals (Sweden)

    Dirk Zahn

    Fracture mechanisms of an enamel-like hydroxyapatite-collagen composite model are elaborated by means of molecular and coarse-grained dynamics simulation. Using fully atomistic models, we uncover molecular-scale plastic deformation and fracture processes initiated at the organic-inorganic interface. Furthermore, coarse-grained models are developed to investigate fracture patterns at the μm-scale. At the meso-scale, micro-fractures are shown to reduce local stress and thus prevent material failure after loading beyond the elastic limit. On the basis of our multi-scale simulation approach, we provide a molecular scale rationalization of this phenomenon, which seems key to the resilience of hierarchical biominerals, including teeth and bone.

  13. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, a design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the Structural Simulation Toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
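    The discrete-event core itself is conceptually small. The sketch below is a generic time-ordered event queue driving handler callbacks; it is illustrative of the simulation style, not the SST API (Python):

    import heapq

    class Simulator:
        def __init__(self):
            self.now = 0.0
            self._queue = []      # heap of (time, seq, handler, payload)
            self._seq = 0         # tie-breaker keeps ordering deterministic

        def schedule(self, delay, handler, payload=None):
            heapq.heappush(self._queue, (self.now + delay, self._seq, handler, payload))
            self._seq += 1

        def run(self, until=float("inf")):
            while self._queue and self._queue[0][0] <= until:
                self.now, _, handler, payload = heapq.heappop(self._queue)
                handler(self, payload)

    def send(sim, payload):
        print(f"t={sim.now:.1f}: message {payload} delivered")
        if payload < 3:                  # model a fixed network latency
            sim.schedule(1.5, send, payload + 1)

    sim = Simulator()
    sim.schedule(0.0, send, 0)
    sim.run()                            # delivers messages at t = 0, 1.5, 3.0, 4.5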

  14. Multi-scale modeling in morphogenesis: a critical analysis of the cellular Potts model.

    Directory of Open Access Journals (Sweden)

    Anja Voss-Böhme

    Cellular Potts models (CPMs) are used as a modeling framework to elucidate mechanisms of biological development. They allow a spatial resolution below the cellular scale and are applied particularly when problems are studied where multiple spatial and temporal scales are involved. Despite the increasing usage of CPMs in theoretical biology, this model class has received little attention from mathematical theory. To narrow this gap, the CPMs are subjected to a theoretical study here. It is asked to what extent the updating rules establish an appropriate dynamical model of intercellular interactions and what characterizes the principal behavior at different time scales. It is shown that the long-time behavior of a CPM is degenerate in the sense that the cells consecutively die out, independent of the specific interdependence structure that characterizes the model. While CPMs are naturally defined on finite, spatially bounded lattices, possible extensions to spatially unbounded systems are explored to assess to what extent spatio-temporal limit procedures can be applied to describe the emergent behavior at the tissue scale. To elucidate the mechanistic structure of CPMs, the model class is integrated into a general multiscale framework. It is shown that the central role of the surface fluctuations, which subsume several cellular and intercellular factors, entails substantial limitations for a CPM's exploitation both as a mechanistic and as a phenomenological model.
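    For readers unfamiliar with the model class, the sketch below shows the elementary CPM ingredient discussed above: a single Metropolis copy attempt on a small lattice with an adhesion term and a volume constraint. Lattice size, parameters, and the two-cell setup are invented for illustration (Python):

    import math, random

    # Adhesion energy J across unlike neighbors plus a quadratic volume
    # constraint; one Metropolis copy attempt per call. Parameters invented.
    J, LAMBDA, T = 2.0, 1.0, 4.0
    N = 6
    V_TARGET = (N * N) // 2                 # two cells share the lattice
    lattice = [[1 if i < N // 2 else 2 for j in range(N)] for i in range(N)]

    def volume(sigma):
        return sum(row.count(sigma) for row in lattice)

    def local_energy(i, j, sigma):
        """Adhesion energy if site (i, j) held cell id sigma."""
        e = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if lattice[(i + di) % N][(j + dj) % N] != sigma:
                e += J
        return e

    def copy_attempt():
        i, j = random.randrange(N), random.randrange(N)
        di, dj = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        src = lattice[(i + di) % N][(j + dj) % N]   # neighbor tries to copy in
        dst = lattice[i][j]
        if src == dst:
            return
        dE = local_energy(i, j, src) - local_energy(i, j, dst)
        for s, dv in ((src, +1), (dst, -1)):        # volume-constraint change
            v = volume(s)
            dE += LAMBDA * ((v + dv - V_TARGET) ** 2 - (v - V_TARGET) ** 2)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            lattice[i][j] = src

    for _ in range(500):
        copy_attempt()
    print("cell volumes:", volume(1), volume(2))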

  15. Multiphysics pore-scale model for the rehydration of porous foods

    NARCIS (Netherlands)

    Sman, van der R.G.M.; Vergeldt, F.J.; As, van H.; Dalen, van G.; Voda, A.; Duynhoven, van J.P.M.

    2014-01-01

    In this paper we present a pore-scale model describing the multiphysics occurring during the rehydration of freeze-dried vegetables. This pore-scale model is part of a multiscale simulation model, which should explain the effect of microstructure and pre-treatments on the rehydration rate.

  16. Development of a Corrosion Potential Measuring System Based on the Generalization of DACS Physical Scale Modeling

    Directory of Open Access Journals (Sweden)

    Song Dalei

    2015-01-01

    A feasible method for evaluating the protection effect and corrosion state of marine cathodic protection (CP) systems is to collect sufficient electric potential data around a submarine pipeline and then establish the mapping relations between these data and the corrosion states of pipelines. However, it is difficult for scientists and researchers to obtain those data accurately due to the harsh marine environment and the absence of a dedicated potential measurement device. In this paper, to alleviate these two problems, firstly, the theory of dimension and conductivity scaling (DACS) physical scale modeling of marine impressed current cathodic protection (ICCP) systems is generalized to marine CP systems; secondly, a potential measurement device is developed and an analogue experiment is designed according to DACS physical scale modeling to verify the feasibility of the measuring system. The experimental results show that 92 percent of the measurement errors are less than 0.25 mV, thereby providing an economical and feasible measuring system for obtaining electric potential data around an actual submarine pipeline under CP.

  17. Calibration of the Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    Zyvoloski, G. A.

    2001-01-01

    The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan "Calibration of the Site-Scale Saturated Zone Flow Model" (CRWMS M and O 1999a)

  18. On effective temperature in network models of collective behavior

    International Nuclear Information System (INIS)

    Porfiri, Maurizio; Ariel, Gil

    2016-01-01

    Collective behavior of self-propelled units is studied analytically within the Vectorial Network Model (VNM), a mean-field approximation of the well-known Vicsek model. We propose a dynamical systems framework to study the stochastic dynamics of the VNM in the presence of general additive noise. We establish that a single parameter, which is a linear function of the circular mean of the noise, controls the macroscopic phase of the system—ordered or disordered. By establishing a fluctuation–dissipation relation, we posit that this parameter can be regarded as an effective temperature of collective behavior. The exact critical temperature is obtained analytically for systems with small connectivity, equivalent to low-density ensembles of self-propelled units. Numerical simulations are conducted to demonstrate the applicability of this new notion of effective temperature to the Vicsek model. The identification of an effective temperature of collective behavior is an important step toward understanding order–disorder phase transitions, informing consistent coarse-graining techniques and explaining the physics underlying the emergence of collective phenomena.
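    As a concrete point of comparison, the sketch below runs a minimal Vicsek-type particle simulation and reports the polar order parameter at several noise levels; as the effective-temperature picture suggests, order decays as noise grows. All parameter values are invented for illustration, and this is the particle model rather than the VNM mean-field approximation (Python):

    import numpy as np

    def vicsek_order(eta, n=200, L=10.0, r=1.0, v=0.05, steps=200, seed=0):
        """Polar order parameter of a minimal Vicsek simulation."""
        rng = np.random.default_rng(seed)
        pos = rng.uniform(0, L, (n, 2))
        theta = rng.uniform(-np.pi, np.pi, n)
        for _ in range(steps):
            dx = pos[:, None, :] - pos[None, :, :]
            dx -= L * np.round(dx / L)                 # periodic boundaries
            neigh = (dx ** 2).sum(-1) < r ** 2         # neighbor mask, includes self
            s = (neigh * np.sin(theta)[None, :]).sum(1)
            c = (neigh * np.cos(theta)[None, :]).sum(1)
            theta = np.arctan2(s, c) + rng.uniform(-eta / 2, eta / 2, n)
            pos = (pos + v * np.stack((np.cos(theta), np.sin(theta)), 1)) % L
        return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

    for eta in (0.5, 2.0, 5.0):
        print(f"eta = {eta:.1f}  order = {vicsek_order(eta):.2f}")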

  19. Final report of the TRUE Block Scale project. 1. Characterisation and model development

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Peter; Byegaard, Johan [Geosigma AB, Uppsala (Sweden); Dershowitz, Bill; Doe, Thomas [Golder Associates Inc., Redmond, WA (United States); Hermanson, Jan [Golder Associates AB (Sweden); Meier, Peter [ANDRA, Chatenay-Malabry (France); Tullborg, Eva-Lena [Terralogica AB (Sweden); Winberg, Anders (ed.) [Conterra AB, Partille (Sweden)

    2002-04-01

    The general objectives of the TRUE Block Scale Project were to 1) increase understanding of tracer transport in a fracture network and to improve predictive capabilities, 2) assess the importance of tracer retention mechanisms (diffusion and sorption) in a fracture network, and 3) assess the link between flow and transport data as a means for predicting transport phenomena. During the period mid 1996 through mid 1999 a 200x250x100 m rock volume was characterised with the purpose of furnishing the basis for successful tracer experiments in a network of conductive structures in the block scale (10-100 m). In total five cored boreholes were drilled as part of the project in an iterative mode with a period of analysis following completion of characterisation, and with a strong component of interactivity with numerical modelling and experimental design, particularly towards the end of the characterisation. The combined use of pressure responses due to drilling and drilling records provided important early information/confirmation of the existence and location of a given structure. Verification of conductors identified from pressure responses was achieved through the use of various flow logging techniques. The usage of the Posiva difference flow log towards the end of the characterisation work enabled identification of discrete conductive fractures with a high resolution. Pressure responses collected during drilling were used to obtain a first assessment of connectivity between boreholes. The transient behaviour of the responses collected during cross-hole interference tests in packed-off boreholes were used to identify families of responses, which correlated well with the identified principal families of structures/fracture networks. The conductive geometry of the investigated rock block is made up of steeply dipping deterministic NW structures and NNW structures. High inflows in the boreholes were for the most part associated with geologically/geometrically identified

  20. Final report of the TRUE Block Scale project. 1. Characterisation and model development

    International Nuclear Information System (INIS)

    Andersson, Peter; Byegaard, Johan; Dershowitz, Bill; Doe, Thomas; Hermanson, Jan; Meier, Peter; Tullborg, Eva-Lena; Winberg, Anders

    2002-04-01

    The general objectives of the TRUE Block Scale Project were to 1) increase understanding of tracer transport in a fracture network and to improve predictive capabilities, 2) assess the importance of tracer retention mechanisms (diffusion and sorption) in a fracture network, and 3) assess the link between flow and transport data as a means for predicting transport phenomena. During the period mid 1996 through mid 1999 a 200x250x100 m rock volume was characterised with the purpose of furnishing the basis for successful tracer experiments in a network of conductive structures in the block scale (10-100 m). In total five cored boreholes were drilled as part of the project in an iterative mode with a period of analysis following completion of characterisation, and with a strong component of interactivity with numerical modelling and experimental design, particularly towards the end of the characterisation. The combined use of pressure responses due to drilling and drilling records provided important early information/confirmation of the existence and location of a given structure. Verification of conductors identified from pressure responses was achieved through the use of various flow logging techniques. The usage of the Posiva difference flow log towards the end of the characterisation work enabled identification of discrete conductive fractures with a high resolution. Pressure responses collected during drilling were used to obtain a first assessment of connectivity between boreholes. The transient behaviour of the responses collected during cross-hole interference tests in packed-off boreholes were used to identify families of responses, which correlated well with the identified principal families of structures/fracture networks. The conductive geometry of the investigated rock block is made up of steeply dipping deterministic NW structures and NNW structures. High inflows in the boreholes were for the most part associated with geologically/geometrically identified

  1. A high-resolution global-scale groundwater model

    Science.gov (United States)

    de Graaf, I. E. M.; Sutanudjaja, E. H.; van Beek, L. P. H.; Bierkens, M. F. P.

    2015-02-01

    Groundwater is the world's largest accessible source of fresh water. It plays a vital role in satisfying basic needs for drinking water, agriculture and industrial activities. During times of drought, groundwater sustains baseflow to rivers and wetlands, thereby supporting ecosystems. Most global-scale hydrological models (GHMs) do not include a groundwater flow component, mainly due to a lack of geohydrological data at the global scale. For the simulation of lateral flow and groundwater head dynamics, a realistic physical representation of the groundwater system is needed, especially for GHMs that run at finer resolutions. In this study we present a global-scale groundwater model (run at 6' resolution) using MODFLOW to construct an equilibrium water table at its natural state as the result of long-term climatic forcing. The aquifer schematization and properties used are based on available global data sets of lithology and transmissivities combined with the estimated thickness of an upper, unconfined aquifer. This model is forced with outputs from the land-surface PCRaster Global Water Balance (PCR-GLOBWB) model, specifically net recharge and surface water levels. A sensitivity analysis, in which the model was run with various parameter settings, showed that variation in saturated conductivity has the largest impact on the simulated groundwater levels. Validation with observed groundwater heads showed that groundwater heads are reasonably well simulated for many regions of the world, especially for sediment basins (R2 = 0.95). The simulated regional-scale groundwater patterns and flow paths demonstrate the relevance of lateral groundwater flow in GHMs. Inter-basin groundwater flows can be a significant part of a basin's water budget and help to sustain river baseflows, especially during droughts. Also, the water availability of larger aquifer systems can be positively affected by additional recharge from inter-basin groundwater flows.
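    The equilibrium-water-table idea can be caricatured in a few lines: a steady-state head field solving T∇²h = −R with fixed-head boundaries, here by Jacobi iteration on a toy grid. This is a conceptual stand-in with invented parameters, not the MODFLOW setup of the study (Python):

    import numpy as np

    # Steady-state heads for T * laplacian(h) = -R on a unit-spaced grid with
    # fixed head h = 0 on all boundaries (e.g., rivers). Values are invented.
    nx, ny = 50, 50
    T = 100.0        # uniform transmissivity, m^2/day
    R = 0.001        # net recharge, m/day
    h = np.zeros((ny, nx))

    for _ in range(5000):   # Jacobi sweep: average of neighbors plus source term
        h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                h[1:-1, :-2] + h[1:-1, 2:] + R / T)

    print(f"max simulated water-table mound: {h.max():.3f} m")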

  2. A Pareto scale-inflated outlier model and its Bayesian analysis

    OpenAIRE

    Scollnik, David P. M.

    2016-01-01

    This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three wor...
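    The contamination idea can be written down directly: a two-component mixture in which a small fraction of observations comes from a Pareto distribution with the same shape but an inflated scale. The sketch below evaluates such a mixture density; the parameterization is one plausible reading of the abstract, and the paper's Gibbs sampler is not reproduced (Python):

    import numpy as np

    def pareto_pdf(x, alpha, beta):
        """Pareto(alpha, beta) density with minimum beta and shape alpha."""
        x = np.asarray(x, dtype=float)
        return np.where(x >= beta, alpha * beta**alpha / x**(alpha + 1), 0.0)

    def mixture_pdf(x, alpha, beta, k, p):
        """With probability p the scale is inflated by factor k (outliers)."""
        return (1 - p) * pareto_pdf(x, alpha, beta) + p * pareto_pdf(x, alpha, k * beta)

    x = np.linspace(1.0, 20.0, 5)
    print(mixture_pdf(x, alpha=2.0, beta=1.0, k=5.0, p=0.05))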

  3. A Lagrangian dynamic subgrid-scale model turbulence

    Science.gov (United States)

    Meneveau, C.; Lund, T. S.; Cabot, W.

    1994-01-01

    A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.

  4. Two-scale modelling for hydro-mechanical damage

    International Nuclear Information System (INIS)

    Frey, J.; Chambon, R.; Dascalu, C.

    2010-01-01

    Document available in extended abstract form only. Excavation works for underground storage create a damage zone in the nearby rock and affect its hydraulic properties. This degradation, already observed in laboratory tests, can create a preferential path for fluids. The micro-fracture phenomena, which occur at a smaller scale and affect the rock permeability, must be fully understood to minimize the transfer process. Many methods can be used to take into account the microstructure of heterogeneous materials. Among them, a method has been developed recently: instead of using a constitutive equation obtained by phenomenological considerations or by some homogenization technique, the representative elementary volume (REV) is modelled as a structure, and the links between a prescribed kinematics and the corresponding dual forces are deduced numerically. This yields the so-called Finite Element squared (FE2) method. From a numerical point of view, a finite element model is used at the macroscopic level and, for each Gauss point, computations on the microstructure give the usual results of a constitutive law. This numerical approach is now classical for properly modelling materials such as composites; the efficiency of such a numerical homogenization process has been shown, and it allows numerical modelling of deformation processes associated with various micro-structural changes. The aim of this work is to describe, through such a method, rock damage with a two-scale hydro-mechanical model. Rock damage at the macroscopic scale is directly linked with an analysis of the microstructure. At the macroscopic scale, a two-phase problem is studied: a solid skeleton is filled by a filtrating fluid. It is necessary to enforce two balance equations and two mass conservation equations. A classical way to deal with such a problem is to work with the balance equation of the whole mixture, and the fluid mass conservation written in a weak form, the mass

  5. BLEVE overpressure: multi-scale comparison of blast wave modeling

    International Nuclear Information System (INIS)

    Laboureur, D.; Buchlin, J.M.; Rambaud, P.; Heymes, F.; Lapebie, E.

    2014-01-01

    BLEVE overpressure modeling has already been widely studied, but only a few validations including the scale effect have been made. After a short overview of the main models available in the literature, a comparison is made with measurements at different scales, taken from previous studies or coming from experiments performed in the frame of this research project. A discussion of the best model to use in different cases is finally proposed. (authors)

  6. New time scale based k-epsilon model for near-wall turbulence

    Science.gov (United States)

    Yang, Z.; Shih, T. H.

    1993-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = k^(1/2) y / ν instead of y^+. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model. Thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
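    In symbols, one plausible reading of the construction described above is the following; the coefficients and the exact form of the lower bound are model-specific, and some formulations add the Kolmogorov time scale to k/ε rather than taking a maximum (LaTeX):

        \nu_t = C_\mu\, f_\mu(R_y)\, k\, T,
        \qquad
        T = \max\!\left( \frac{k}{\varepsilon},\; C_T \sqrt{\frac{\nu}{\varepsilon}} \right),
        \qquad
        R_y = \frac{\sqrt{k}\, y}{\nu}

    where √(ν/ε) is the Kolmogorov time scale that bounds T from below near the wall.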

  7. Data, data everywhere: detecting spatial patterns in fine-scale ecological information collected across a continent

    Science.gov (United States)

    Kevin M. Potter; Frank H. Koch; Christopher M. Oswalt; Basil V. Iannone

    2016-01-01

    Context: Fine-scale ecological data collected across broad regions are becoming increasingly available. Appropriate geographic analyses of these data can help identify locations of ecological concern. Objectives: We present one such approach, spatial association of scalable hexagons (SASH), which identifies locations where ecological phenomena occur at greater...

  8. Development and testing of watershed-scale models for poorly drained soils

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2005-01-01

    Watershed-scale hydrology and water quality models were used to evaluate the cumulative impacts of land use and management practices on downstream hydrology and nitrogen loading of poorly drained watersheds. Field-scale hydrology and nutrient dynamics are predicted by DRAINMOD in both models. In the first model (DRAINMOD-DUFLOW), field-scale predictions are coupled...

  9. Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests

    Science.gov (United States)

    Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.

    2012-01-01

    Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating the more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance. Complete crash simulations with validated finite element models can be used
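    The numerical-optimization ingredient of such a calibration can be illustrated on a toy problem: fitting two parameters of an impact-pulse model to noisy "measured" accelerations by nonlinear least squares. The pulse model, parameter values, and noise level are invented; the actual study calibrates a full finite element model against 19 accelerometer channels (Python):

    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0.0, 0.2, 200)

    def pulse(params, t):
        """Toy impact pulse: a * t * exp(-b * t), parameters hypothetical."""
        a, b = params
        return a * t * np.exp(-b * t)

    true = np.array([50.0, 30.0])
    measured = pulse(true, t) + 0.02 * np.random.default_rng(1).normal(size=t.size)

    # Minimize the residual between model and "measurement" over (a, b).
    fit = least_squares(lambda p: pulse(p, t) - measured, x0=[10.0, 10.0])
    print("calibrated (a, b):", fit.x)   # recovers values close to (50, 30)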

  10. Coulomb-gas scaling, superfluid films, and the XY model

    International Nuclear Information System (INIS)

    Minnhagen, P.; Nylen, M.

    1985-01-01

    Coulomb-gas-scaling ideas are invoked as a link between the superfluid density of two-dimensional ⁴He films and the XY model; the Coulomb-gas-scaling function epsilon(X) is extracted from experiments and is compared with Monte Carlo simulations of the XY model. The agreement is found to be excellent

  11. Considering the spatial-scale factor when modelling sustainable land management.

    Science.gov (United States)

    Bouma, Johan

    2015-04-01

    landscape and watershed scales (1:25,000-1:50,000), digital soil mapping can provide soil data for small grids that can be used for modeling, again through pedotransfer functions. There is a risk, however, that digital mapping results in an isolated series of projects that don't increase the knowledge base on soil functionality, e.g. linking taxonomic names (such as soil series) to functionality, allowing predictions of soil behavior at new sites where certain soil series occur. We therefore suggest that aside from collecting 13 soil characteristics for each grid, as occurs in digital soil mapping, the taxonomic name of the representative soil in the grid should also be recorded. At spatial scales of 1:50,000 and smaller, use of taxonomic names becomes ever more attractive because at such small scales relations between soil types and landscape features become more pronounced. But in all cases, the selection of procedures should not be science-based but based on the type of questions being asked, including their level of generalization. These questions are quite different at the different spatial-scale levels, and so should be the procedures.

  12. Model of cosmology and particle physics at an intermediate scale

    International Nuclear Information System (INIS)

    Bastero-Gil, M.; Di Clemente, V.; King, S. F.

    2005-01-01

    We propose a model of cosmology and particle physics in which all relevant scales arise in a natural way from an intermediate string scale. We are led to assign the string scale to the intermediate scale M_* ∼ 10^13 GeV by four independent pieces of physics: electroweak symmetry breaking; the μ parameter; the axion scale; and the neutrino mass scale. The model involves hybrid inflation with the waterfall field N being responsible for generating the μ term, the right-handed neutrino mass scale, and the Peccei-Quinn symmetry breaking scale. The large scale structure of the Universe is generated by the lightest right-handed sneutrino playing the role of a coupled curvaton. We show that the correct curvature perturbations may be successfully generated providing the lightest right-handed neutrino is weakly coupled in the seesaw mechanism, consistent with sequential dominance

  13. Development of a multicriteria assessment model for ranking biomass feedstock collection and transportation systems.

    Science.gov (United States)

    Kumar, Amit; Sokhansanj, Shahab; Flynn, Peter C

    2006-01-01

    This study details a multicriteria assessment methodology that integrates economic, social, environmental, and technical factors in order to rank alternatives for biomass collection and transportation systems. Ranking of biomass collection systems is based on the cost of delivered biomass, the quality of biomass supplied, emissions during collection, energy input to the chain operations, and the maturity of supply system technologies. The assessment methodology is used to evaluate alternatives for collecting 1.8 x 10^6 dry t/yr based on assumptions made about the performance of various assemblies of biomass collection systems. A proposed collection option using a loafer/stacker was shown to be the best option, followed by ensiling and baling. Ranking of biomass transport systems is based on the cost of biomass transport, emissions during transport, traffic congestion, and the maturity of different technologies. At a capacity of 4 x 10^6 dry t/yr, rail transport was shown to be the best option, followed by truck transport and pipeline transport, respectively. These rankings depend highly on the assumed maturity of technologies and the scale of utilization. They may change if technologies such as loafing or ensiling (wet storage) methods prove to be infeasible for large-scale collection systems.
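    The ranking step reduces to a weighted aggregation once criteria are scored. The sketch below ranks three collection options by a weighted sum; the weights and normalized scores are invented placeholders, not the study's assessed values (Python):

    # Weights sum to 1; scores are normalized to [0, 1], higher = better.
    weights = {"cost": 0.35, "quality": 0.20, "emissions": 0.15,
               "energy": 0.15, "maturity": 0.15}

    options = {
        "loafing":  {"cost": 0.9, "quality": 0.7, "emissions": 0.8, "energy": 0.8, "maturity": 0.6},
        "ensiling": {"cost": 0.7, "quality": 0.8, "emissions": 0.7, "energy": 0.6, "maturity": 0.7},
        "baling":   {"cost": 0.6, "quality": 0.6, "emissions": 0.6, "energy": 0.7, "maturity": 0.9},
    }

    ranked = sorted(options,
                    key=lambda o: sum(weights[c] * options[o][c] for c in weights),
                    reverse=True)
    print(ranked)   # with these made-up scores: ['loafing', 'ensiling', 'baling']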

  14. Geometrical scaling vs factorizable eikonal models

    CERN Document Server

    Kiang, D

    1975-01-01

    Among various theoretical explanations or interpretations for the experimental data on the differential cross-sections of elastic proton-proton scattering at the CERN ISR, the following two seem to be most remarkable: A) the excellent agreement of the Chou-Yang model prediction of dσ/dt with data at √s = 53 GeV; B) the general manifestation of geometrical scaling (GS). The paper confronts GS with eikonal models with factorizable opaqueness, with special emphasis on the Chou-Yang model. (12 refs).

  15. Multi-scale modeling of inter-granular fracture in UO2

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tonks, Michael R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, S. Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    A hierarchical multi-scale approach is pursued in this work to investigate the influence of porosity and pore and grain size on intergranular brittle fracture in UO2. In this approach, molecular dynamics simulations are performed to obtain the fracture properties for different grain boundary types. A phase-field model is then utilized to perform intergranular fracture simulations of representative microstructures with different porosities, pore sizes and grain sizes. In these simulations, the grain boundary fracture properties obtained from molecular dynamics simulations are used. The responses from the phase-field fracture simulations are then fitted with a stress-based brittle fracture model usable at the engineering scale. This approach encapsulates three different length and time scales, and allows the development of a microstructurally informed engineering-scale model from properties evaluated at the atomistic scale.

  16. Scale dependence of effective media properties

    International Nuclear Information System (INIS)

    Tidwell, V.C.; VonDoemming, J.D.; Martinez, K.

    1992-01-01

    For problems where media properties are measured at one scale and applied at another, scaling laws or models must be used in order to define effective properties at the scale of interest. The accuracy of such models will play a critical role in predicting flow and transport through the Yucca Mountain Test Site, given the sensitivity of these calculations to the input property fields. Therefore, a research program has been established to gain a fundamental understanding of how properties scale, with the aim of developing and testing models that describe scaling behavior in a quantitative manner. Scaling of constitutive rock properties is investigated through physical experimentation involving the collection of suites of gas permeability data measured over a range of discrete scales. Also, various physical characteristics of property heterogeneity, and the means by which the heterogeneity is measured and described, are systematically investigated to evaluate their influence on scaling behavior. This paper summarizes the approach that is being taken toward this goal and presents the results of a scoping study that was conducted to evaluate the feasibility of the proposed research

  17. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation for approaching European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also enable the detection of shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  18. Classically scale-invariant B–L model and conformal gravity

    International Nuclear Information System (INIS)

    Oda, Ichiro

    2013-01-01

    We consider a coupling of conformal gravity to the classically scale-invariant B–L extended standard model which has been recently proposed as a phenomenologically viable model realizing the Coleman–Weinberg mechanism of breakdown of the electroweak symmetry. As in a globally scale-invariant dilaton gravity, it is also shown in a locally scale-invariant conformal gravity that without recourse to the Coleman–Weinberg mechanism, the B–L gauge symmetry is broken in the process of spontaneous symmetry breakdown of the local scale invariance (Weyl invariance) at the tree level and as a result the B–L gauge field becomes massive via the Higgs mechanism. As a bonus of conformal gravity, the massless dilaton field does not appear and the parameters in front of the non-minimal coupling of gravity are completely fixed in the present model. This observation clearly shows that the conformal gravity has a practical application even if the scalar field does not possess any dynamical degree of freedom owing to the local scale symmetry

  19. Scaling limit for the Dereziński-Gérard model

    OpenAIRE

    OHKUBO, Atsushi

    2010-01-01

    We consider a scaling limit for the Dereziński-Gérard model. We derive an effective potential by taking a scaling limit for the total Hamiltonian of the Dereziński-Gérard model. Our method to derive an effective potential is independent of whether or not the quantum field has a nonnegative mass. As an application of our theory developed in the present paper, we derive an effective potential of the Nelson model.

  20. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    The LISREL program was used to examine measurement model assumptions and to assess the reliability of the Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third- through sixth-graders from over 70 schools in a large urban school district were used. The LISREL program assessed (1) the nature of the basic measurement model for the scale, (2) scale invariance across…
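
    For context, once such a measurement model is estimated, scale reliability is typically computed from the factor loadings λ_i and error variances θ_ii in the standard congeneric form below (McDonald's ω); the record itself does not quote the formula:

    ```latex
    x_i = \lambda_i\,\xi + \delta_i, \qquad
    \omega = \frac{\left(\sum_i \lambda_i\right)^2}
                  {\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_{ii}}
    ```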

  1. Site-scale groundwater flow modelling of Beberg

    Energy Technology Data Exchange (ETDEWEB)

    Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden); Walker, D. [Duke Engineering and Services (United States); Hartley, L. [AEA Technology, Harwell (United Kingdom)

    1999-08-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant head boundary conditions from a modified version of the deterministic regional scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10^-4, and a flow-wetted surface of a_r = 1.0 m^2/(m^3 rock) suggest the following statistics for the Base Case: The median travel time is 56 years. The median canister flux is 1.2 x 10^-3 m/year. The median F-ratio is 5.6 x 10^5 year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. Variability within realisations indicates

  2. Optogenetic stimulation of a meso-scale human cortical model

    Science.gov (United States)

    Selvaraj, Prashanth; Szeri, Andrew; Sleigh, Jamie; Kirsch, Heidi

    2015-03-01

    Neurological phenomena like sleep and seizures depend not only on the activity of individual neurons, but on the dynamics of neuron populations as well. Meso-scale models of cortical activity provide a means to study neural dynamics at the level of neuron populations. Additionally, they offer a safe and economical way to test the effects and efficacy of stimulation techniques on the dynamics of the cortex. Here, we use a physiologically relevant meso-scale model of the cortex to study the hypersynchronous activity of neuron populations during epileptic seizures. The model consists of a set of stochastic, highly non-linear partial differential equations. Next, we use optogenetic stimulation to control seizures in a hyperexcited cortex, and to induce seizures in a normally functioning cortex. The high spatial and temporal resolution this method offers makes a strong case for the use of optogenetics in treating meso-scale cortical disorders such as epileptic seizures. We use bifurcation analysis to investigate the effect of optogenetic stimulation in the meso-scale model, and its efficacy in suppressing the non-linear dynamics of seizures.
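
    The record does not give the model equations; purely as an illustration of how such stochastic equations are advanced in time, here is a minimal Euler-Maruyama sketch (toy pointwise drift and noise terms, no spatial coupling; the actual cortical model is far richer):

    ```python
    import numpy as np

    def euler_maruyama(f, g, u0, dt, n_steps, rng):
        """Integrate du = f(u) dt + g(u) dW pointwise on a 1-D grid."""
        u = u0.copy()
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), size=u.shape)  # Wiener increments
            u += f(u) * dt + g(u) * dW
        return u

    rng = np.random.default_rng(0)
    # Toy bistable drift standing in for the (much richer) cortical dynamics
    u = euler_maruyama(lambda u: u - u**3, lambda u: 0.1,
                       np.zeros(128), dt=1e-3, n_steps=10_000, rng=rng)
    print(u.mean(), u.std())
    ```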

  3. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the "Technical Work Plan for: Performance Assessment Unsaturated Zone" (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, "Coupled Effects on Flow and Seepage". The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, "Models". This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC Model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC

  4. Evaluation of a distributed catchment scale water balance model

    Science.gov (United States)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil-driven and atmosphere-driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well; a linear relationship between a topographic index and the local water table depth is therefore a reasonable assumption for catchment-scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone, and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
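
    The topographic-index relationship being tested is usually written in the TOPMODEL form (one common variant shown; symbols follow the general literature rather than this paper):

    ```latex
    TI_i = \ln\!\left(\frac{a_i}{\tan\beta_i}\right), \qquad
    z_i = \bar{z} - m\left(TI_i - \overline{TI}\right)
    ```

    Here a_i is the upslope contributing area per unit contour length, tan β_i the local slope, z_i the local water table depth, z̄ its catchment average, and m a scaling parameter.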

  5. A Model Study of Small-Scale World Map Generalization

    Science.gov (United States)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, interest in physical geography and human economics is growing in every field, and there is a surging worldwide demand for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale map production at the global level, is therefore a key problem for the cartographic field. In light of this, this paper adopts an improved generalization model in which map and data are separated, so that geographic data can be decoupled from mapping data; its main components are a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols of the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms and over 2500 related functional modules. To evaluate the accuracy and visual effect of our model for topographic and thematic maps, we take small-scale world map generalization as an example. After the generalization process, combining and simplifying the scattered islands makes the map more explicit at 1:2.1 billion scale, and the map features more complete and accurate. The model not only enhances map generalization at various scales significantly, but also achieves integration among map products at various scales, suggesting that it provides a reference for cartographic generalization across scales.

  6. Collecting verbal autopsies: improving and streamlining data collection processes using electronic tablets.

    Science.gov (United States)

    Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas

    2018-02-01

    There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, this was less than 2 days. For small-scale surveys, we found that the upfront costs of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.
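
    The cost crossover the authors describe can be illustrated with a simple break-even calculation; all figures below are hypothetical placeholders, not values from the study:

    ```python
    # Break-even between paper-and-pencil and tablet-based verbal autopsy.
    TABLET_UNIT_COST = 300.0   # hypothetical upfront cost per device (USD)
    N_TABLETS = 20
    PAPER_ENTRY_COST = 2.50    # hypothetical per-survey data-entry cost (USD)
    TABLET_ENTRY_COST = 0.0    # data captured electronically at the source

    fixed_extra = TABLET_UNIT_COST * N_TABLETS
    saving_per_survey = PAPER_ENTRY_COST - TABLET_ENTRY_COST
    break_even = fixed_extra / saving_per_survey
    print(f"Tablets pay for themselves after ~{break_even:.0f} surveys")
    ```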

  7. Large scale collective modeling of the final 'freeze out' stages of energetic heavy ion reactions and calculation of single particle measurables from these models

    Energy Technology Data Exchange (ETDEWEB)

    Nyiri, Agnes

    2005-07-01

    The goal of this PhD project was to develop the already existing, but far from complete, Multi Module Model, especially focusing on the last module, which describes the final stages of a heavy ion collision, as this module was still missing. The major original achievements summarized in this thesis concern the freeze out problem and the calculation of an important measurable, the anisotropic flow. Summary of results: Freeze out: The importance of freeze out models is that they allow the evaluation of observables, which can then be compared to the experimental results. Therefore, it is crucial to find a realistic freeze out description, which proves to be a non-trivial task. Recently, several kinetic freeze out models have been developed. Based on the earlier results, we have introduced new ideas and improved models, which may contribute to a more realistic description of the freeze out process. We have investigated the applicability of the Boltzmann Transport Equation (BTE) to describe dynamical freeze out. We have introduced the so-called Modified Boltzmann Transport Equation, which has a form very similar to that of the BTE, but takes into account those characteristics of the FO process which the BTE cannot handle, e.g. the rapid change of the phase-space distribution function in the direction normal to the finite FO layer. We have shown that the main features of earlier ad hoc kinetic FO models can be obtained from the BTE and MBTE. We have discussed the qualitative differences between the two approaches and presented some quantitative comparison as well. Since the introduced modification of the BTE makes it very difficult to solve the FO problem from first principles, it is important to work out simplified phenomenological models which can explain the basic features of the FO process. We have built and discussed such a model. Flow analysis: The other main subject of this thesis has been the collective flow in heavy ion collisions. Collective flow from ultra
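
    For reference, the Boltzmann Transport Equation discussed here has the schematic form below (f the phase-space distribution, C[f] the collision integral); the thesis' Modified BTE adds terms accounting for the rapid change of f across the finite freeze-out layer:

    ```latex
    \frac{\partial f}{\partial t} + \frac{\mathbf{p}}{p^0}\cdot\nabla_x f = C[f]
    ```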

  8. The German power market. Data collection for model analysis

    Energy Technology Data Exchange (ETDEWEB)

    Munksgaard, J.; Alsted Pedersen, K.; Ramskov, J.

    2000-09-01

    In the present project the market scenario for analysing market imperfections has been the Northern European power market, i.e. a market including Germany as well. Consequently, one of the tasks in the project has been to collect data for Germany in order to develop the empirical basis of the ELEPHANT model. In that perspective the aim of this report is to document the data collected for Germany, to specify the data sources used and further to lay stress on the assumptions which have been made when data have not been available. By doing so, transparency in model results is improved. Further, a basis for discussing the quality of data as well as a framework for future revisions and updating of data has been established. The data collected for Germany have been given by the exogenous variables defined by the ELEPHANT model. In that way data collection is a priori given by the specification of the model. The model includes more than 30 exogenous variables specified at a very detailed level. These variables include among others data on energy demand, detailed power production data and data on energy taxes and CO2 emission targets. This points to the fact that many kinds of data sources have been used. However, due to lack of data sources not all relevant data have been collected. One area in which lack of data has been significant is demand reactions to changes in energy prices, i.e. the different kinds of demand elasticities used in the production and consumer utility functions in the model. Concerning elasticities for German demand reactions no data sources have been available at all. Another area of data problems is combined heat and power production (so-called CHP production), in which only very aggregated data have been available. Lack of data or poor quality of data (e.g., data not up to date or data not detailed enough) has led to the use of appropriate assumptions and short cuts in order to establish the entire data basis for the model. We describe the

  9. The German power market. Data collection for model analysis

    International Nuclear Information System (INIS)

    Munksgaard, J.; Alsted Pedersen, K.; Ramskov, J.

    2000-09-01

    In the present project the market scenario for analysing market imperfections has been the Northern European power market, i.e. a market including Germany as well. Consequently, one of the tasks in the project has been to collect data for Germany in order to develop the empirical basis of the ELEPHANT model. In that perspective the aim of this report is to document the data collected for Germany, to specify the data sources used and further to lay stress on the assumptions which have been made when data have not been available. By doing so, transparency in model results is improved. Further, a basis for discussing the quality of data as well as a framework for future revisions and updating of data has been established. The data collected for Germany have been given by the exogenous variables defined by the ELEPHANT model. In that way data collection is a priori given by the specification of the model. The model includes more than 30 exogenous variables specified at a very detailed level. These variables include among others data on energy demand, detailed power production data and data on energy taxes and CO2 emission targets. This points to the fact that many kinds of data sources have been used. However, due to lack of data sources not all relevant data have been collected. One area in which lack of data has been significant is demand reactions to changes in energy prices, i.e. the different kinds of demand elasticities used in the production and consumer utility functions in the model. Concerning elasticities for German demand reactions no data sources have been available at all. Another area of data problems is combined heat and power production (so-called CHP production), in which only very aggregated data have been available. Lack of data or poor quality of data (e.g., data not up to date or data not detailed enough) has led to the use of appropriate assumptions and short cuts in order to establish the entire data basis for the model. We describe the

  10. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    Science.gov (United States)

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age-distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of

  11. Pelamis wave energy converter. Verification of full-scale control using a 7th scale model

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The Pelamis Wave Energy Converter is a new concept for converting wave energy for several applications including generation of electric power. The machine is flexibly moored and swings to meet the water waves head-on. The system is semi-submerged and consists of cylindrical sections linked by hinges. The mechanical operation is described in outline. A one-seventh scale model was built and tested and the outcome was sufficiently successful to warrant the building of a full-scale prototype. In addition, a one-twentieth scale model was built and has contributed much to the research programme. The work is supported financially by the DTI.

  12. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB RAM.
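
    The paper's exact view-dependent criterion is not given; a typical screen-space-error rule for choosing an LOD level looks like the sketch below (function and parameter names are illustrative):

    ```python
    import math

    def select_lod(levels, distance, fov_y, viewport_h, max_error_px=1.0):
        """Pick the coarsest LOD whose projected geometric error stays below
        max_error_px on screen; levels are (error_m, mesh) sorted coarse -> fine."""
        # metres per pixel at this distance for a pinhole camera
        m_per_px = 2.0 * distance * math.tan(fov_y / 2.0) / viewport_h
        for error_m, mesh in levels:
            if error_m / m_per_px <= max_error_px:
                return mesh
        return levels[-1][1]  # fall back to the finest level

    # Hypothetical pyramid: 4 m geometric error at the coarsest level, 5 cm at the finest
    levels = [(4.0, "lod0"), (1.0, "lod1"), (0.2, "lod2"), (0.05, "lod3")]
    print(select_lod(levels, distance=50.0, fov_y=math.radians(60), viewport_h=1080))
    ```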

  13. Scaling and percolation in the small-world network model

    Energy Technology Data Exchange (ETDEWEB)

    Newman, M. E. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States); Watts, D. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States)

    1999-12-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society.
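
    The crossover in the average vertex-vertex distance analysed here is easy to reproduce numerically; a minimal sketch using networkx (sizes and probabilities arbitrary):

    ```python
    import networkx as nx

    # Watts-Strogatz graph: n vertices on a ring, each joined to its k nearest
    # neighbours, with every edge rewired at random with probability p.
    for p in (0.0, 0.01, 0.1, 1.0):
        G = nx.watts_strogatz_graph(n=1000, k=4, p=p, seed=1)
        if nx.is_connected(G):
            L = nx.average_shortest_path_length(G)
            print(f"p = {p:<5} mean vertex-vertex distance = {L:.1f}")
    ```

    Sweeping p through the crossover region shows the large-world (L ~ n) to small-world (L ~ log n) transition that the paper's scaling function describes.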

  14. Scaling and percolation in the small-world network model

    International Nuclear Information System (INIS)

    Newman, M. E. J.; Watts, D. J.

    1999-01-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society

  15. A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation

    Science.gov (United States)

    Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.

    2016-12-01

    Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate- or market-induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as for linking a wide variety of models, including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. First, the MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution, accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.

  16. Major shell centroids in the symplectic collective model

    International Nuclear Information System (INIS)

    Draayer, J.P.; Rosensteel, G.; Tulane Univ., New Orleans, LA

    1983-01-01

    Analytic expressions are given for the major shell centroids of the collective potential V(β, γ) and the shape observable β² in the Sp(3,R) symplectic model. The tools of statistical spectroscopy are shown to be useful, firstly, in translating a requirement that the underlying shell structure be preserved into constraints on the parameters of the collective potential and, secondly, in giving a reasonable estimate for a truncation of the infinite-dimensional symplectic model space from experimental B(E2) transition strengths. Results based on the centroid information are shown to compare favorably with results from exact calculations in the case of ²⁰Ne. (orig.)

  17. Perceived Discrimination and Subjective Well-being in Chinese Migrant Adolescents: Collective and Personal Self-esteem As Mediators.

    Science.gov (United States)

    Jia, Xuji; Liu, Xia; Shi, Baoguo

    2017-01-01

    This study aimed to examine whether collective and personal self-esteem serve as mediators in the relationship between perceived discrimination and subjective well-being among Chinese rural-to-urban migrant adolescents. Six hundred and ninety-two adolescents completed a perceived discrimination scale, a collective self-esteem scale, a personal self-esteem scale, and a subjective well-being scale. Structural equation modeling was used to test the mediation hypothesis. The analysis indicated that both collective and personal self-esteem partially mediated the relationship between perceived discrimination and subjective well-being. The final model also revealed a significant path from perceived discrimination through collective and personal self-esteem to subjective well-being. These findings contribute to the understanding of the complicated relationships among perceived discrimination, collective and personal self-esteem, and subjective well-being. The findings suggest that collective and personal self-esteem are possible targets for interventions aimed at improving subjective well-being. Programs to nurture both the personal and collective self-esteem of migrant adolescents may help to weaken the negative relationships between perceived discrimination and subjective well-being.
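
    The product-of-coefficients logic behind such an SEM mediation test can be sketched with simulated data; the effect sizes below are invented, chosen only to mirror the signs reported in the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 692                                      # sample size as in the study
    x = rng.normal(size=n)                       # perceived discrimination
    m = -0.4 * x + rng.normal(size=n)            # self-esteem (mediator)
    y = 0.5 * m - 0.1 * x + rng.normal(size=n)   # subjective well-being

    def ols(cols, y):
        """Least-squares coefficients [intercept, b1, b2, ...]."""
        X = np.column_stack([np.ones(len(y))] + list(cols))
        return np.linalg.lstsq(X, y, rcond=None)[0]

    a = ols([x], m)[1]                 # path X -> M
    b, c_prime = ols([m, x], y)[1:3]   # path M -> Y and direct path X -> Y
    print(f"indirect effect a*b = {a*b:.3f}, direct effect c' = {c_prime:.3f}")
    ```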

  18. Perceived Discrimination and Subjective Well-being in Chinese Migrant Adolescents: Collective and Personal Self-esteem As Mediators

    Science.gov (United States)

    Jia, Xuji; Liu, Xia; Shi, Baoguo

    2017-01-01

    This study aimed to examine whether collective and personal self-esteem serve as mediators in the relationship between perceived discrimination and subjective well-being among Chinese rural-to-urban migrant adolescents. Six hundred and ninety-two adolescents completed a perceived discrimination scale, a collective self-esteem scale, a personal self-esteem scale, and a subjective well-being scale. Structural equation modeling was used to test the mediation hypothesis. The analysis indicated that both collective and personal self-esteem partially mediated the relationship between perceived discrimination and subjective well-being. The final model also revealed a significant path from perceived discrimination through collective and personal self-esteem to subjective well-being. These findings contribute to the understanding of the complicated relationships among perceived discrimination, collective and personal self-esteem, and subjective well-being. The findings suggest that collective and personal self-esteem are possible targets for interventions aimed at improving subjective well-being. Programs to nurture both the personal and collective self-esteem of migrant adolescents may help to weaken the negative relationships between perceived discrimination and subjective well-being. PMID:28769850

  19. Perceived Discrimination and Subjective Well-being in Chinese Migrant Adolescents: Collective and Personal Self-esteem As Mediators

    Directory of Open Access Journals (Sweden)

    Xuji Jia

    2017-07-01

    This study aimed to examine whether collective and personal self-esteem serve as mediators in the relationship between perceived discrimination and subjective well-being among Chinese rural-to-urban migrant adolescents. Six hundred and ninety-two adolescents completed a perceived discrimination scale, a collective self-esteem scale, a personal self-esteem scale, and a subjective well-being scale. Structural equation modeling was used to test the mediation hypothesis. The analysis indicated that both collective and personal self-esteem partially mediated the relationship between perceived discrimination and subjective well-being. The final model also revealed a significant path from perceived discrimination through collective and personal self-esteem to subjective well-being. These findings contribute to the understanding of the complicated relationships among perceived discrimination, collective and personal self-esteem, and subjective well-being. The findings suggest that collective and personal self-esteem are possible targets for interventions aimed at improving subjective well-being. Programs to nurture both the personal and collective self-esteem of migrant adolescents may help to weaken the negative relationships between perceived discrimination and subjective well-being.

  20. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large-scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that faces the problem of overcoming the drawbacks of existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale

  1. State-of-the-Art Report on Multi-scale Modelling of Nuclear Fuels

    International Nuclear Information System (INIS)

    Bartel, T.J.; Dingreville, R.; Littlewood, D.; Tikare, V.; Bertolus, M.; Blanc, V.; Bouineau, V.; Carlot, G.; Desgranges, C.; Dorado, B.; Dumas, J.C.; Freyss, M.; Garcia, P.; Gatt, J.M.; Gueneau, C.; Julien, J.; Maillard, S.; Martin, G.; Masson, R.; Michel, B.; Piron, J.P.; Sabathier, C.; Skorek, R.; Toffolon, C.; Valot, C.; Van Brutzel, L.; Besmann, Theodore M.; Chernatynskiy, A.; Clarno, K.; Gorti, S.B.; Radhakrishnan, B.; Devanathan, R.; Dumont, M.; Maugis, P.; El-Azab, A.; Iglesias, F.C.; Lewis, B.J.; Krack, M.; Yun, Y.; Kurata, M.; Kurosaki, K.; Largenton, R.; Lebensohn, R.A.; Malerba, L.; Oh, J.Y.; Phillpot, S.R.; Tulenko, J. S.; Rachid, J.; Stan, M.; Sundman, B.; Tonks, M.R.; Williamson, R.; Van Uffelen, P.; Welland, M.J.; Valot, Carole; Stan, Marius; Massara, Simone; Tarsi, Reka

    2015-10-01

    The Nuclear Science Committee (NSC) of the Nuclear Energy Agency (NEA) has undertaken an ambitious programme to document state-of-the-art of modelling for nuclear fuels and structural materials. The project is being performed under the Working Party on Multi-Scale Modelling of Fuels and Structural Material for Nuclear Systems (WPMM), which has been established to assess the scientific and engineering aspects of fuels and structural materials, describing multi-scale models and simulations as validated predictive tools for the design of nuclear systems, fuel fabrication and performance. The WPMM's objective is to promote the exchange of information on models and simulations of nuclear materials, theoretical and computational methods, experimental validation and related topics. It also provides member countries with up-to-date information, shared data, models, and expertise. The goal is also to assess needs for improvement and address them by initiating joint efforts. The WPMM reviews and evaluates multi-scale modelling and simulation techniques currently employed in the selection of materials used in nuclear systems. It serves to provide advice to the nuclear community on the developments needed to meet the requirements of modelling for the design of different nuclear systems. The original WPMM mandate had three components (Figure 1), with the first component currently completed, delivering a report on the state-of-the-art of modelling of structural materials. The work on modelling was performed by three expert groups, one each on Multi-Scale Modelling Methods (M3), Multi-Scale Modelling of Fuels (M2F) and Structural Materials Modelling (SMM). WPMM is now composed of three expert groups and two task forces providing contributions on multi-scale methods, modelling of fuels and modelling of structural materials. This structure will be retained, with the addition of task forces as new topics are developed. The mandate of the Expert Group on Multi-Scale Modelling of

  2. Multi-Scale Modeling of Microstructural Evolution in Structural Metallic Systems

    Science.gov (United States)

    Zhao, Lei

    Metallic alloys are a widely used class of structural materials, and the mechanical properties of these alloys are strongly dependent on the microstructure. Therefore, the scientific design of metallic materials with superior mechanical properties requires an understanding of microstructural evolution. Computational models and simulations offer a number of advantages over experimental techniques in the prediction of microstructural evolution, because they allow studies of microstructural evolution in situ, i.e., while the material is mechanically loaded (meso-scale simulations), and bring atomic-level insights into the microstructure (atomistic simulations). In this thesis, we applied a multi-scale modeling approach to study the microstructural evolution in several metallic systems, including polycrystalline materials and metallic glasses (MGs). Specifically, for polycrystalline materials, we developed a coupled finite element model that combines the phase field method and crystal plasticity theory to study the effect of plasticity on grain boundary (GB) migration. Our model is not only coupled strongly (i.e., we include the plastic driving force on GB migration directly) and concurrently (i.e., the coupled equations are solved simultaneously), but it also qualitatively captures such phenomena as the absorption of dislocations by mobile GBs. The developed model provides a tool to study the microstructural evolution in plastically deformed metals and alloys. For MGs, we used molecular dynamics (MD) simulations to investigate the nucleation kinetics of the primary crystallization in the Al-Sm system. We calculated the time-temperature-transformation curves for low Sm concentrations, which reveal the strong suppressing effect of Sm solutes on Al nucleation and the mechanism behind it. Also, through the comparative analysis of both Al attachment and Al diffusion in MGs, it has been found that the nucleation kinetics is controlled by interfacial attachment of Al, and that

  3. Income Groups, Social Capital, and Collective Action on Small-Scale Irrigation Facilities

    NARCIS (Netherlands)

    Miao, Shanshan; Heijman, Wim; Zhu, Xueqin; Qiao, Dan; Lu, Qian

    2018-01-01

    This article examines whether relationships between social capital characteristics and the willingness of farmers to cooperate in collective action is moderated by the farmers' income level. We employed a structural equation model to analyze the influence of social capital components (social

  4. Large scale collective modeling of the final 'freeze out' stages of energetic heavy ion reactions and calculation of single particle measurables from these models

    International Nuclear Information System (INIS)

    Nyiri, Agnes

    2005-01-01

    The goal of this PhD project was to develop the already existing, but far from complete, Multi Module Model, especially focusing on the last module, which describes the final stages of a heavy ion collision, as this module was still missing. The major original achievements summarized in this thesis concern the freeze out problem and the calculation of an important measurable, the anisotropic flow. Summary of results: Freeze out: The importance of freeze out models is that they allow the evaluation of observables, which can then be compared to the experimental results. Therefore, it is crucial to find a realistic freeze out description, which proves to be a non-trivial task. Recently, several kinetic freeze out models have been developed. Based on the earlier results, we have introduced new ideas and improved models, which may contribute to a more realistic description of the freeze out process. We have investigated the applicability of the Boltzmann Transport Equation (BTE) to describe dynamical freeze out. We have introduced the so-called Modified Boltzmann Transport Equation, which has a form very similar to that of the BTE, but takes into account those characteristics of the FO process which the BTE cannot handle, e.g. the rapid change of the phase-space distribution function in the direction normal to the finite FO layer. We have shown that the main features of earlier ad hoc kinetic FO models can be obtained from the BTE and MBTE. We have discussed the qualitative differences between the two approaches and presented some quantitative comparison as well. Since the introduced modification of the BTE makes it very difficult to solve the FO problem from first principles, it is important to work out simplified phenomenological models which can explain the basic features of the FO process. We have built and discussed such a model. Flow analysis: The other main subject of this thesis has been the collective flow in heavy ion collisions. Collective flow from ultra

  5. Reference Priors for the General Location-Scale Model

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately

  6. Model Scaling of Hydrokinetic Ocean Renewable Energy Systems

    Science.gov (United States)

    von Ellenrieder, Karl; Valentine, William

    2013-11-01

    Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
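
    Although the paper uses its own non-dimensional procedure for moored subsurface systems, classical Froude similitude gives the familiar scale factors; a small calculator for the 400 m prototype depth against the 10 m and 40 m model depths mentioned in the abstract:

    ```python
    # Froude scaling factors for a geometric scale ratio lam = L_prototype / L_model.
    # Standard free-surface similitude; equal fluid densities assumed.
    def froude_factors(lam):
        return {
            "length": lam,
            "time": lam ** 0.5,
            "velocity": lam ** 0.5,
            "force": lam ** 3,
            "power": lam ** 3.5,
        }

    for model_depth in (10.0, 40.0):
        lam = 400.0 / model_depth
        print(f"1:{lam:.0f} model ->", froude_factors(lam))
    ```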

  7. Scale-free, axisymmetric galaxy models with little angular momentum

    International Nuclear Information System (INIS)

    Richstone, D.O.

    1980-01-01

    Two scale-free models of elliptical galaxies are constructed using a self-consistent field approach developed by Schwarzschild. Both models have concentric, oblate spheroidal, equipotential surfaces, with a logarithmic potential dependence on central distance. The axial ratio of the equipotential surfaces is 4:3, and the extent ratio of density level surfaces is 2.5:1 (corresponding to an E6 galaxy). Each model satisfies the Poisson and steady-state Boltzmann equations for time scales of order 100 galactic years.

  8. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path, implementing network contention and bandwidth capacity modeling with a less synchronous, yet sufficiently accurate, model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  9. A Microscopic Quantal Model for Nuclear Collective Rotation

    International Nuclear Information System (INIS)

    Gulshani, P.

    2007-01-01

    A microscopic, quantal model to describe nuclear collective rotation in two dimensions is derived from the many-nucleon Schrödinger equation. The Schrödinger equation is transformed to a body-fixed frame to decompose the Hamiltonian into a sum of intrinsic and rotational components plus a Coriolis-centrifugal coupling term. This Hamiltonian (H) is expressed in terms of space-fixed-frame particle coordinates and momenta by using the commutator of H with a rotation angle. A unified-rotational-model type wavefunction is used to obtain an intrinsic Schrödinger equation in terms of the angular momentum quantum number and two-body operators. A Hartree-Fock mean-field representation of this equation is then obtained and, by means of a unitary transformation, is reduced to a form resembling that of the conventional semi-classical cranking model when exchange terms and intrinsic spurious collective excitations are ignored

  10. Spatio-temporal correlations in models of collective motion ruled by different dynamical laws.

    Science.gov (United States)

    Cavagna, Andrea; Conti, Daniele; Giardina, Irene; Grigera, Tomas S; Melillo, Stefania; Viale, Massimiliano

    2016-11-15

    Information transfer is an essential factor in determining the robustness of biological systems with distributed control. The most direct way to study the mechanisms ruling information transfer is to experimentally observe the propagation across the system of a signal triggered by some perturbation. However, this method may be inefficient for experiments in the field, as the possibilities to perturb the system are limited and empirical observations must rely on natural events. An alternative approach is to use spatio-temporal correlations to probe the information transfer mechanism directly from the spontaneous fluctuations of the system, without the need to have an actual propagating signal on record. Here we test this method on models of collective behaviour in their deeply ordered phase by using ground truth data provided by numerical simulations in three dimensions. We compare two models characterized by very different dynamical equations and information transfer mechanisms: the classic Vicsek model, describing an overdamped noninertial dynamics and the inertial spin model, characterized by an underdamped inertial dynamics. By using dynamic finite-size scaling, we show that spatio-temporal correlations are able to distinguish unambiguously the diffusive information transfer mechanism of the Vicsek model from the linear mechanism of the inertial spin model.
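
    A minimal 2-D implementation of the Vicsek model (the overdamped dynamics contrasted here with the inertial spin model) fits in a few lines; parameter values are arbitrary and the noise convention is one of several in use:

    ```python
    import numpy as np

    def vicsek_step(pos, theta, L, r, eta, v0, rng):
        """One Vicsek update: align with the mean heading of all neighbours
        within radius r (periodic box of side L), plus angular noise eta."""
        d = pos[:, None, :] - pos[None, :, :]
        d -= L * np.round(d / L)                      # minimum-image distances
        neigh = (d ** 2).sum(-1) < r ** 2             # each particle counts itself
        mx = (neigh * np.cos(theta)[None, :]).sum(1)
        my = (neigh * np.sin(theta)[None, :]).sum(1)
        theta = np.arctan2(my, mx) + eta * rng.uniform(-np.pi, np.pi, len(theta))
        pos = (pos + v0 * np.column_stack([np.cos(theta), np.sin(theta)])) % L
        return pos, theta

    rng = np.random.default_rng(0)
    N, L = 300, 10.0
    pos = rng.uniform(0, L, (N, 2))
    theta = rng.uniform(-np.pi, np.pi, N)
    for _ in range(500):
        pos, theta = vicsek_step(pos, theta, L, r=1.0, eta=0.1, v0=0.05, rng=rng)
    # polar order parameter: close to 1 in the deeply ordered phase probed here
    print(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
    ```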

  11. Macro-scale turbulence modelling for flows in porous media

    International Nuclear Information System (INIS)

    Pinson, F.

    2006-03-01

    This work deals with the macroscopic modeling of turbulence in porous media. It concerns heat exchangers and nuclear reactors as well as urban flows, etc. The objective of this study is to describe in a homogenized way, by means of a spatial average operator, turbulent flows in a solid matrix. In addition to this first operator, the use of a statistical average operator permits handling the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations for the kind of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and turbulence modeling at a macroscopic scale (Reynolds tensor and turbulent dispersion). To this aim, we lean on the local modeling of turbulence and more precisely on k-ε RANS models. The methodology for studying dispersion, derived from the volume averaging theory, is extended to turbulent flows. Its application includes the simulation, at a microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even within the turbulent regime, dispersion remains one of the dominating phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominating role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic modeling of turbulence and leads us to define the sub-filter production and the wake dissipation. A macroscopic k-ε-ε_w model is derived, based on three balance equations for the turbulent kinetic energy, the viscous dissipation and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed. This model is then successfully applied to the study of

  12. Application and comparison of the SCS-CN-based rainfall-runoff model in meso-scale watershed and field scale

    Science.gov (United States)

    Luo, L.; Wang, Z.

    2010-12-01

    The Soil Conservation Service Curve Number (SCS-CN) based hydrologic model has been widely used for agricultural watersheds in recent years. However, relative errors arise when it is applied under differing geographical and climatological conditions. This paper introduces a more adaptable and transferable model based on the modified SCS-CN method, specialized for two research regions of different scale. SCS-CN based models were established for the typical conditions of the Zhanghe irrigation district in the southern part of China, such as its hydrometeorological and surface conditions. The Xinbu-Qiao River basin (area = 1207 km2) and the Tuanlin runoff test area (area = 2.87 km2) were taken as the basin-scale and field-scale study areas in the Zhanghe irrigation district, extending applications from an ordinary meso-scale watershed to the field scale in this paddy-field-dominated district. Based on measured data on land use, soil classification, hydrology and meteorology, quantitative evaluations of and modifications to two coefficients, i.e. the preceding loss (initial abstraction) and the runoff curve number, were proposed with the corresponding models, and a table of CN values for different land uses and an AMC (antecedent moisture condition) grading standard fitted to the research cases were proposed. The simulation precision was increased by putting forward a 12 h unit hydrograph of the field area, and the 12 h unit hydrograph was simplified. Comparison between the scales shows that it is more effective to use the SCS-CN model at field scale after the parameters have been calibrated at basin scale. These results can help in discovering the rainfall-runoff behaviour of the district. Differences between the two study regions in the established SCS-CN model parameters are also considered. Varied forms of land use and the impacts of human activities were important factors influencing the rainfall-runoff relations in the Zhanghe irrigation district.
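
    The core SCS-CN relation that both the basin- and field-scale models build on is compact enough to state directly; a minimal version (the 0.2 initial-abstraction ratio is the classic default, which calibration studies such as this one typically adjust):

    ```python
    def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
        """Direct runoff Q (mm) from event rainfall P (mm) by the SCS-CN method."""
        s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
        ia = ia_ratio * s               # initial abstraction (mm)
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    # e.g. an 80 mm storm on a paddy-dominated area with a hypothetical CN of 75
    print(f"Q = {scs_cn_runoff(80.0, 75):.1f} mm")
    ```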

  13. Multi-scale habitat selection modeling: A review and outlook

    Science.gov (United States)

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  14. Scaling of Sediment Dynamics in a Reach-Scale Laboratory Model of a Sand-Bed Stream with Riparian Vegetation

    Science.gov (United States)

    Gorrick, S.; Rodriguez, J. F.

    2011-12-01

    A movable bed physical model was designed in a laboratory flume to simulate both bed and suspended load transport in a mildly sinuous sand-bed stream. Model simulations investigated the impact of different vegetation arrangements along the outer bank to evaluate rehabilitation options. Preserving similitude in the 1:16 laboratory model was very important. In this presentation the scaling approach, as well as the successes and challenges of the strategy, are outlined. First, a near-bankfull flow event was chosen for laboratory simulation. In nature, bankfull events at the field site deposit new in-channel features but cause only small amounts of bank erosion; thus the fixed banks in the model were not a drastic simplification. Next, and as in other studies, the flow velocity and turbulence measurements were collected in separate fixed-bed experiments. The scaling of flow in these experiments was maintained simply by matching the Froude number and roughness levels. The subsequent movable-bed experiments were then conducted under similar hydrodynamic conditions. In nature, the sand-bed stream is fairly typical: in high flows most sediment transport occurs in suspension and migrating dunes cover the bed. To achieve similar dynamics in the model, equivalent values of the dimensionless bed shear stress and the particle Reynolds number were important. Close values of the two dimensionless numbers were achieved with lightweight sediments (R = 0.3), including coal and apricot pips, with a particle size distribution similar to that of the field site. Overall the movable-bed experiments were able to replicate the dominant sediment dynamics present in the stream during a bankfull flow and yielded relevant information for the analysis of the effects of riparian vegetation. There was a potential conflict in the strategy, in that grain roughness was exaggerated with respect to nature. The advantage of this strategy is that although grain roughness is exaggerated, the similarity of
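
    A rough sketch of the similarity check described above, with illustrative numbers rather than the study's: the lightweight model sediment is chosen so that the dimensionless bed shear stress (Shields parameter) and the particle Reynolds number come close to the prototype values.

      def shields_numbers(u_star, d, R, nu=1e-6, g=9.81):
          """Shields parameter and particle Reynolds number.
          u_star: shear velocity (m/s); d: grain size (m); R: submerged specific gravity."""
          theta = u_star ** 2 / (R * g * d)
          re_p = u_star * d / nu
          return theta, re_p

      # prototype quartz sand (R ~ 1.65) vs lightweight model sediment (R = 0.3, as above)
      print(shields_numbers(u_star=0.05, d=0.3e-3, R=1.65))
      print(shields_numbers(u_star=0.03, d=0.9e-3, R=0.30))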

  15. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. The system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented, along with how the multi-satellite simulator can be used to improve the representation of precipitation processes.

  16. 1/3-scale model testing program

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.

    1989-01-01

    This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley demonstration project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration where the axis of the cask was oriented at a 10-degree angle with the horizontal. Slap-down occurs for shallow-angle drops where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured decelerations, post-test deformation measurements, and the general structural response of the system
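
    As a hedged aside, the replica-scaling relations usually invoked for such tests (assumed here from standard scale-modeling practice, not quoted from the paper) imply that a 1/3-scale model built of the same materials preserves stresses and impact velocities while decelerations triple and durations shrink:

      s = 1.0 / 3.0                 # model/full-scale length ratio
      scaling = {
          "length": s,
          "time": s,                # event durations shrink with the model
          "velocity": 1.0,          # same drop height gives the same impact speed
          "acceleration": 1.0 / s,  # model decelerations are 3x full scale
          "mass": s ** 3,
          "force": s ** 2,          # stresses (force/area) are preserved
      }
      full_scale_decel_g = 60.0     # hypothetical full-scale value
      print(full_scale_decel_g / s) # corresponding model deceleration: 180 g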

  17. The use of scale models in impact testing

    International Nuclear Information System (INIS)

    Donelan, P.J.; Dowling, A.R.

    1985-01-01

    Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)

  18. A model-based framework for incremental scale-up of wastewater treatment processes

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Sin, Gürkan

    Scale-up is traditionally done following specific ratios or rules of thumb which do not lead to optimal results. We present a generic framework to assist in the scale-up of wastewater treatment processes based on multiscale modelling, multiobjective optimisation and a validation of the model at the new, larger scale. The framework is illustrated by the scale-up of a complete autotrophic nitrogen removal process. The model-based multiobjective scale-up offers a promising improvement compared to empirical rules of thumb.

  19. Complex scaling in the cluster model

    International Nuclear Information System (INIS)

    Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.

    1987-01-01

    To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation the complex scaling requires only minor changes in the formulae and code. Finding the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in 8Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs
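
    For orientation, the complex-scaling transformation in question has the standard form (two-cluster relative coordinate $r$ and reduced mass $\mu$ assumed):

      $$ r \rightarrow r\,e^{i\theta}, \qquad H(\theta) = -\frac{\hbar^{2}}{2\mu}\,e^{-2i\theta}\,\nabla^{2} + V(r\,e^{i\theta}) $$

    For a sufficiently large rotation angle $\theta$, each resonance appears as an isolated square-integrable eigenstate of $H(\theta)$ with complex energy $E = E_{r} - i\Gamma/2$, exposing its position $E_{r}$ and width $\Gamma$ without any asymptotic prescription.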

  20. Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas

    2011-01-01

    The Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures, conducted at MSFC to verify acoustic and ignition environments and to evaluate water suppression systems. Test design considerations: the 5% model measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest (acoustics: 200 - 2,000 Hz full scale, equal to 4,000 - 40,000 Hz model scale; ignition transient: 0 - 100 Hz full scale, equal to 0 - 2,000 Hz model scale). Environment exposure included weather exposure (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, with different instrument mounts used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount designs on the acoustic and pressure measurements.
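
    The frequency scaling quoted above follows directly from the 5% geometric scale; a trivial sketch:

      # Model frequencies are full-scale frequencies divided by the length scale factor.
      scale = 0.05
      def to_model_freq(f_full_hz: float) -> float:
          return f_full_hz / scale

      print(to_model_freq(200.0), to_model_freq(2000.0))  # acoustics: 4000.0 40000.0 Hz
      print(to_model_freq(100.0))                         # ignition transient: 2000.0 Hz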

  1. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    E. Sonnenthal; N. Spycher

    2001-02-05

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: (1) Performance Assessment (PA); (2) Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); (3) UZ Flow and Transport Process Model Report (PMR); and (4) Near-Field Environment (NFE) PMR. The work scope for this activity is presented in the TWPs cited above, and summarized as follows: continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data

  2. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    International Nuclear Information System (INIS)

    Sonnenthal, E.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [1534471]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: Performance Assessment (PA); Near-Field Environment (NFE) PMR; Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); and UZ Flow and Transport Process Model Report (PMR). The work scope for this activity is presented in the TWPs cited above, and summarized as follows: Continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are

  3. Validity of thermally-driven small-scale ventilated filling box models

    Science.gov (United States)

    Partridge, Jamie L.; Linden, P. F.

    2013-11-01

    The majority of previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as when including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing the natural ventilation of a building in a small-scale model is examined here, with comparison between previous theoretical results and new, heat-based experiments.

  4. Modelling of particles collection by vented limiters

    International Nuclear Information System (INIS)

    Tsitrone, E.; Pegourie, B.; Granata, G.

    1995-01-01

    This document deals with the use of vented limiters for the collection of neutral particles in Tore Supra. The model developed for experiments is presented together with its experimental validation. Some possible improvements to the present limiter are also proposed. (TEC). 5 refs., 3 figs

  5. Multi-scale modeling of carbon capture systems

    Energy Technology Data Exchange (ETDEWEB)

    Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

    The development and scale-up of cost-effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset, consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous-silica-supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric analysis (TGA) data.

  6. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    International Nuclear Information System (INIS)

    Cruz, Roberto de la; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-01-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction–diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction–diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction–diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge

  7. Collectivity in heavy nuclei in the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Özen, C.; Alhassid, Y.; Nakada, H.

    2014-01-01

    The microscopic description of collectivity in heavy nuclei in the framework of the configuration-interaction shell model has been a major challenge. The size of the model space required for the description of heavy nuclei prohibits the use of conventional diagonalization methods. We have overcome this difficulty by using the shell model Monte Carlo (SMMC) method, which can treat model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We identify a thermal observable that can distinguish between vibrational and rotational collectivity and use it to describe the crossover from vibrational to rotational collectivity in families of even-even rare-earth isotopes. We calculate the state densities in these nuclei and find them to be in close agreement with experimental data. We also calculate the collective enhancement factors of the corresponding level densities and find that their decay with excitation energy is correlated with the pairing and shape phase transitions. (author)

  8. Examining the Variability of Sleep Patterns during Treatment for Chronic Insomnia: Application of a Location-Scale Mixed Model.

    Science.gov (United States)

    Ong, Jason C; Hedeker, Donald; Wyatt, James K; Manber, Rachel

    2016-06-15

    The purpose of this study was to introduce a novel statistical technique called the location-scale mixed model that can be used to analyze the mean level and intra-individual variability (IIV) using longitudinal sleep data. We applied the location-scale mixed model to examine changes from baseline in sleep efficiency on data collected from 54 participants with chronic insomnia who were randomized to an 8-week Mindfulness-Based Stress Reduction (MBSR; n = 19), an 8-week Mindfulness-Based Therapy for Insomnia (MBTI; n = 19), or an 8-week self-monitoring control (SM; n = 16). Sleep efficiency was derived from daily sleep diaries collected at baseline (days 1-7), early treatment (days 8-21), late treatment (days 22-63), and post week (days 64-70). The behavioral components (sleep restriction, stimulus control) were delivered during late treatment in MBTI. For MBSR and MBTI, the pre-to-post change in mean levels of sleep efficiency were significantly larger than the change in mean levels for the SM control, but the change in IIV was not significantly different. During early and late treatment, MBSR showed a larger increase in mean levels of sleep efficiency and a larger decrease in IIV relative to the SM control. At late treatment, MBTI had a larger increase in the mean level of sleep efficiency compared to SM, but the IIV was not significantly different. The location-scale mixed model provides a two-dimensional analysis on the mean and IIV using longitudinal sleep diary data with the potential to reveal insights into treatment mechanisms and outcomes. © 2016 American Academy of Sleep Medicine.
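
    A deliberately simplified sketch of the location-scale idea (no random effects, unlike the full mixed model used in the study, and with hypothetical variable names): the mean ("location") and the log residual variance ("scale") are modeled jointly, so a grouping covariate can shift both the level and the IIV of sleep efficiency.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      treat = np.repeat([0, 1], 200)                   # e.g. control vs treatment
      sd = np.exp(0.5 * (3.0 - 0.8 * treat))           # treatment reduces IIV...
      y = 80 + 5 * treat + rng.normal(0.0, sd)         # ...and raises the mean

      def negloglik(params):
          b0, b1, t0, t1 = params
          mu = b0 + b1 * treat                         # location submodel
          logvar = t0 + t1 * treat                     # scale submodel
          return 0.5 * np.sum(logvar + (y - mu) ** 2 / np.exp(logvar))

      fit = minimize(negloglik, x0=[75.0, 0.0, 1.0, 0.0], method="Nelder-Mead")
      print(fit.x)   # approx [80, 5, 3.0, -0.8]: mean shift and IIV reduction recovered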

  9. Modeling collective animal behavior with a cognitive perspective: a methodological framework.

    Directory of Open Access Journals (Sweden)

    Sebastian Weitz

    Full Text Available The last decades have seen an increasing interest in modeling collective animal behavior. Some studies try to reproduce as accurately as possible the collective dynamics and patterns observed in several animal groups with biologically plausible, individual behavioral rules. The objective is then essentially to demonstrate that the observed collective features may be the result of self-organizing processes involving quite simple individual behaviors. Other studies concentrate on the objective of establishing or enriching links between collective behavior research and cognitive or physiological research, which then requires that each individual rule be carefully validated. Here we discuss the methodological consequences of this additional requirement. Using the example of corpse clustering in ants, we first illustrate that it may be impossible to discriminate among alternative individual rules by considering only observational data collected at the group level. Six individual behavioral models are described: they are clearly distinct in terms of individual behaviors, they all reproduce satisfactorily the collective dynamics and distribution patterns observed in experiments, and we show theoretically that it is strictly impossible to discriminate two of these models even in the limit of an infinite amount of data, whatever the accuracy level. A set of methodological steps are then listed and discussed as practical ways to partially overcome this problem. They involve complementary experimental protocols specifically designed to address the behavioral rules successively, conserving group-level data for the overall model validation. In this context, we highlight the importance of maintaining a sharp distinction between model enunciation, with explicit references to validated biological concepts, and the formal translation of these concepts in terms of quantitative state variables and fittable functional dependences. Illustrative examples are provided of the

  10. Analysis of the Professional Choice Self-Efficacy Scale Using the Rasch-Andrich Rating Scale Model

    Science.gov (United States)

    Ambiel, Rodolfo A. M.; Noronha, Ana Paula Porto; de Francisco Carvalho, Lucas

    2015-01-01

    The aim of this research was to analyze the psychometric properties of the professional choice self-efficacy scale (PCSES), using the Rasch-Andrich rating scale model. The PCSES assesses four factors: self-appraisal, gathering occupational information, practical professional information search and future planning. Participants were 883 Brazilian…
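
    For reference, the Rasch-Andrich rating scale model underlying the analysis has the standard form (with $\tau_{0} \equiv 0$):

      $$ P(X_{ni}=x) = \frac{\exp \sum_{j=0}^{x} (\theta_{n} - \delta_{i} - \tau_{j})}{\sum_{k=0}^{m} \exp \sum_{j=0}^{k} (\theta_{n} - \delta_{i} - \tau_{j})} $$

    where $\theta_{n}$ is the person parameter (here, professional choice self-efficacy), $\delta_{i}$ the item location, and $\tau_{j}$ the response-category thresholds shared by all items of the scale.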

  11. Fixing the EW scale in supersymmetric models after the Higgs discovery

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    TeV-scale supersymmetry was originally introduced to solve the hierarchy problem and therefore fix the electroweak (EW) scale in the presence of quantum corrections. Numerical methods testing SUSY models often report a good likelihood L (or chi^2 = -2 ln L) to fit the data {\it including} the EW scale itself (m_Z^0) with a {\it simultaneously} large fine-tuning, i.e. a large variation of this scale under a small variation of the SUSY parameters. We argue that this is inconsistent and we identify the origin of this problem. Our claim is that the likelihood (or chi^2) to fit the data that is usually reported in such models does not account for the chi^2 cost of fixing the EW scale. When this constraint is implemented, the likelihood (or chi^2) receives a significant correction (delta_chi^2) that worsens the current data fits of SUSY models. We estimate this correction for the models: constrained MSSM (CMSSM), models with non-universal gaugino masses (NUGM) or higgs soft masses (NUHM1, NUHM2), the NMSSM and the ...

  12. Anomalous scaling in an age-dependent branching model

    OpenAIRE

    Keller-Schmidt, Stephanie; Tugrul, Murat; Eguiluz, Victor M.; Hernandez-Garcia, Emilio; Klemm, Konstantin

    2010-01-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age $\\tau$ as $\\tau^{-\\alpha}$. Depending on the exponent $\\alpha$, the scaling of tree depth with tree size $n$ displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition ($\\alpha=1$) tree depth grows as $(\\log n)^2$. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
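
    A small simulation sketch of one reading of this growth rule (my interpretation of the abstract, not the authors' code): each new node attaches to an existing node chosen with probability proportional to (age)^(-alpha), and the mean depth is tracked as alpha varies.

      import random

      def mean_depth(n, alpha, rng=random.Random(1)):
          depth = [0]    # depth of each node; node 0 is the root
          birth = [0]    # time step at which each node appeared
          for t in range(1, n):
              w = [(t - b + 1) ** (-alpha) for b in birth]   # older nodes less likely
              parent = rng.choices(range(len(depth)), weights=w)[0]
              depth.append(depth[parent] + 1)
              birth.append(t)
          return sum(depth) / len(depth)

      for alpha in (0.0, 1.0, 2.0):
          print(alpha, mean_depth(2000, alpha))  # depth grows faster for larger alpha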

  13. Scaling up watershed model parameters--Flow and load simulations of the Edisto River Basin

    Science.gov (United States)

    Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul

    2014-01-01

    Because the focus of this investigation was on scaling up the models from McTier Creek, water-quality concentrations that were previously collected in the McTier Creek basin were used in the water-quality load models.

  14. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  15. Scaling for deuteron structure functions in a relativistic light-front model

    International Nuclear Information System (INIS)

    Polyzou, W.N.; Gloeckle, W.

    1996-01-01

    Scaling limits of the structure functions [B.D. Keister, Phys. Rev. C 37, 1765 (1988)], W 1 and W 2 , are studied in a relativistic model of the two-nucleon system. The relativistic model is defined by a unitary representation, U(Λ,a), of the Poincaré group which acts on the Hilbert space of two spinless nucleons. The representation is in Dirac's [P.A.M. Dirac, Rev. Mod. Phys. 21, 392 (1949)] light-front formulation of relativistic quantum mechanics and is designed to give the experimental deuteron mass and n-p scattering length. A model hadronic current operator that is conserved and covariant with respect to this representation is used to define the structure tensor. This work is the first step in a relativistic extension of the results of Hueber, Gloeckle, and Boemelburg [D. Hueber et al., Phys. Rev. C 42, 2342 (1990)]. The nonrelativistic limit of the model is shown to be consistent with the nonrelativistic model of Hueber, Gloeckle, and Boemelburg. The relativistic and nonrelativistic scaling limits, for both Bjorken and y scaling, are compared. The interpretation of y scaling in the relativistic model is studied critically. The standard interpretation of y scaling requires a soft wave function which is not realized in this model. The scaling limits in both the relativistic and nonrelativistic cases are related to probability distributions associated with the target deuteron. copyright 1996 The American Physical Society

  16. Modelling deep water habitats to develop a spatially explicit, fine scale understanding of the distribution of the western rock lobster, Panulirus cygnus.

    Directory of Open Access Journals (Sweden)

    Renae K Hovey

    Full Text Available BACKGROUND: The western rock lobster, Panulirus cygnus, is endemic to Western Australia and supports substantial commercial and recreational fisheries. Due to its wide distribution and the commercial and recreational importance of the species, a key component of managing the western rock lobster is understanding the ecological processes and interactions that may influence lobster abundance and distribution. Using terrain analyses and distribution models of substrate and benthic biota, we assess the physical drivers that influence the distribution of lobsters at a key fishery site. METHODS AND FINDINGS: Using data collected from hydroacoustic and towed video surveys, 20 variables (including geophysical, substrate and biota variables) were developed to predict the distributions of substrate type (three classes of reef, rhodoliths and sand) and dominant biota (kelp, sessile invertebrates and macroalgae) within a 40 km2 area about 30 km off the west Australian coast. Lobster presence/absence data were collected within this area using georeferenced pots. These datasets were used to develop a classification tree model for predicting the distribution of the western rock lobster. Interestingly, kelp and reef were not selected as predictors. Instead, the model selected geophysical and geomorphic scalar variables, which emphasise a mix of terrain within limited distances. The model of lobster presence had an adjusted D2 of 64 and an 80% correct classification. CONCLUSIONS: Species distribution models indicate that juxtaposition in fine-scale terrain is most important to the western rock lobster. While key features like kelp and reef may be important to lobster distribution at a broad scale, it is the fine-scale features in terrain that are likely to define its ecological niche. Determining the most appropriate landscape configuration and scale will be essential to refining niche habitats and will aid in selecting appropriate sites for protecting critical

  17. Modelling Deep Water Habitats to Develop a Spatially Explicit, Fine Scale Understanding of the Distribution of the Western Rock Lobster, Panulirus cygnus

    Science.gov (United States)

    Hovey, Renae K.; Van Niel, Kimberly P.; Bellchambers, Lynda M.; Pember, Matthew B.

    2012-01-01

    Background The western rock lobster, Panulirus cygnus, is endemic to Western Australia and supports substantial commercial and recreational fisheries. Due to its wide distribution and the commercial and recreational importance of the species, a key component of managing the western rock lobster is understanding the ecological processes and interactions that may influence lobster abundance and distribution. Using terrain analyses and distribution models of substrate and benthic biota, we assess the physical drivers that influence the distribution of lobsters at a key fishery site. Methods and Findings Using data collected from hydroacoustic and towed video surveys, 20 variables (including geophysical, substrate and biota variables) were developed to predict the distributions of substrate type (three classes of reef, rhodoliths and sand) and dominant biota (kelp, sessile invertebrates and macroalgae) within a 40 km2 area about 30 km off the west Australian coast. Lobster presence/absence data were collected within this area using georeferenced pots. These datasets were used to develop a classification tree model for predicting the distribution of the western rock lobster. Interestingly, kelp and reef were not selected as predictors. Instead, the model selected geophysical and geomorphic scalar variables, which emphasise a mix of terrain within limited distances. The model of lobster presence had an adjusted D2 of 64 and an 80% correct classification. Conclusions Species distribution models indicate that juxtaposition in fine-scale terrain is most important to the western rock lobster. While key features like kelp and reef may be important to lobster distribution at a broad scale, it is the fine-scale features in terrain that are likely to define its ecological niche. Determining the most appropriate landscape configuration and scale will be essential to refining niche habitats and will aid in selecting appropriate sites for protecting critical lobster habitats.

  18. Modelling deep water habitats to develop a spatially explicit, fine scale understanding of the distribution of the western rock lobster, Panulirus cygnus.

    Science.gov (United States)

    Hovey, Renae K; Van Niel, Kimberly P; Bellchambers, Lynda M; Pember, Matthew B

    2012-01-01

    The western rock lobster, Panulirus cygnus, is endemic to Western Australia and supports substantial commercial and recreational fisheries. Due to its wide distribution and the commercial and recreational importance of the species, a key component of managing the western rock lobster is understanding the ecological processes and interactions that may influence lobster abundance and distribution. Using terrain analyses and distribution models of substrate and benthic biota, we assess the physical drivers that influence the distribution of lobsters at a key fishery site. Using data collected from hydroacoustic and towed video surveys, 20 variables (including geophysical, substrate and biota variables) were developed to predict the distributions of substrate type (three classes of reef, rhodoliths and sand) and dominant biota (kelp, sessile invertebrates and macroalgae) within a 40 km2 area about 30 km off the west Australian coast. Lobster presence/absence data were collected within this area using georeferenced pots. These datasets were used to develop a classification tree model for predicting the distribution of the western rock lobster. Interestingly, kelp and reef were not selected as predictors. Instead, the model selected geophysical and geomorphic scalar variables, which emphasise a mix of terrain within limited distances. The model of lobster presence had an adjusted D2 of 64 and an 80% correct classification. Species distribution models indicate that juxtaposition in fine-scale terrain is most important to the western rock lobster. While key features like kelp and reef may be important to lobster distribution at a broad scale, it is the fine-scale features in terrain that are likely to define its ecological niche. Determining the most appropriate landscape configuration and scale will be essential to refining niche habitats and will aid in selecting appropriate sites for protecting critical lobster habitats.
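
    An illustrative sketch of the modeling step shared by the three records above, on synthetic data (the variable names are hypothetical, not the study's 20 predictors): a classification tree for presence/absence built from terrain-style covariates.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(42)
      n = 500
      X = np.column_stack([
          rng.uniform(0, 30, n),   # depth-like variable
          rng.uniform(0, 5, n),    # rugosity-like variable
          rng.uniform(0, 1, n),    # fine-scale terrain mix
      ])
      presence = ((X[:, 1] > 2.0) & (X[:, 2] > 0.4)).astype(int)  # synthetic rule

      X_tr, X_te, y_tr, y_te = train_test_split(X, presence, random_state=0)
      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
      print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")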

  19. Approximate symmetries in atomic nuclei from a large-scale shell-model perspective

    Science.gov (United States)

    Launey, K. D.; Draayer, J. P.; Dytrych, T.; Sun, G.-H.; Dong, S.-H.

    2015-05-01

    In this paper, we review recent developments that aim to achieve further understanding of the structure of atomic nuclei, by capitalizing on exact symmetries as well as approximate symmetries found to dominate low-lying nuclear states. The findings confirm the essential role played by the Sp(3, ℝ) symplectic symmetry to inform the interaction and the relevant model spaces in nuclear modeling. The significance of the Sp(3, ℝ) symmetry for a description of a quantum system of strongly interacting particles naturally emerges from the physical relevance of its generators, which directly relate to particle momentum and position coordinates, and represent important observables, such as, the many-particle kinetic energy, the monopole operator, the quadrupole moment and the angular momentum. We show that it is imperative that shell-model spaces be expanded well beyond the current limits to accommodate particle excitations that appear critical to enhanced collectivity in heavier systems and to highly-deformed spatial structures, exemplified by the second 0+ state in 12C (the challenging Hoyle state) and 8Be. While such states are presently inaccessible by large-scale no-core shell models, symmetry-based considerations are found to be essential.

  20. Toward micro-scale spatial modeling of gentrification

    Science.gov (United States)

    O'Sullivan, David

    A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.
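
    A speculative sketch of a rent-gap style transition rule (my illustration of the concept on a regular grid; the paper's automaton is irregular and based on proximal space): a cell gentrifies once the gap between potential and capitalized rent, nudged upward by already-gentrified neighbours, crosses a threshold.

      import numpy as np
      from scipy.ndimage import convolve

      rng = np.random.default_rng(3)
      shape = (20, 20)
      potential = rng.uniform(0.5, 1.0, shape)     # potential ground rent
      capitalized = rng.uniform(0.1, 0.9, shape)   # rent capitalized under current use
      state = np.zeros(shape, dtype=bool)          # gentrified yet?
      kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)

      for _ in range(10):
          neighbours = convolve(state.astype(float), kernel, mode="constant")
          gap = potential - capitalized + 0.05 * neighbours   # neighbourhood externality
          state |= gap > 0.4                                  # threshold transition rule
      print(f"fraction gentrified after 10 steps: {state.mean():.2f}")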

  1. Atomic scale modelling of materials of the nuclear fuel cycle

    International Nuclear Information System (INIS)

    Bertolus, M.

    2011-10-01

    This document, written to obtain the French accreditation to supervise research, presents the research I have conducted at CEA Cadarache since 1999 on the atomic-scale modelling of non-metallic materials involved in the nuclear fuel cycle: host materials for radionuclides from nuclear waste (apatites), fuel (in particular uranium dioxide) and ceramic cladding materials (silicon carbide). These are complex materials at the frontier of modelling capabilities, since they contain heavy elements (rare earths or actinides), exhibit complex structures or chemical compositions and/or are subjected to irradiation effects: creation of point defects and fission products, amorphization. The objective of my studies is to bring further insight into the physics and chemistry of the elementary processes involved, using atomic-scale modelling and its coupling with higher-scale models and experimental studies. This work is organised in two parts: on the one hand, the development, adaptation and implementation of atomic-scale modelling methods and the validation of the approximations used; on the other hand, the application of these methods to the investigation of nuclear materials under irradiation. This document contains a synthesis of the studies performed, orientations for future research, a detailed resume and a list of publications and communications. (author)

  2. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying observed L-mode current scaling, the 1-1/2-d BALDUR code has been used to simulate density and temperature profiles for high- and low-current, neutral-beam-heated discharges on TFTR with several semi-empirical, theoretically based models previously compared for TFTR, including several versions of trapped-electron drift-wave-driven transport. Experiments at TFTR, JET and DIII-D show that the Ip scaling of τE does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the Ip dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile-consistent drift-wave model and with a new model for toroidal collisionless trapped-electron-mode core transport in a multimode formalism lead to strong current scaling of τE for the L-mode cases on TFTR. None of the theoretically based models succeeded in simulating the measured temperature and density profiles for both high- and low-current experiments

  3. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, including both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = 0.5⟨u'_i u'_i⟩, where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale horizontal averaging operator of the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of its terms indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes relative to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes

  4. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    Science.gov (United States)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the trade-off between computational cost and benefit. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupled model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
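
    A minimal sketch of the head-transfer step described above (interface and variable names assumed): coarse parent-model heads are interpolated first in space and then in time onto a child-model boundary node.

      import numpy as np

      def head_at(parent_x, parent_t, parent_h, x_child, t_child):
          """Interpolate parent heads (shape: len(parent_t) x len(parent_x))
          onto one child boundary node at (x_child, t_child)."""
          h_at_x = np.array([np.interp(x_child, parent_x, row) for row in parent_h])
          return np.interp(t_child, parent_t, h_at_x)

      x = np.linspace(0.0, 1000.0, 11)      # coarse parent grid (m)
      t = np.array([0.0, 86400.0])          # two parent time levels (s)
      h = np.vstack([np.linspace(10.0, 8.0, 11), np.linspace(9.5, 7.5, 11)])
      print(head_at(x, t, h, x_child=125.0, t_child=43200.0))   # 9.5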

  5. An Updated Site Scale Saturated Zone Ground Water Transport Model For Yucca Mountain

    International Nuclear Information System (INIS)

    S. Kelkar; H. Viswanathan; A. Eddebbarrh; M. Ding; P. Reimus; B. Robinson; B. Arnold; A. Meijer

    2006-01-01

    The Yucca Mountain site scale saturated zone transport model has been revised to incorporate the updated flow model based on a hydrogeologic framework model using the latest lithology data, increased grid resolution that better resolves the geology within the model domain, updated Kd distributions for radionuclides of interest, and updated retardation factor distributions for colloid filtration. The resulting numerical transport model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The transport model results are validated by comparing the model transport pathways with those derived from geochemical data, and by comparing the transit times from the repository footprint to the compliance boundary at the accessible environment with those derived from 14C-based age estimates. The transport model includes the processes of advection, dispersion, fracture flow, matrix diffusion, sorption, and colloid-facilitated transport. The transport of sorbing radionuclides in the aqueous phase is modeled as a linear, equilibrium process using the Kd model. The colloid-facilitated transport of radionuclides is modeled using two approaches: the colloids with irreversibly embedded radionuclides undergo reversible filtration only, while the migration of radionuclides that reversibly sorb to colloids is modeled with modified values for the sorption coefficient and matrix diffusion coefficients. Model breakthrough curves for various radionuclides at the compliance boundary are presented along with their sensitivity to various parameters

  6. Extension of landscape-based population viability models to ecoregional scales for conservation planning

    Science.gov (United States)

    Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh

    2011-01-01

    Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...

  7. Collecting and analyzing data in multidimensional scaling experiments: A guide for psychologists using SPSS

    Directory of Open Access Journals (Sweden)

    Gyslain Giguère

    2006-03-01

    Full Text Available This paper aims at providing a quick and simple guide to using a multidimensional scaling procedure to analyze experimental data. First, the operations of data collection and preparation are described. Next, instructions for data analysis using the ALSCAL procedure (Takane, Young and DeLeeuw, 1977), found in SPSS, are detailed. Overall, a description of useful commands, measures and graphs is provided. Emphasis is placed on experimental designs and program use, rather than on the description of techniques in an algebraic or geometrical fashion.
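
    For readers working outside SPSS, a rough analogue of this workflow (scikit-learn's SMACOF-based metric MDS, not ALSCAL itself):

      import numpy as np
      from sklearn.manifold import MDS

      # toy dissimilarity matrix for four stimuli (symmetric, zero diagonal)
      d = np.array([[0.0, 1.0, 4.0, 5.0],
                    [1.0, 0.0, 3.0, 4.0],
                    [4.0, 3.0, 0.0, 1.5],
                    [5.0, 4.0, 1.5, 0.0]])

      mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
      coords = mds.fit_transform(d)
      print(coords)        # 2-D configuration of the stimuli
      print(mds.stress_)   # badness-of-fit (raw stress)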

  8. RESOLVING NEIGHBORHOOD-SCALE AIR TOXICS MODELING: A CASE STUDY IN WILMINGTON, CALIFORNIA

    Science.gov (United States)

    Air quality modeling is useful for characterizing exposures to air pollutants. While models typically provide results on regional scales, there is a need for refined modeling approaches capable of resolving concentrations on the scale of tens of meters, across modeling domains 1...

  9. Multi Scale Models for Flexure Deformation in Sheet Metal Forming

    Directory of Open Access Journals (Sweden)

    Di Pasquale Edmondo

    2016-01-01

    Full Text Available This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modeling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.

  10. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    Science.gov (United States)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reducing the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  11. Genome scale metabolic modeling of cancer

    DEFF Research Database (Denmark)

    Nilsson, Avlant; Nielsen, Jens

    2017-01-01

    Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth, and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome scale metabolic model (GEM) is a mathematical formalization of metabolism which allows simulation and hypothesis testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome...
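
    A toy sketch of what a GEM enables (hypothetical three-reaction network, not a real human or cancer reconstruction): flux balance analysis as a linear program, maximizing a growth flux subject to steady-state mass balance S v = 0 and flux bounds.

      import numpy as np
      from scipy.optimize import linprog

      # rows: metabolites A and B; columns: uptake, conversion, growth reactions
      S = np.array([[1.0, -1.0,  0.0],
                    [0.0,  1.0, -1.0]])
      bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units
      c = [0.0, 0.0, -1.0]                       # maximize growth (linprog minimizes)

      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      print(res.x)   # optimal flux distribution: [10, 10, 10]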

  12. European Continental Scale Hydrological Model, Limitations and Challenges

    Science.gov (United States)

    Rouholahnejad, E.; Abbaspour, K.

    2014-12-01

    The pressures on water resources due to increasing levels of societal demand, increasing conflicts of interest and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration, where water bodies are viewed in the context of the whole river system and managed as units within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at continental scale and the use of globally available data, stream gauge data quality and its misleading impacts on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome long computation times. We used the Soil and Water Assessment Tool (SWAT) as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals for the period 1970-2006. The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically based, data-driven simulation and provides an overall picture of the temporal and spatial distribution of water resources across the continent. The calibrated model and results provide information support to the European Water

  13. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought.

  14. A collective model for transitional nuclei

    International Nuclear Information System (INIS)

    Bernus, L. von; Kappatsch, A.; Rezwani, V.; Scheid, W.; Schneider, U.; Sedlmayr, M.; Sedlmayr, R.

    1975-01-01

    The paper consists of the following sections: 1. Introduction; 2. The model (The quadrupole co-ordinates, the potential energy surface, the Hamilton operator, quadrupole moments, B(E2)-values and rms-radii); 3. The diagonalization of the collective Hamilton operator (The eigen-states of the five-dimensional oscillator, classification of the basis: R(5) is contained in R(3) and R(5) is contained in R(4) = SU(2) x SU(2), calculation of the matrix elements of H, convergence of the numerical procedure); 4. Application of the model (General remarks, typical spectra, selected spectra, conclusions); 5. The coupling of the giant-resonance states with the low-energy spectrum (The Hamilton operator, hydrodynamical model for the GR, the interaction Hamilton operator Hsub(DQ), the basis states for diagonalization, the dipole operator and the γ-absorption cross-section, results); 6. Summary. (author)

  15. Advanced modeling to accelerate the scale up of carbon capture technologies

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C.; Sun, XIN; Storlie, Curtis B.; Bhattacharyya, Debangsu

    2015-06-01

    In order to help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models to reduce the time required to develop and scale-up new carbon capture technologies. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.

  16. Scale modeling of reinforced concrete structures subjected to seismic loading

    International Nuclear Information System (INIS)

    Dove, R.C.

    1983-01-01

    Reinforced concrete, Category I structures are so large that the possibility of seismically testing the prototype structures under controlled conditions is essentially nonexistent. However, experimental data, from which important structural properties can be determined and existing and new methods of seismic analysis benchmarked, are badly needed. As a result, seismic experiments on scaled models are of considerable interest. In this paper, the scaling laws are developed in some detail so that assumptions and choices based on judgement can be clearly recognized and their effects discussed. The scaling laws developed are then used to design a reinforced concrete model of a Category I structure. Finally, how scaling is affected by various types of damping (viscous, structural, and Coulomb) is discussed

  17. The Cell Collective: Toward an open and collaborative approach to systems biology

    Directory of Open Access Journals (Sweden)

    Helikar Tomáš

    2012-08-01

    Background: Despite decades of new discoveries in biomedical research, the overwhelming complexity of cells has been a significant barrier to a fundamental understanding of how cells work as a whole. As such, the holistic study of biochemical pathways requires computer modeling. Due to the complexity of cells, it is not feasible for one person or group to model the cell in its entirety. Results: The Cell Collective is a platform that allows the world-wide scientific community to create these models collectively. Its interface enables users to build and use models without specifying any mathematical equations or computer code - addressing one of the major hurdles with computational research. In addition, this platform allows scientists to simulate and analyze the models in real time on the web, including the ability to simulate loss/gain of function and test what-if scenarios. Conclusions: The Cell Collective is a web-based platform that enables laboratory scientists from across the globe to collaboratively build large-scale models of various biological processes, and simulate/analyze them in real time. In this manuscript, we show examples of its application to a large-scale model of signal transduction.
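
    As a rough illustration of the kind of logical (Boolean) model such a platform hosts, the toy three-regulator network below is invented for this example and updated synchronously; it is not taken from the Cell Collective itself:

    ```python
    # Toy Boolean network: each species is ON/OFF and updated from the states
    # of its regulators, with a negative feedback loop from effector to inhibitor.
    def step(state):
        return {
            "ligand": state["ligand"],                            # external input
            "receptor": state["ligand"],                          # ligand activates
            "effector": state["receptor"] and not state["inhibitor"],
            "inhibitor": state["effector"],                       # negative feedback
        }

    state = {"ligand": True, "receptor": False, "effector": False, "inhibitor": False}
    for t in range(6):  # synchronous updates; such platforms may also support async
        print(t, state)
        state = step(state)
    ```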

  18. Models for wind turbines - a collection

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, M.H. (eds.); Baumgart, A.

    2002-02-01

    This report is a collection of notes which were intended to be short communications. The main target of the work presented is to supply new approaches to stability investigations of wind turbines. The authors' opinion is that an efficient, systematic stability analysis cannot be performed for large systems of differential equations (i.e. the order of the differential equations > 100), because numerical 'effects' in the solution of the equations of motion as an initial value problem, eigenvalue problem or whatsoever become predominant. It is therefore necessary to find models which are reduced to the elementary coordinates but which can still describe the physical processes under consideration with sufficiently good accuracy. Such models are presented. (au)
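
    The reduced-model argument can be made concrete with a small example: for a low-order linearized system, stability is read directly off the eigenvalues of the first-order system matrix. The matrices below are illustrative numbers, not turbine data:

    ```python
    # 2-DOF linearized system M x'' + C x' + K x = 0 recast in first-order form;
    # a negative damping entry mimics a flutter-like instability.
    import numpy as np

    M = np.eye(2)
    C = np.array([[0.10, 0.00], [0.00, -0.02]])   # second mode: negative damping
    K = np.array([[4.0, -0.5], [-0.5, 9.0]])

    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    eig = np.linalg.eigvals(A)
    print(eig)
    print("stable:", bool(np.all(eig.real < 0)))  # False here: one unstable mode
    ```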

  19. Perceived Discrimination and Subjective Well-being in Chinese Migrant Adolescents: Collective and Personal Self-esteem As Mediators

    OpenAIRE

    Jia, Xuji; Liu, Xia; Shi, Baoguo

    2017-01-01

    This study aimed to examine whether collective and personal self-esteem serve as mediators in the relationship between perceived discrimination and subjective well-being among Chinese rural-to-urban migrant adolescents. Six hundred and ninety-two adolescents completed a perceived discrimination scale, a collective self-esteem scale, a personal self-esteem scale, and a subjective well-being scale. Structural equation modeling was used to test the mediation hypothesis. The analysis indicated th...
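
    A hedged sketch of the mediation logic (perceived discrimination -> self-esteem -> well-being) using a bootstrapped indirect effect on synthetic data; the study itself used structural equation modeling, so this only illustrates the idea:

    ```python
    # Bootstrap the indirect effect a*b: a is the X->M slope, b the M->Y slope
    # controlling for X. Data are simulated, not the study's.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 692
    discrim = rng.normal(size=n)
    esteem = -0.4 * discrim + rng.normal(size=n)                    # path a
    wellbeing = 0.5 * esteem - 0.1 * discrim + rng.normal(size=n)   # paths b, c'

    def indirect(idx):
        x, m, y = discrim[idx], esteem[idx], wellbeing[idx]
        a = np.polyfit(x, m, 1)[0]
        design = np.column_stack([x, m, np.ones(len(idx))])
        b = np.linalg.lstsq(design, y, rcond=None)[0][1]
        return a * b

    boots = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 -> mediation
    ```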

  20. Field theory of large amplitude collective motion. A schematic model

    International Nuclear Information System (INIS)

    Reinhardt, H.

    1978-01-01

    By using path integral methods the equation for large amplitude collective motion for a schematic two-level model is derived. The original fermion theory is reformulated in terms of a collective (Bose) field. The classical equation of motion for the collective field coincides with the time-dependent Hartree-Fock equation. Its classical solution is quantized by means of the field-theoretical generalization of the WKB method. (author)

  1. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin

    2016-06-01

    In this paper, we develop a two-scale reduced model for simulating the Darcy flow in two-dimensional porous media with conductive fractures. We apply the approach motivated by the embedded fracture model (EFM) to simulate the flow on the coarse scale, and the effect of fractures on each coarse-scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system is resolved on an unstructured grid which represents the fractures accurately, while in the EFM used on the coarse scale, the flux interaction between fractures and matrix is dealt with as a source term, and the matrix-fracture system can be resolved on a structured grid. The Raviart-Thomas mixed finite element methods are used for the solution of the coupled flows in the matrix and the fractures on both fine and coarse scales. Numerical results are presented to demonstrate the efficiency of the proposed model for simulation of flow in fractured porous media.

  2. Fractionally Integrated Flux model and Scaling Laws in Weather and Climate

    Science.gov (United States)

    Schertzer, Daniel; Lovejoy, Shaun

    2013-04-01

    The Fractionally Integrated Flux model (FIF) has been extensively used to model intermittent observables, like the velocity field, by defining them with the help of a fractional integration of a conservative (i.e. strictly scale invariant) flux, such as the turbulent energy flux. It indeed corresponds to a well-defined modelling approach that yields the observed scaling laws. Generalised Scale Invariance (GSI) enables FIF to deal with anisotropic fractional integrations and has been rather successful in defining and modelling a unique regime of scaling anisotropic turbulence up to planetary scales. This turbulence has an effective dimension of 23/9=2.55... instead of the classically hypothesised 2D and 3D turbulent regimes, for large and small spatial scales respectively. It therefore theoretically eliminates an implausible "dimension transition" between these two regimes and the resulting requirement of a turbulent energy "mesoscale gap", whose empirical evidence has been brought more and more into question. More recently, GSI-FIF was used to analyse climate, therefore at much larger time scales. Indeed, the 23/9-dimensional regime necessarily breaks up at the outer spatial scales. The corresponding transition range, which can be called "macroweather", seems to have many interesting properties; e.g., it rather corresponds to a fractional differentiation in time with a roughly flat frequency spectrum. Furthermore, this transition makes it possible to have, at much larger time scales, scaling space-time climate fluctuations with a much stronger scaling anisotropy between time and space. Lovejoy, S. and D. Schertzer (2013). The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge Press (in press). Schertzer, D. et al. (1997). Fractals 5(3): 427-471. Schertzer, D. and S. Lovejoy (2011). International Journal of Bifurcation and Chaos 21(12): 3417-3456.
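
    The FIF recipe can be sketched numerically under illustrative parameter choices: build a conservative flux as a discrete log-normal multiplicative cascade, then fractionally integrate it with a k^(-H) filter in Fourier space. This is a generic sketch of the construction, not the authors' code:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    levels, H = 13, 1.0 / 3.0

    # conservative flux: discrete multiplicative cascade with log-normal weights
    flux = np.ones(2 ** levels)
    for lev in range(levels):
        w = rng.lognormal(mean=-0.05, sigma=0.3, size=2 ** (lev + 1))
        flux *= np.repeat(w, 2 ** (levels - lev - 1))

    # fractional integration of order H: multiply by |k|^(-H) in Fourier space
    k = np.fft.rfftfreq(flux.size)
    k[0] = np.inf                          # drop the mean mode
    field = np.fft.irfft(np.fft.rfft(flux) * k ** (-H), n=flux.size)
    print(field[:5])
    ```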

  3. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an
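
    A minimal sketch of the threshold approach behind the drought characteristics listed above (a fixed Q80-style threshold with simple pooling of near-adjacent events); the data and pooling window are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    flow = rng.gamma(shape=2.0, scale=1.0, size=365)   # synthetic daily runoff
    threshold = np.percentile(flow, 20)                # Q80-style threshold

    # a drought event is a spell with flow below the threshold
    events, start = [], None
    for t, q in enumerate(flow):
        if q < threshold and start is None:
            start = t
        elif q >= threshold and start is not None:
            events.append([start, t])
            start = None
    if start is not None:
        events.append([start, len(flow)])

    # pool events separated by fewer than 5 days into one event
    pooled = [events[0]]
    for s, e in events[1:]:
        if s - pooled[-1][1] < 5:
            pooled[-1][1] = e
        else:
            pooled.append([s, e])
    print("events:", len(events), "after pooling:", len(pooled))
    ```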

  4. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work.

  5. Wind Farm Wake Models From Full Scale Data

    DEFF Research Database (Denmark)

    Knudsen, Torben; Bak, Thomas

    2012-01-01

    This investigation is part of the EU FP7 project “Distributed Control of Large-Scale Offshore Wind Farms”. The overall goal in this project is to develop wind farm controllers giving power set points to individual turbines in the farm in order to minimise mechanical loads and optimise power. One...... on real full scale data. The modelling is based on the so-called effective wind speed. It is shown that there is a wake for a wind direction range of up to 20 degrees. Further, when accounting for the wind direction, it is shown that the two model structures considered can both fit the experimental data...
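
    For orientation, a classical engineering wake model of the kind such farm-level studies fit to data is the Jensen top-hat deficit; the sketch below uses textbook-style parameter values, not the fitted models of the paper:

    ```python
    import numpy as np

    def jensen_deficit(x, ct=0.8, d=80.0, k=0.05):
        """Fractional wind speed deficit a distance x (m) behind a rotor of
        diameter d with thrust coefficient ct and wake decay constant k."""
        return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x / d) ** 2

    u_free = 10.0  # effective free wind speed (m/s)
    for x in (200.0, 400.0, 800.0):
        print(f"{x:5.0f} m downstream: {u_free * (1 - jensen_deficit(x)):.2f} m/s")
    ```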

  6. Preparing the Model for Prediction Across Scales (MPAS) for global retrospective air quality modeling

    Science.gov (United States)

    The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...

  7. Coarse-graining to the meso and continuum scales with molecular-dynamics-like models

    Science.gov (United States)

    Plimpton, Steve

    Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate, I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point of view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.

  8. Use of genome-scale microbial models for metabolic engineering

    DEFF Research Database (Denmark)

    Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens

    2004-01-01

    Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role for phenotypic analysis, but can also be used for the design of optimal metaboli...... network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction....

  9. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
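
    The design-matrix-free idea can be shown in a few lines: the tensor-product (Kronecker) matrix-vector product is computed factor by factor, so the Kronecker matrix itself is never formed. This is a generic sketch of the standard identity, not the paper's algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    X1, X2 = rng.normal(size=(50, 10)), rng.normal(size=(60, 12))
    B = rng.normal(size=(10, 12))                        # coefficient array

    direct = np.kron(X2, X1) @ B.reshape(-1, order="F")
    arraywise = (X1 @ B @ X2.T).reshape(-1, order="F")   # no Kronecker product
    print(np.allclose(direct, arraywise))                # True
    ```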

  10. Scaling-up spatially-explicit ecological models using graphics processors

    NARCIS (Netherlands)

    Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis

    2011-01-01

    How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to

  11. Preparatory hydrogeological calculations for site scale models of Aberg, Beberg and Ceberg

    International Nuclear Information System (INIS)

    Gylling, B.; Lindgren, M.; Widen, H.

    1999-03-01

    The purpose of the study is to evaluate the basis for site scale models of the three sites Aberg, Beberg and Ceberg in terms of: extent and position of site scale model domains; numerical implementation of the geologic structural model; and systematic review of structural data and control of compatibility between data sets. Some of the hydrogeological features of each site are briefly described. A summary of the results from the regional modelling exercises for Aberg, Beberg and Ceberg is given. The results from the regional models may be used as a basis for determining the location and size of the site scale models and provide such models with boundary conditions. Results from the regional models may also indicate suitable locations for repositories. The resulting locations and sizes for site scale models are presented in figures. There are also figures showing that the structural models interpreted by HYDRASTAR do not conflict with the repository tunnels. In addition, this has been verified with TRAZON, a modified version of HYDRASTAR for checking starting positions, which reveals conflicts between starting positions and fracture zones if present

  12. Modeling Lactococcus lactis using a genome-scale flux model

    Directory of Open Access Journals (Sweden)

    Nielsen Jens

    2005-06-01

    Background: Genome-scale flux models are useful tools to represent and analyze microbial metabolism. In this work we reconstructed the metabolic network of the lactic acid bacterium Lactococcus lactis and developed a genome-scale flux model able to simulate and analyze network capabilities and whole-cell function under aerobic and anaerobic continuous cultures. Flux balance analysis (FBA) and minimization of metabolic adjustment (MOMA) were used as modeling frameworks. Results: The metabolic network was reconstructed using the annotated genome sequence from L. lactis ssp. lactis IL1403 together with physiological and biochemical information. The established network comprised a total of 621 reactions and 509 metabolites, representing the overall metabolism of L. lactis. Experimental data reported in the literature were used to fit the model to phenotypic observations. Regulatory constraints had to be included to simulate certain metabolic features, such as the shift from homo- to heterolactic fermentation. A minimal medium for in silico growth was identified, indicating the requirement of four amino acids in addition to a sugar. Remarkably, de novo biosynthesis of four other amino acids was observed even when all amino acids were supplied, which is in good agreement with experimental observations. Additionally, enhanced metabolic engineering strategies for improved diacetyl producing strains were designed. Conclusion: The L. lactis metabolic network can now be used for a better understanding of lactococcal metabolic capabilities and potential, for the design of enhanced metabolic engineering strategies and for integration with other types of 'omic' data, to assist in finding new information on cellular organization and function.
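
    The core FBA computation is a linear program; a toy version on an invented three-reaction network (not the L. lactis model) looks like this:

    ```python
    # Maximize a "biomass" flux v2 subject to steady state S v = 0 and bounds.
    import numpy as np
    from scipy.optimize import linprog

    # one metabolite A: produced by uptake v0, consumed by v1 and by biomass v2
    S = np.array([[1.0, -1.0, -1.0]])
    bounds = [(0, 10), (0, 5), (0, None)]   # uptake capped at 10
    c = np.array([0.0, 0.0, -1.0])          # linprog minimizes, so use -v2

    res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)
    print("optimal biomass flux:", res.x[2])   # 10: all uptake goes to biomass
    ```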

  13. Evaluation of a Genome-Scale In Silico Metabolic Model for Geobacter metallireducens by Using Proteomic Data from a Field Biostimulation Experiment

    Science.gov (United States)

    Fang, Yilin; Yabusaki, Steven B.; Lipton, Mary S.; Long, Philip E.

    2012-01-01

    Accurately predicting the interactions between microbial metabolism and the physical subsurface environment is necessary to enhance subsurface energy development, soil and groundwater cleanup, and carbon management. This study was an initial attempt to confirm the metabolic functional roles within an in silico model using environmental proteomic data collected during field experiments. Shotgun global proteomics data collected during a subsurface biostimulation experiment were used to validate a genome-scale metabolic model of Geobacter metallireducens—specifically, the ability of the metabolic model to predict metal reduction, biomass yield, and growth rate under dynamic field conditions. The constraint-based in silico model of G. metallireducens relates an annotated genome sequence to the physiological functions with 697 reactions controlled by 747 enzyme-coding genes. Proteomic analysis showed that 180 of the 637 G. metallireducens proteins detected during the 2008 experiment were associated with specific metabolic reactions in the in silico model. When the field-calibrated Fe(III) terminal electron acceptor process reaction in a reactive transport model for the field experiments was replaced with the genome-scale model, the model predicted that the largest metabolic fluxes through the in silico model reactions generally correspond to the highest abundances of proteins that catalyze those reactions. Central metabolism predicted by the model agrees well with protein abundance profiles inferred from proteomic analysis. Model discrepancies with the proteomic data, such as the relatively low abundances of proteins associated with amino acid transport and metabolism, revealed pathways or flux constraints in the in silico model that could be updated to more accurately predict metabolic processes that occur in the subsurface environment. PMID:23042184

  14. Wildland Fire Behaviour Case Studies and Fuel Models for Landscape-Scale Fire Modeling

    Directory of Open Access Journals (Sweden)

    Paul-Antoine Santoni

    2011-01-01

    This work presents the extension of a physical model for the spreading of surface fire to the landscape scale. In previous work, the model was validated at laboratory scale for fire spreading across litters. The model was then modified to consider the structure of actual vegetation and was included in the wildland fire calculation system Forefire, which allows converting the two-dimensional model of fire spread to three dimensions, taking into account spatial information. Two wildland fire behavior case studies were elaborated and used as a basis to test the simulator. Both fires were reconstructed, paying attention to the vegetation mapping, fire history, and meteorological data. The local calibration of the simulator required the development of appropriate fuel models for shrubland vegetation (maquis) for use with the model of fire spread. This study showed the capabilities of the simulator during the typical drought season characterizing the Mediterranean climate, when most wildfires occur.

  15. Atmospheric dispersion modelling over complex terrain at small scale

    Science.gov (United States)

    Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.

    2014-03-01

    A previous study, concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and important surrounding topography at meso-scale (1:9000), revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion, as well as surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of the air quality at populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be observed.

  16. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  17. Description of Muzzle Blast by Modified Ideal Scaling Models

    Directory of Open Access Journals (Sweden)

    Kevin S. Fansler

    1998-01-01

    Gun blast data from a large variety of weapons are scaled and presented for both the instantaneous energy release and the constant energy deposition rate models. For both ideal explosion models, similar amounts of data scatter occur for the peak overpressure but the instantaneous energy release model correlated the impulse data significantly better, particularly for the region in front of the gun. Two parameters that characterize gun blast are used in conjunction with the ideal scaling models to improve the data correlation. The gun-emptying parameter works particularly well with the instantaneous energy release model to improve data correlation. In particular, the impulse, especially in the forward direction of the gun, is correlated significantly better using the instantaneous energy release model coupled with the use of the gun-emptying parameter. The use of the Mach disc location parameter improves the correlation only marginally. A predictive model is obtained from the modified instantaneous energy release correlation.

  18. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored as the Large Hadron Collider has disfavored much minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of Dark Matter models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  19. Anomalous Scaling Behaviors in a Rice-Pile Model with Two Different Driving Mechanisms

    International Nuclear Information System (INIS)

    Zhang Duanming; Sun Hongzhang; Li Zhihua; Pan Guijun; Yu Boming; Li Rui; Yin Yanping

    2005-01-01

    Moment analysis is applied to perform large-scale simulations of the rice-pile model. We find that this model shows different scaling behavior depending on the driving mechanism used. With noisy driving, the rice-pile model violates the finite-size scaling hypothesis, whereas with fixed driving it shows well-defined avalanche exponents and displays good finite-size scaling behavior for the avalanche size and time duration distributions.
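
    A minimal rice-pile (Oslo-type) sketch of the setup being simulated: grains are dropped at the closed end, sites topple when the local slope exceeds a random threshold, and avalanche sizes are recorded for moment analysis. System size and run length are illustrative:

    ```python
    import random

    L, steps = 32, 5000
    rng = random.Random(0)
    h = [0] * (L + 1)                           # h[L] = 0 models the open edge
    zc = [rng.choice((1, 2)) for _ in range(L)] # random critical slopes

    sizes = []
    for _ in range(steps):
        h[0] += 1                               # drive: add one grain at site 0
        s, unstable = 0, True
        while unstable:
            unstable = False
            for i in range(L):
                if h[i] - h[i + 1] > zc[i]:     # topple one grain downhill
                    h[i] -= 1
                    if i + 1 < L:
                        h[i + 1] += 1           # at i = L-1 the grain leaves
                    zc[i] = rng.choice((1, 2))  # draw a new threshold
                    s, unstable = s + 1, True
        sizes.append(s)

    moment2 = sum(s ** 2 for s in sizes) / len(sizes)
    print("second moment of avalanche size:", moment2)
    ```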

  20. Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model

    Energy Technology Data Exchange (ETDEWEB)

    T. Miller

    2004-11-15

    The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the ''Saturated Zone Site-Scale Flow Model'' (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale

  1. Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model

    International Nuclear Information System (INIS)

    Miller, T.

    2004-01-01

    The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the ''Saturated Zone Site-Scale Flow Model'' (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale SZ flow model, the HFM

  2. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ~ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales
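
    The skewness statistic at stake can be estimated directly from a density field; the sketch below measures S_3 = <δ³>/<δ²>² on a synthetic lognormal field at several smoothing scales (a stand-in, not an N-body result):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    g = rng.normal(size=(256, 256))
    lognorm = np.exp(0.5 * g)
    delta = lognorm / lognorm.mean() - 1.0        # synthetic overdensity field

    for cells in (2, 4, 8):                       # top-hat smoothing block sizes
        n = 256 // cells
        sm = delta.reshape(n, cells, n, cells).mean(axis=(1, 3))
        s3 = np.mean(sm ** 3) / np.mean(sm ** 2) ** 2
        print(f"smoothing {cells:2d} cells: S3 ~ {s3:.2f}")
    ```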

  3. Modeling and simulation of the SDC data collection chip

    International Nuclear Information System (INIS)

    Hughes, E.; Haney, M.; Golin, E.; Jones, L.; Knapp, D.; Tharakan, G.; Downing, R.

    1992-01-01

    This paper describes modeling and simulation of the Data Collection Chip (DCC) design for the Solenoidal Detector Collaboration (SDC). Models of the DCC written in Verilog and VHDL are described, and results are presented. The models have been simulated to study queue depth requirements and to compare control feedback alternatives. Insight into the management of models and simulation tools is given. Finally, techniques useful in the design process for data acquisition systems are discussed
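
    The queue-depth question such simulations answer can be caricatured in a few lines of discrete-time simulation; the arrival and drain rates below are invented, not DCC parameters:

    ```python
    import random

    rng = random.Random(5)
    depth, peak = 0, 0
    for cycle in range(100_000):
        if rng.random() < 0.3:              # an event fragment arrives
            depth += rng.randint(1, 8)      # fragment size in words
        depth = max(depth - 2, 0)           # readout drains 2 words per cycle
        peak = max(peak, depth)
    print("peak queue depth:", peak)        # sizes the required buffer
    ```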

  4. Anomalous scaling in an age-dependent branching model.

    Science.gov (United States)

    Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin

    2015-02-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(-α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and algebraic growth. At the transition (α=1) tree depth grows as (log n)^2. This anomalous scaling is in good agreement with the trend observed in the evolution of biological species, thus providing theoretical support for age-dependent speciation and associating it with the occurrence of a critical point.
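
    The growth rule translates directly into a simulation; a sketch that grows trees with splitting weights proportional to age^(-α) and reports mean leaf depth versus size:

    ```python
    import random

    def mean_depth(n_leaves, alpha, rng):
        leaves, t = [(0, 0)], 0             # each leaf: (birth time, depth)
        while len(leaves) < n_leaves:
            t += 1
            weights = [(t - birth) ** (-alpha) for birth, _ in leaves]
            i = rng.choices(range(len(leaves)), weights=weights)[0]
            _, d = leaves.pop(i)
            leaves += [(t, d + 1), (t, d + 1)]   # the branch splits in two
        return sum(d for _, d in leaves) / len(leaves)

    rng = random.Random(6)
    for n in (128, 256, 512):   # at alpha=1, depth should grow like (log n)^2
        print(n, round(mean_depth(n, alpha=1.0, rng=rng), 2))
    ```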

  5. Air scaling and modeling studies for the 1/5-scale Mark I boiling water reactor pressure suppression experiment

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-01-04

    Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this.

  6. Air scaling and modeling studies for the 1/5-scale Mark I boiling water reactor pressure suppression experiment

    International Nuclear Information System (INIS)

    Lai, W.; McCauley, E.W.

    1978-01-01

    Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this

  7. Doubly stochastic Poisson process models for precipitation at fine time-scales

    Science.gov (United States)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
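
    The doubly stochastic idea in miniature: counts are Poisson given a hidden intensity that switches between a dry and a storm state. All rates below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    rates = (0.02, 2.0)        # mean events per 5-min interval (dry, storm)
    p_switch = (0.01, 0.10)    # probability of leaving each state per interval

    state, counts = 0, []
    for _ in range(2000):
        if rng.random() < p_switch[state]:
            state = 1 - state
        counts.append(rng.poisson(rates[state]))
    counts = np.array(counts)
    print("mean:", counts.mean(), "var:", counts.var())   # var >> mean: clustering
    ```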

  8. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    Science.gov (United States)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in an HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, the key contacts and responsibilities. The DMP has fields that can be exported to the ISO19115 schema and to the collection level catalogue of GeoNetwork. The subset or file level metadata catalogues are linked with the collection level through parent-child relationship definition using UUID. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data
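
    A toy illustration (not NCI's actual schema) of the parent-child linkage described above, with collection- and file-level records tied together by UUIDs:

    ```python
    import uuid

    collection = {"uuid": str(uuid.uuid4()), "title": "Climate model outputs"}
    files = [
        {"uuid": str(uuid.uuid4()), "parent": collection["uuid"], "path": f"part-{i}.nc"}
        for i in range(3)
    ]
    # resolving the children of a collection is then a simple filter
    children = [f["uuid"] for f in files if f["parent"] == collection["uuid"]]
    print(collection["uuid"], "has", len(children), "child records")
    ```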

  9. Modeling collective emotions: a stochastic approach based on Brownian agents

    International Nuclear Information System (INIS)

    Schweitzer, F.

    2010-01-01

    We develop an agent-based framework to model the emergence of collective emotions, which is applied to online communities. Agents' individual emotions are described by their valence and arousal. Using the concept of Brownian agents, these variables change according to stochastic dynamics, which also consider the feedback from online communication. Agents generate emotional information, which is stored and distributed in a field modeling the online medium. This field affects the emotional states of agents in a non-linear manner. We derive conditions for the emergence of collective emotions, observable in a bimodal valence distribution. Depending on a saturated or a superlinear feedback between the information field and the agents' arousal, we further identify scenarios where collective emotions appear only once or in a repeated manner. The analytical results are illustrated by agent-based computer simulations. Our framework provides testable hypotheses about the emergence of collective emotions, which can be verified by data from online communities. (author)
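
    An Euler-Maruyama caricature of the framework, with invented coefficients: each agent's valence relaxes toward zero, is pushed nonlinearly by a shared information field, and is kicked by noise, while the field integrates the expressed emotions and decays:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    N, T, dt = 200, 2000, 0.01
    v = rng.normal(0.0, 0.1, N)          # agent valences in [-1, 1]
    field = 0.0

    for _ in range(T):
        field += dt * (np.abs(v).mean() * np.sign(v.mean()) - 0.5 * field)
        drift = -0.9 * v + 2.0 * field * (1.0 - v ** 2)   # nonlinear coupling
        v = np.clip(v + dt * drift + np.sqrt(dt) * 0.3 * rng.normal(size=N), -1, 1)

    hist, _ = np.histogram(v, bins=5, range=(-1, 1))
    print(hist)   # valence mass piling up at an extreme marks a collective emotion
    ```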

  10. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  11. A pragmatic approach to modelling soil and water conservation measures with a catchment scale erosion model.

    NARCIS (Netherlands)

    Hessel, R.; Tenge, A.J.M.

    2008-01-01

    To reduce soil erosion, soil and water conservation (SWC) methods are often used. However, no method exists to model beforehand how implementing such measures will affect erosion at catchment scale. A method was developed to simulate the effects of SWC measures with catchment scale erosion models.

  12. Upscaling of U(VI) desorption and transport from decimeter‐scale heterogeneity to plume‐scale modeling

    Science.gov (United States)

    Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan; Briggs, Martin A.; Day-Lewis, Frederick D.

    2015-01-01

    Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results in batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  13. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects...... pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary......
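
    The kernel-density step lends itself to a compact sketch: pooled boundary indications are smoothed with a Gaussian kernel whose bandwidth sets the time scale, and local maxima become candidate boundaries. The indication times are invented:

    ```python
    import numpy as np

    indications = np.array([10.2, 10.5, 10.4, 30.6, 30.9, 31.0, 31.2, 55.1])
    t = np.linspace(0, 70, 701)

    def density(t, pts, bw):                 # unnormalized Gaussian KDE
        return np.exp(-0.5 * ((t[:, None] - pts) / bw) ** 2).sum(axis=1)

    for bw in (0.5, 2.0, 8.0):               # one segmentation per time scale
        d = density(t, indications, bw)
        peaks = t[1:-1][(d[1:-1] > d[:-2]) & (d[1:-1] > d[2:])]
        print(f"bw={bw}: boundaries near {np.round(peaks, 1)}")
    ```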

  14. Autonomous Sensors for Large Scale Data Collection

    Science.gov (United States)

    Noto, J.; Kerr, R.; Riccobono, J.; Kapali, S.; Migliozzi, M. A.; Goenka, C.

    2017-12-01

    Presented here is a novel implementation of a "Doppler imager" which remotely measures winds and temperatures of the neutral background atmosphere at ionospheric altitudes of 87-300 km and possibly above. It incorporates recent optical manufacturing developments, modern network awareness, and the application of machine learning techniques for intelligent self-monitoring and data classification. This system achieves cost savings in manufacturing, deployment and lifetime operating costs. Deployed in both ground- and space-based modalities, this cost-disruptive technology will allow computer models of ionospheric variability and other space weather models to operate with higher precision. Other sensors can be folded into the data collection and analysis architecture easily, creating autonomous virtual observatories. A prototype version of this sensor has recently been deployed in Trivandrum, India for the Indian Government. This Doppler imager is capable of operation even within the restricted CubeSat environment. The CubeSat bus offers a very challenging environment, even for small instruments. The tight SWaP (size, weight, and power) budget and the challenging thermal environment demand development of a new generation of instruments; the Doppler imager presented is well suited to this environment. Concurrent with this CubeSat development is the development and construction of ground-based arrays of inexpensive sensors using the proposed technology. This instrument could be flown inexpensively on one or more CubeSats to provide valuable data to space weather forecasters and ionospheric scientists. Arrays of magnetometers have been deployed for the last 20 years [Alabi, 2005]. Other examples of ground-based arrays include an array of white-light all-sky imagers (THEMIS) deployed across Canada [Donovan et al., 2006], ocean sensors on buoys [McPhaden et al., 2010], and arrays of seismic sensors [Schweitzer et al., 2002]. A comparable array of Doppler imagers can be constructed and deployed on the

  15. Confirmatory Factor Analysis of the Combined Social Phobia Scale and Social Interaction Anxiety Scale: Support for a Bifactor Model

    OpenAIRE

    Gomez, Rapson; Watson, Shaun D.

    2017-01-01

    For the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) together, this study examined support for a bifactor model, and also the internal consistency reliability and external validity of the factors in this model. Participants (N = 526) were adults from the general community who completed the SPS and SIAS. Confirmatory factor analysis (CFA) of their ratings indicated good support for the bifactor model. For this model, the loadings for all but six items were higher o...

  16. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
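
    The comparison logic reduces to computing verification scores at several aggregation scales; the sketch below does this for synthetic gauge and product series (bias, RMSE and correlation are generic choices, not necessarily the study's criteria):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    gauge = rng.gamma(2.0, 1.0, 360)                   # "observed" daily series
    product = 0.9 * gauge + rng.normal(0.0, 0.8, 360)  # "gridded product" series

    for days in (1, 10, 30):                           # aggregate to coarser scales
        n = 360 // days
        g = gauge[: n * days].reshape(n, days).sum(axis=1)
        p = product[: n * days].reshape(n, days).sum(axis=1)
        bias = p.mean() - g.mean()
        rmse = np.sqrt(np.mean((p - g) ** 2))
        corr = np.corrcoef(g, p)[0, 1]
        print(f"{days:3d}-day: bias={bias:6.2f} rmse={rmse:6.2f} corr={corr:.2f}")
    ```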

  17. Measurement and modeling of two-phase flow parameters in scaled 8 × 8 BWR rod bundle

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X.; Schlegel, J.P.; Liu, Y.; Paranjape, S.; Hibiki, T. [School of Nuclear Engineering, Purdue University, 400 Central Dr., West Lafayette, IN 47907-2017 (United States); Ishii, M., E-mail: ishii@purdue.edu [School of Nuclear Engineering, Purdue University, 400 Central Dr., West Lafayette, IN 47907-2017 (United States)

    2012-04-15

    Highlights: • Grid spacers have a significant but not well understood effect on flow behavior and development. • Two different length scales are present in rod bundles, which must be accounted for in modeling. • An easy-to-implement empirical model has been developed for the two-phase friction multiplier. - Abstract: The behavior of reactor systems is predicted using advanced computational codes in order to determine the safety characteristics of the system during various accidents and to determine the performance characteristics of the reactor. These codes generally utilize the two-fluid model for predictions of two-phase flows, as this model is the most accurate and detailed model which is currently practical for predicting large-scale systems. One of the weaknesses of this approach however is the need to develop constitutive models for various quantities. Of specific interest are the models used in the prediction of void fraction and pressure drop across the rod bundle due to their importance in new Natural Circulation Boiling Water Reactor (NCBWR) designs, where these quantities determine the coolant flow rate through the core. To verify the performance of these models and expand the existing experimental database, data has been collected in an 8 × 8 rod bundle which is carefully scaled from actual BWR geometry and includes grid spacers to maintain rod spacing. While these spacer grids are 'generic', their inclusion does provide valuable data for analysis of the effect of grid spacers on the flow. In addition to pressure drop measurements, the area-averaged void fraction has been measured by impedance void meters, and local conductivity probes have been used to measure the local void fraction and interfacial area concentration in the bundle subchannels. Experimental conditions covered a wide range of flow rates and void fractions up to 80%.
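
    For context, an "easy-to-implement" two-phase friction multiplier of the classical Lockhart-Martinelli/Chisholm form is shown below; this is a textbook correlation, not the bundle-specific model developed in the paper:

    ```python
    import numpy as np

    def phi_l_squared(X, C=20.0):
        """Chisholm two-phase friction multiplier; C = 20 is the classical
        turbulent-turbulent value, X is the Martinelli parameter."""
        return 1.0 + C / X + 1.0 / X ** 2

    for X in (0.1, 1.0, 10.0):
        print(f"X = {X:5.2f}: phi_l^2 = {phi_l_squared(X):8.1f}")
    ```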

  18. FFTF scale-model characterization of flow-induced vibrational response of reactor internals

    International Nuclear Information System (INIS)

    Ryan, J.A.; Julyk, L.J.

    1977-01-01

    As an integral part of the Fast Test Reactor Vibration Program for Reactor Internals, the flow-induced vibrational characteristics of scaled Fast Test Reactor core internal and peripheral components were assessed under scaled and simulated prototype flow conditions in the Hydraulic Core Mockup. The Hydraulic Core Mockup, a 0.285 geometric scale model, was designed to model the vibrational and hydraulic characteristics of the Fast Test Reactor. Model component vibrational characteristics were measured and determined over a range of 36 percent to 111 percent of the scaled prototype design flow. Selected model and prototype components were shaker tested to establish modal characteristics. The dynamic response of the Hydraulic Core Mockup components exhibited no anomalous flow-rate dependent or modal characteristics, and prototype response predictions were adjudged acceptable

  1. Large scale collective modeling of the final 'freeze out' stages of energetic heavy ion reactions and calculation of single particle measurables from these models

    Energy Technology Data Exchange (ETDEWEB)

    Nyiri, Agnes

    2005-07-01

    The goal of this PhD project was to develop the existing, but still incomplete, Multi Module Model, focusing especially on the last module, which describes the final stages of a heavy ion collision, as this module was still missing. The major original achievements summarized in this thesis concern the freeze out problem and the calculation of an important measurable, the anisotropic flow. Summary of results: Freeze out: The importance of freeze out models is that they allow the evaluation of observables, which can then be compared to experimental results. It is therefore crucial to find a realistic freeze out description, which has proved to be a non-trivial task. Recently, several kinetic freeze out models have been developed. Based on the earlier results, we have introduced new ideas and improved models, which may contribute to a more realistic description of the freeze out process. We have investigated the applicability of the Boltzmann Transport Equation (BTE) to describe dynamical freeze out. We have introduced the so-called Modified Boltzmann Transport Equation (MBTE), which has a form very similar to that of the BTE, but takes into account those characteristics of the FO process which the BTE cannot handle, e.g. the rapid change of the phase-space distribution function in the direction normal to the finite FO layer. We have shown that the main features of earlier ad hoc kinetic FO models can be obtained from the BTE and the MBTE. We have discussed the qualitative differences between the two approaches and presented some quantitative comparison as well. Since the introduced modification of the BTE makes it very difficult to solve the FO problem from first principles, it is important to work out simplified phenomenological models which can explain the basic features of the FO process. We have built and discussed such a model. Flow analysis: The other main subject of this thesis has been the collective flow in heavy ion collisions. Collective flow from ultra
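
    For readers unfamiliar with the observable discussed above: anisotropic flow is conventionally quantified by the Fourier coefficients v_n = <cos n(phi - Psi_RP)> of the azimuthal particle distribution relative to the reaction plane (v_1 is directed and v_2 elliptic flow). A minimal sketch of that definition, applied to a toy model-generated event where the reaction plane is known:

        import numpy as np

        def flow_coefficients(phi, psi_rp=0.0, n_max=4):
            """Anisotropic flow coefficients v_n = <cos n(phi - Psi_RP)>."""
            phi = np.asarray(phi)
            return {n: np.mean(np.cos(n * (phi - psi_rp))) for n in range(1, n_max + 1)}

        # Toy check: sample dN/dphi ~ 1 + 2*v2*cos(2*phi) by rejection sampling
        rng = np.random.default_rng(0)
        v2_true = 0.08
        cand = rng.uniform(-np.pi, np.pi, 200000)
        keep = rng.uniform(0.0, 1.0 + 2.0 * v2_true, cand.size) \
               < 1.0 + 2.0 * v2_true * np.cos(2.0 * cand)
        vn = flow_coefficients(cand[keep])
        print(f"v2 = {vn[2]:.3f} (input {v2_true})")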

  2. Seven challenges for model-driven data collection in experimental and observational studies

    Directory of Open Access Journals (Sweden)

    J. Lessler

    2015-03-01

    Infectious disease models are both concise statements of hypotheses and powerful techniques for creating tools from hypotheses and theories. As such, they have tremendous potential for guiding data collection in experimental and observational studies, leading to more efficient testing of hypotheses and more robust study designs. In numerous instances, infectious disease models have played a key role in informing data collection, including the Garki project studying malaria, the response to the 2009 pandemic of H1N1 influenza in the United Kingdom, and studies of T-cell immunodynamics in mammals. However, such synergies remain the exception rather than the rule, and a close marriage of dynamic modeling and empirical data collection is far from the norm in infectious disease research. Overcoming the challenges to using models to inform data collection has the potential to accelerate innovation and to improve practice in how we deal with infectious disease threats.

  3. Small Scale Problems of the ΛCDM Model: A Short Review

    Directory of Open Access Journals (Sweden)

    Antonino Del Popolo

    2017-02-01

    The ΛCDM model, or concordance cosmology, as it is often called, is a paradigm at its maturity. It is clearly able to describe the universe at large scale, even if some issues remain open, such as the cosmological constant problem, the small-scale problems in galaxy formation, or the unexplained anomalies in the CMB. ΛCDM clearly shows difficulty at small scales, which could be related to our scant understanding, from the nature of dark matter to that of gravity; or to the role of baryon physics, which is not well understood and implemented in simulation codes or in semi-analytic models. At this stage, it is of fundamental importance to understand whether the problems encountered by the ΛCDM model are a sign of its limits or a sign of our failures in getting the finer details right. In the present paper, we will review the small-scale problems of the ΛCDM model, and we will discuss the proposed solutions and to what extent they are able to give us a theory accurately describing the phenomena across the complete range of scales of the observed universe.

  4. Analysis, scale modeling, and full-scale test of a railcar and spent-nuclear-fuel shipping cask in a high-velocity impact against a rigid barrier

    International Nuclear Information System (INIS)

    Huerta, M.

    1981-06-01

    This report describes the mathematical analysis, the physical scale modeling, and a full-scale crash test of a railcar spent-nuclear-fuel shipping system. The mathematical analysis utilized a lumped-parameter model to predict the structural response of the railcar and the shipping cask. The physical scale modeling analysis consisted of two crash tests that used 1/8-scale models to assess railcar and shipping cask damage. The full-scale crash test, conducted with retired railcar equipment, was carefully monitored with onboard instrumentation and high-speed photography. Results of the mathematical and scale modeling analyses are compared with the full-scale test. 29 figures

  5. Verification of Simulation Results Using Scale Model Flight Test Trajectories

    National Research Council Canada - National Science Library

    Obermark, Jeff

    2004-01-01

    … A second compromise scaling law was investigated as a possible improvement. For ejector-driven events at minimum sideslip, the most important variables for scale model construction are the mass moment of inertia and ejector…

  6. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer and land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. Also how to

  7. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strengths, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanically built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has much more versatile and exceptional applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with the data of bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  8. A general model for metabolic scaling in self-similar asymmetric networks.

    Directory of Open Access Journals (Sweden)

    Alexander Byers Brummer

    2017-03-01

    How a particular attribute of an organism changes or scales with its body size is known as an allometry. Biological allometries, such as metabolic scaling, have been hypothesized to result from selection to maximize how vascular networks fill space yet minimize internal transport distances and resistances. The West, Brown, Enquist (WBE) model argues that these two principles (space-filling and energy minimization) are (i) general principles underlying the evolution of the diversity of biological networks across plants and animals and (ii) can be used to predict how the resulting geometry of biological networks then governs their allometric scaling. Perhaps the most central biological allometry is how metabolic rate scales with body size. A core assumption of the WBE model is that networks are symmetric with respect to their geometric properties. That is, any two given branches within the same generation in the network are assumed to have identical lengths and radii. However, biological networks are rarely if ever symmetric. An open question is: does incorporating asymmetric branching change or influence the predictions of the WBE model? We derive a general network model that relaxes the symmetric assumption and define two classes of asymmetrically bifurcating networks. We show that asymmetric branching can be incorporated into the WBE model. This asymmetric version of the WBE model results in several theoretical predictions for the structure, physiology, and metabolism of organisms, specifically in the case of the cardiovascular system. We show how network asymmetry can now be incorporated in the many allometric scaling relationships via total network volume. Most importantly, we show that the 3/4 metabolic scaling exponent from Kleiber's Law can still be attained within many asymmetric networks.
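
    For orientation, the symmetric special case of the WBE argument can be checked numerically: with area-preserving radii (beta = n^-1/2) and space-filling lengths (gamma = n^-1/3) built up from invariant terminal units, total network volume grows as n^(4N/3) while metabolic rate scales as the number of terminal units n^N, giving B proportional to V^(3/4). The sketch below covers only that symmetric case with illustrative unit values; the paper's asymmetric generalization would replace the single (beta, gamma) pair with branch-specific factors.

        import numpy as np

        n = 2                                   # branches per junction
        beta = n ** -0.5                        # area-preserving radius factor
        gamma = n ** (-1.0 / 3.0)               # space-filling length factor
        r_c, l_c = 1.0, 1.0                     # invariant terminal (capillary) unit

        def network_volume(N):
            """Volume of a symmetric self-similar network with levels k = 0..N,
            built upward from fixed terminal units: r_k = r_c * beta**(k - N)."""
            k = np.arange(N + 1)
            radii = r_c * beta ** (k - N)       # grows toward the root (aorta)
            lengths = l_c * gamma ** (k - N)
            return np.sum(n ** k * np.pi * radii ** 2 * lengths)

        Ns = np.arange(8, 20)
        V = np.array([network_volume(N) for N in Ns])
        B = n ** Ns.astype(float)               # metabolic rate ~ number of capillaries
        print(np.polyfit(np.log(V), np.log(B), 1)[0])   # -> ~0.75 (Kleiber's 3/4)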

  9. Multi-scale modelling of the hydro-mechanical behaviour of argillaceous rocks

    International Nuclear Information System (INIS)

    Van den Eijnden, Bram

    2015-01-01

    Feasibility studies for deep geological radioactive waste disposal facilities have led to an increased interest in the geomechanical modelling of its host rock. In France, a potential host rock is the Callovo-Oxfordian clay-stone. The low permeability of this material is of key importance, as the principle of deep geological disposal strongly relies on the sealing capacity of the host formation. The permeability being coupled to the mechanical material state, hydro-mechanical coupled behaviour of the clay-stone becomes important when mechanical alterations are induced by gallery excavation in the so-called excavation damaged zone (EDZ). In materials with microstructure such as the Callovo-Oxfordian clay-stone, the macroscopic behaviour has its origin in the interaction of its micromechanical constituents. In addition to the coupling between hydraulic and mechanical behaviour, a coupling between the micro (material microstructure) and macro scale will be made. By means of the development of a framework of computational homogenization for hydro-mechanical coupling, a double-scale modelling approach is formulated, for which the macro-scale constitutive relations are derived from the microscale by homogenization. An existing model for the modelling of hydro-mechanical coupling based on the distinct definition of grains and intergranular pore space is adopted and modified to enable the application of first order computational homogenization for obtaining macro-scale stress and fluid transport responses. This model is used to constitute a periodic representative elementary volume (REV) that allows the representation of the local macroscopic behaviour of the clay-stone. As a response to deformation loading, the behaviour of the REV represents the numerical equivalent of a constitutive relation at the macro-scale. For the required consistent tangent operators, the framework of computational homogenization by static condensation is extended to hydro-mechanical coupling. The

  10. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    International Nuclear Information System (INIS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-01-01

    We investigate, in the transverse traceless (TT) gauge, the generation during the early inflationary stage of the relic background of gravitational waves, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates when gravity becomes repulsive in nature.

  13. Vibrational collective model for spherical even-even nuclei

    International Nuclear Information System (INIS)

    Cruz, M.T.F. da.

    1985-01-01

    A review is made of the evidence for collective motions in spherical even-even nuclei. The several multipole transitions occurring in such nuclei are discussed. Some hypotheses which are necessary in order to build up the model are presented. (L.C.) [pt

  14. Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process

    DEFF Research Database (Denmark)

    Konttinen, Jukka T.; Johnsson, Jan Erik

    1999-01-01

    Regenerable mixed metal oxide sorbents are prime candidates for the removal of hydrogen sulfide from hot gasifier gas in the simplified integrated gasification combined cycle (IGCC) process. As part of the regenerative sulfur removal process development, reactor models are needed for scale-up. Steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400 […] model that does not account for bed hydrodynamics. The pilot-scale test run results, obtained in test runs of the sulfur removal process with real coal gasifier gas, have been used for parameter estimation. The validity of the reactor model for commercial-scale design applications is discussed.

  15. Full scale model studies of nuclear power stations for earthquake resistance

    International Nuclear Information System (INIS)

    Kirillov, A.P.; Ambriashvili, Ju. K.; Kozlov, A.V.

    The behaviour of nuclear power plants and their equipment under seismic action is not well understood. In the absence of a well established method for the aseismic design of nuclear power plants and their equipment, it is necessary to carry out experimental investigations on models, fragments and full scale structures. The present study includes experimental investigations of different scale models and of existing nuclear power stations under impulse and explosion effects simulating seismic loads. The experimental work was aimed at developing a model test procedure for nuclear power stations and at evaluating the possible range of dynamic stresses in structures and pipe lines. The results of full-scale investigations of the nuclear reactor show a good agreement of the dynamic characteristics of the model and the prototype. The study confirms the feasibility of model simulation for nuclear power plants. (auth.)

  16. Patient participation in collective healthcare decision making: the Dutch model

    NARCIS (Netherlands)

    van de Bovenkamp, H.; Trappenburg, M.J.; Grit, K.

    2010-01-01

    Objective: To study whether the Dutch participation model is a good model of participation. Background: Patient participation is on the agenda, both on the individual and the collective level. In this study, we focus on the latter by looking at the Dutch model in which patient organizations are

  19. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  20. Coalescing colony model: Mean-field, scaling, and geometry

    Science.gov (United States)

    Carra, Giulia; Mallick, Kirone; Barthelemy, Marc

    2017-12-01

    We analyze the coalescing model where a 'primary' colony grows and randomly emits secondary colonies that spread and eventually coalesce with it. This model describes population proliferation in theoretical ecology, tumor growth, and is also of great interest for modeling urban sprawl. Assuming the primary colony to be always circular of radius r(t) and the emission rate proportional to r(t)^θ, where θ > 0, we derive the mean-field equations governing the dynamics of the primary colony, calculate the scaling exponents versus θ, and compare our results with numerical simulations. We then critically test the validity of the circular approximation for the colony shape and show that it is sound for a constant emission rate (θ = 0). However, when the emission rate is proportional to the perimeter, the circular approximation breaks down and the roughness of the primary colony cannot be discarded, thus modifying the scaling exponents.
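
    The mean-field equations themselves are not reproduced in this record, but the model as described is easy to simulate directly. The sketch below is a deliberately reduced toy version — secondary colonies are tracked only by radial distance, the emission kernel (exponential distance beyond the front) is an assumption, and merging conserves area — meant to convey the mechanics, not to reproduce the paper's exponents.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(theta=1.0, v=1.0, emit0=0.05, dt=0.1, t_max=200.0):
            """Toy coalescing-colony dynamics: a circular primary colony of radius
            r grows at speed v; secondary colonies appear at rate emit0 * r**theta,
            grow at the same speed, and merge with the primary once they touch."""
            r, t = 1.0, 0.0
            secondaries = []                 # [distance from center, radius]
            ts, rs = [], []
            while t < t_max:
                r += v * dt
                for _ in range(rng.poisson(emit0 * r ** theta * dt)):
                    secondaries.append([r + rng.exponential(5.0), 0.0])
                survivors = []
                for s in secondaries:
                    s[1] += v * dt
                    if s[0] <= r + s[1]:                  # touching -> coalesce
                        r = np.sqrt(r ** 2 + s[1] ** 2)   # conserve total area
                    else:
                        survivors.append(s)
                secondaries = survivors
                t += dt
                ts.append(t)
                rs.append(r)
            return np.array(ts), np.array(rs)

        t, r = simulate(theta=1.0)
        tail = slice(len(t) // 2, None)       # late-time effective growth exponent
        print(np.polyfit(np.log(t[tail]), np.log(r[tail]), 1)[0])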

  1. Validity of the Neuromuscular Recovery Scale: a measurement model approach.

    Science.gov (United States)

    Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L

    2015-08-01

    Objective: To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Design: Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality. The Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Setting: Seven NeuroRecovery Network clinical sites. Participants: Outpatients (N=188) with spinal cord injury. Interventions: Not applicable. Main outcome measure: NRS. Results: While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Items not meeting the rating scale criteria appear to be related to the low frequency counts. The NRS met many of the Rasch model criteria for construct validity.
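
    For readers unfamiliar with the partial-credit model referenced above: each item's ordered categories receive probabilities driven by the difference between person ability and per-step difficulties. A minimal sketch of that computation, with purely hypothetical ability and threshold values:

        import numpy as np

        def pcm_probs(theta, deltas):
            """Rasch partial-credit model: P(X = 0..m) for one item, where theta
            is the person ability and deltas are the m step difficulties (logits);
            the empty sum for category 0 is the usual convention."""
            steps = np.concatenate(([0.0], theta - np.asarray(deltas, dtype=float)))
            log_num = np.cumsum(steps)            # log numerators for categories 0..m
            p = np.exp(log_num - log_num.max())   # numerically stable softmax
            return p / p.sum()

        # Hypothetical 4-category item with ordered step difficulties
        print(pcm_probs(theta=0.5, deltas=[-1.0, 0.2, 1.4]))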

  2. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
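
    As background, the Boolean viewshed mentioned above reduces to a line-of-sight test repeated for every cell of the surface model. A compact sketch of that idea — nearest-cell sampling along each sight line, assumed observer and target heights, and a toy ridge standing in for a real UAV-derived model:

        import numpy as np

        def viewshed(dem, obs_rc, obs_h=1.6, tgt_h=1.6, n_samples=64):
            """Boolean viewshed on a gridded surface model: a cell is visible if
            the straight sight line from the observer's eye to a target tgt_h
            above the cell clears the terrain sampled along the way."""
            r0, c0 = obs_rc
            z0 = dem[r0, c0] + obs_h
            frac = np.linspace(0.0, 1.0, n_samples)[1:-1]
            visible = np.zeros_like(dem, dtype=bool)
            visible[r0, c0] = True
            for r in range(dem.shape[0]):
                for c in range(dem.shape[1]):
                    if (r, c) == (r0, c0):
                        continue
                    rr = np.round(r0 + frac * (r - r0)).astype(int)
                    cc = np.round(c0 + frac * (c - c0)).astype(int)
                    sight = z0 + frac * (dem[r, c] + tgt_h - z0)
                    visible[r, c] = np.all(dem[rr, cc] <= sight)
            return visible

        # toy surface: a ridge between the observer and the far half of the grid
        y, x = np.mgrid[0:60, 0:60]
        dem = 8.0 * np.exp(-((x - 40.0) ** 2) / 10.0)
        print(viewshed(dem, (30, 10)).mean())   # fraction of visible cells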

  3. SCALING ANALYSIS OF REPOSITORY HEAT LOAD FOR REDUCED DIMENSIONALITY MODELS

    International Nuclear Information System (INIS)

    MICHAEL T. ITAMURA AND CLIFFORD K. HO

    1998-01-01

    The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important inputs to the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment.

  4. Privacy in Sensor-Driven Human Data Collection: A Guide for Practitioners

    OpenAIRE

    Stopczynski, Arkadiusz; Pietri, Riccardo; Pentland, Alex; Lazer, David; Lehmann, Sune

    2014-01-01

    In recent years, the amount of information collected about human beings has increased dramatically. This development has been partially driven by individuals posting and storing data about themselves and friends using online social networks or collecting their data for self-tracking purposes (quantified-self movement). Across the sciences, researchers conduct studies collecting data with an unprecedented resolution and scale. Using computational power combined with mathematical models, such r...

  5. Collective firing regularity of a scale-free Hodgkin–Huxley neuronal network in response to a subthreshold signal

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Ergin, E-mail: erginyilmaz@yahoo.com [Department of Biomedical Engineering, Engineering Faculty, Bülent Ecevit University, 67100 Zonguldak (Turkey); Ozer, Mahmut [Department of Electrical and Electronics Engineering, Engineering Faculty, Bülent Ecevit University, 67100 Zonguldak (Turkey)

    2013-08-01

    We consider a scale-free network of stochastic HH neurons driven by a subthreshold periodic stimulus and investigate how the collective spiking regularity or the collective temporal coherence changes with the stimulus frequency, the intrinsic noise (or the cell size), the network average degree and the coupling strength. We show that the best temporal coherence is obtained for a certain level of the intrinsic noise when the frequencies of the external stimulus and the subthreshold oscillations of the network elements match. We also find that the collective regularity exhibits a resonance-like behavior depending on both the coupling strength and the network average degree at the optimal values of the stimulus frequency and the cell size, indicating that the best temporal coherence also requires an optimal coupling strength and an optimal average degree of the connectivity.
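
    The record does not spell out the regularity measure; in this literature, collective temporal coherence is commonly quantified by an interspike-interval (ISI) statistic such as lambda = <T>/sqrt(<T^2> - <T>^2) applied to the network-averaged spike train. A sketch under that assumption, with a toy spike train in place of HH network output:

        import numpy as np

        def firing_regularity(spike_times):
            """Regularity lambda = <ISI> / std(ISI) of a spike train; larger
            values indicate more coherent, clock-like collective firing."""
            isi = np.diff(np.sort(np.asarray(spike_times)))
            return isi.mean() / isi.std()

        # Toy check: a periodic train (period 50 ms) with 1 ms Gaussian jitter
        rng = np.random.default_rng(2)
        spikes = np.arange(0.0, 5000.0, 50.0) + rng.normal(0.0, 1.0, 100)
        print(f"lambda = {firing_regularity(spikes):.1f}")   # about 50/sqrt(2) ~ 35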

  6. Meso-scale effects of tropical deforestation in Amazonia: preparatory LBA modelling studies

    Directory of Open Access Journals (Sweden)

    A. J. Dolman

    1999-08-01

    As part of the preparation for the Large-Scale Biosphere Atmosphere Experiment in Amazonia, a meso-scale modelling study was executed to highlight deficiencies in the current understanding of land surface atmosphere interaction at local to sub-continental scales in the dry season. Meso-scale models were run in 1-D and 3-D mode for the area of Rondonia State, Brazil. The important conclusions are that without calibration it is difficult to model the energy partitioning of pasture; modelling that of forest is easier due to the absence of a strong moisture deficit signal. The simulation of the boundary layer above forest is good; above deforested areas (pasture) it is poor. The models' underestimate of the temperature of the boundary layer is likely to be caused by the neglect of the radiative effects of aerosols caused by biomass burning, but other factors such as lack of sufficient entrainment in the model at the mixed layer top may also contribute. The Andes generate patterns of subsidence and gravity waves, the effects of which are felt far into the Rondonian area. The results show that the picture presented by GCM modelling studies may need to be balanced by an increased understanding of what happens at the meso-scale. The results are used to identify key measurements for the LBA atmospheric meso-scale campaign needed to improve the model simulations. Similar modelling studies are proposed for the wet season in Rondonia, when convection plays a major role. Key words: Atmospheric composition and structure (aerosols and particles; biosphere-atmosphere interactions) · Meteorology and atmospheric dynamics (mesoscale meteorology)

  8. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent

  9. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DEFF Research Database (Denmark)

    King, Zachary A.; Lu, Justin; Dräger, Andreas

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repo...
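
    To make the repository concrete: BiGG models can be downloaded and used directly with constraint-based tools such as cobrapy. The sketch below assumes the static-download URL pattern BiGG has used for its model files (verify it before relying on it) and runs a basic flux balance analysis:

        # Minimal sketch: pull a model from BiGG and run flux balance analysis
        # with cobrapy (pip install cobra). The URL is an assumed endpoint.
        import urllib.request
        from cobra.io import load_json_model

        url = "http://bigg.ucsd.edu/static/models/e_coli_core.json"
        urllib.request.urlretrieve(url, "e_coli_core.json")

        model = load_json_model("e_coli_core.json")
        solution = model.optimize()        # FBA: maximize the model's objective
        print(solution.objective_value)    # e.g. predicted growth rate (1/h)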

  10. A confirmatory test of the underlying factor structure of scores on the collective self-esteem scale in two independent samples of Black Americans.

    Science.gov (United States)

    Utsey, Shawn O; Constantine, Madonna G

    2006-04-01

    In this study, we examined the factor structure of the Collective Self-Esteem Scale (CSES; Luhtanen & Crocker, 1992) across 2 separate samples of Black Americans. The CSES was administered to a sample of Black American adolescents (n = 538) and a community sample of Black American adults (n = 313). Results of confirmatory factor analyses (CFAs), however, did not support the original 4-factor model identified by Luhtanen and Crocker (1992) as providing an adequate fit to the data for these samples. Furthermore, an exploratory CFA procedure failed to find a CSES factor structure that could be replicated across the 2 samples of Black Americans. We present and discuss implications of the findings.

  11. Monitoring strategies and scale appropriate hydrologic and biogeochemical modelling for natural resource management

    DEFF Research Database (Denmark)

    Bende-Michl, Ulrike; Volk, Martin; Harmel, Daren

    2011-01-01

    This short communication paper presents recommendations for developing scale-appropriate monitoring and modelling strategies to assist decision making in natural resource management (NRM). The ideas presented here were discussed in the session (S5) 'Monitoring strategies and scale appropriate hydrologic and biogeochemical modelling for natural resource management' […] and communication between researcher and model developer on the one side, and natural resource managers and the model users on the other side, to increase knowledge in: 1) the limitations and uncertainties of current monitoring and modelling strategies, 2) scale-dependent linkages between monitoring and modelling…

  12. A tangential CO₂ laser collective scattering system for measuring short-scale turbulent fluctuations in the EAST superconducting tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Cao, G.M., E-mail: gmcao@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, PO Box 1126, Hefei, Anhui 230031 (China); Li, Y.D. [Institute of Plasma Physics, Chinese Academy of Sciences, PO Box 1126, Hefei, Anhui 230031 (China); Li, Q. [School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou 510006 (China); Zhang, X.D.; Sun, P.J.; Wu, G.J.; Hu, L.Q. [Institute of Plasma Physics, Chinese Academy of Sciences, PO Box 1126, Hefei, Anhui 230031 (China)

    2014-12-15

    Highlights: • A tangential CO₂ laser collective scattering system was installed on EAST for the first time. • It can measure short-scale fluctuations in different regions simultaneously. • It can be used to study broadband fluctuations, QC fluctuations, MHD phenomena, etc. - Abstract: A tangential CO₂ laser collective scattering system has been installed for the first time on the Experimental Advanced Superconducting Tokamak (EAST) to measure short-scale turbulent fluctuations in EAST plasmas. The system can measure fluctuations with up to four distinct wavenumbers simultaneously, ranging from 10 cm⁻¹ to 26 cm⁻¹, corresponding to k⊥ρs ∼ 1.5-4.3. The system is designed based on the oblique propagation of the probe beam with respect to the magnetic field, and thus enhanced spatial localization can be achieved by taking full advantage of turbulence anisotropy and magnetic field inhomogeneity. Simultaneous measurements of turbulent fluctuations in different regions can be taken by a special optical setup. Initial measurements indicate rich short-scale turbulent dynamics in both the core and outer regions of EAST plasmas. The system will be a powerful tool for investigating the features of short-scale turbulent fluctuations in EAST plasmas.

  13. Development of Simplified and Dynamic Model for Double Glazing Unit Validated with Full-Scale Facade Element

    DEFF Research Database (Denmark)

    Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per

    2012-01-01

    The project aims at developing simplified calculation methods for the different features that influence energy demand and indoor environment behind 'intelligent' glazed façades. This paper describes how to set up a simplified model to calculate the thermal and solar properties (U and g values) together with the comfort performance (internal surface temperature of the glazing) of a double glazing unit. The double glazing unit is defined as a 1D model with nodes representing the different layers of material. Several models with different numbers of nodes and positions of these are compared and verified in order to find a simplified method which can calculate the performance as accurately as possible. The calculated performance in terms of internal surface temperature is verified with experimental data collected in a full-scale façade element test facility at Aalborg University (DK). The advantage
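
    As a flavor of what such a nodal model computes, the steady-state sketch below treats the glazing as thermal resistances in series and recovers the U value and internal surface temperature. The surface resistances are ISO 6946-style handbook values and the cavity resistance is an assumed typical value, not the calibrated values of the paper.

        def double_glazing(t_in=20.0, t_out=-5.0,
                           r_si=0.13, r_se=0.04,       # internal/external surfaces
                           d_glass=0.004, k_glass=1.0, # two 4 mm panes
                           r_cavity=0.17):             # unventilated air gap
            r_pane = d_glass / k_glass
            r_total = r_si + r_pane + r_cavity + r_pane + r_se
            u_value = 1.0 / r_total                    # W/(m2 K)
            q = u_value * (t_in - t_out)               # heat flux, W/m2
            t_surf_int = t_in - q * r_si               # internal surface node
            return u_value, t_surf_int

        u, ts = double_glazing()
        print(f"U = {u:.2f} W/(m2 K), internal surface at {ts:.1f} C")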

  14. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation measures based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Thus combustion modelling in large-scale geometries is one of the remaining severe accident safety issues. At present there exists no combustion model which can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore the major attention in model development has to be paid to the adoption of existing approaches or the creation of new ones capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM) where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of numerical simulation are presented together with comparisons, critical discussions and conclusions. (authors)

  15. Large-scale model-based assessment of deer-vehicle collision risk.

    Directory of Open Access Journals (Sweden)

    Torsten Hothorn

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new 'deer-vehicle collision index' for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining

  16. Scale genesis and gravitational wave in a classically scale invariant extension of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Jisuke [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Yamada, Masatoshi [Department of Physics, Kyoto University,Kyoto 606-8502 (Japan); Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, 69120 Heidelberg (Germany)

    2016-12-01

    We assume that the origin of the electroweak (EW) scale is a gauge-invariant scalar-bilinear condensation in a strongly interacting non-abelian gauge sector, which is connected to the standard model via a Higgs portal coupling. The dynamical scale genesis appears as a phase transition at finite temperature, and it can produce a gravitational wave (GW) background in the early Universe. We find that the critical temperature of the scale phase transition lies above that of the EW phase transition and below a few hundred GeV, and that the transition is strongly first-order. We calculate the spectrum of the GW background and find that the scale phase transition is strong enough for the GW background to be observable by DECIGO.

  17. Collective motion of active Brownian particles with polar alignment.

    Science.gov (United States)

    Martín-Gómez, Aitor; Levis, Demian; Díaz-Guilera, Albert; Pagonabarraga, Ignacio

    2018-04-04

    We present a comprehensive computational study of the collective behavior emerging from the competition between self-propulsion, excluded volume interactions and velocity alignment in a two-dimensional model of active particles. We consider an extension of the active Brownian particles model where the self-propulsion direction of the particles aligns with that of their neighbors. We analyze the onset of collective motion (flocking) in a low-density regime (10% surface area) and show that it is mainly controlled by the strength of velocity-alignment interactions: the competition between self-propulsion and crowding effects plays a minor role in the emergence of flocking. However, above the flocking threshold, the system presents a richer pattern formation scenario than analogous models without alignment interactions (active Brownian particles) or excluded volume effects (Vicsek-like models). Depending on the parameter regime, the structure of the system is characterized by either a broad distribution of finite-sized polar clusters or the presence of an amorphous, highly fluctuating, large-scale traveling structure which can take a lane-like or band-like form (and usually a hybrid structure halfway between the two). We establish a phase diagram that summarizes the collective behavior of polar active Brownian particles and propose a generic mechanism to describe the complexity of the large-scale structures observed in systems of repulsive self-propelled particles.
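
    A minimal sketch of the class of model studied here — self-propelled particles whose headings relax toward the local mean orientation under rotational noise — is given below. For brevity it omits the excluded-volume forces and uses illustrative parameters, so it is closer to a continuous-time Vicsek-style model than to the paper's full ABP system.

        import numpy as np

        rng = np.random.default_rng(3)
        N, L = 400, 50.0                       # particles, periodic box size
        v0, Dr, J, R = 0.5, 0.05, 2.0, 2.0     # speed, rot. noise, coupling, radius
        dt, steps = 0.05, 3000

        pos = rng.uniform(0.0, L, (N, 2))
        theta = rng.uniform(-np.pi, np.pi, N)

        for _ in range(steps):
            d = pos[:, None, :] - pos[None, :, :]
            d -= L * np.round(d / L)                       # minimum-image convention
            neigh = (d ** 2).sum(-1) < R ** 2              # includes the particle itself
            s = (neigh * np.sin(theta)[None, :]).sum(1)
            c = (neigh * np.cos(theta)[None, :]).sum(1)
            torque = J * np.sin(np.arctan2(s, c) - theta)  # relax to local mean heading
            theta += torque * dt + np.sqrt(2.0 * Dr * dt) * rng.normal(size=N)
            pos = (pos + v0 * dt * np.column_stack((np.cos(theta), np.sin(theta)))) % L

        # polar order parameter: ~0 disordered, -> 1 deep in the flocking phase
        print(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))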

  18. Development of the Artistic Supervision Model Scale (ASMS)

    Science.gov (United States)

    Kapusuzoglu, Saduman; Dilekci, Umit

    2017-01-01

    The purpose of the study is to develop the Artistic Supervision Model Scale in accordance with the perceptions of inspectors and of elementary and secondary school teachers regarding artistic supervision. The lack of a measuring instrument related to the model of artistic supervision in the literature reveals the necessity of such a study. 290…

  19. Testing of materials and scale models for impact limiters

    International Nuclear Information System (INIS)

    Maji, A.K.; Satpathi, D.; Schryer, H.L.

    1991-01-01

    Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and the design and testing of scale models made out of these "impact limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation, and compared with static test results. A scale model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program. Additional information will be available and reported at the time of presentation of the paper.

  20. A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.

    Science.gov (United States)

    Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing

    2015-01-01

    Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing.
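
    To make the estimation step concrete, the sketch below runs the standard HMM forward (filtering) recursion over a toy two-state traffic chain. The transition and emission matrices, the speed discretization, and the missing-data remark are hypothetical illustrations, not the paper's calibrated model.

        import numpy as np

        def forward(pi, A, B, obs):
            """HMM forward (filtering) recursion: probability of each hidden
            traffic state given the observations seen so far."""
            alpha = pi * B[:, obs[0]]
            alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                alpha /= alpha.sum()          # normalize to avoid underflow
            return alpha

        # Hidden states: 0 = free flow, 1 = congested. Observations: discretized
        # probe speeds (0 = fast, 1 = medium, 2 = slow). A sparse interval with no
        # probes could simply skip the emission update. All values hypothetical.
        pi = np.array([0.7, 0.3])
        A = np.array([[0.9, 0.1],
                      [0.2, 0.8]])
        B = np.array([[0.70, 0.25, 0.05],
                      [0.10, 0.30, 0.60]])
        print(forward(pi, A, B, obs=[0, 1, 2, 2]))   # P(state | speed sequence)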

  1. A MEDL Collection Showcase: A Collection of Hands-on Physical Analog Models and Demonstrations From the Department of Geosciences MEDL at Virginia Tech

    Science.gov (United States)

    Glesener, G. B.

    2017-12-01

    The Geosciences Modeling and Educational Demonstrations Laboratory (MEDL) will present a suite of hands-on physical analog models from our curriculum materials collection used to teach about a wide range of geoscience processes. Many of the models will be equipped with Vernier data collection sensors, which visitors will be encouraged to explore on-site. Our goal is to spark interest and discussion around the affordances of these kinds of curriculum materials. Important topics to discuss will include: (1) How can having a collection of hands-on physical analog models be used to effectively produce successful broader impacts activities for research proposals? (2) What kinds of learning outcomes have instructors observed when teaching about temporally and spatially challenging concepts using physical analog models? (3) What does it take for an institution to develop their own MEDL collection? and (4) How can we develop a community of individuals who provide on-the-ground support for instructors who use physical analog models in their classroom.

  2. A Coupled fvGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model

    Science.gov (United States)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), (3) a discussion of the Goddard WRF version (its developments and applications), and (4) the characteristics of the four-dimensional cloud data

  3. Microsatellite diversity and broad scale geographic structure in a model legume: building a set of nested core collection for studying naturally occurring variation in Medicago truncatula

    DEFF Research Database (Denmark)

    Ronfort, Joelle; Bataillon, Thomas; Santoni, Sylvain

    2006-01-01

    We investigate the patterns of genetic diversity and population structure in a collection of 346 inbred lines representing the breadth of naturally occurring diversity in the legume plant model Medicago truncatula, using 13 microsatellite markers, and build a set of nested core collections aimed at representing the genetic diversity of this species with a minimum of repetitiveness. The set of inbred lines and the core collections are publicly available and will help coordinate efforts for the study of naturally occurring variation in the growing Medicago truncatula community.

  4. Y-Scaling in a simple quark model

    International Nuclear Information System (INIS)

    Kumano, S.; Moniz, E.J.

    1988-01-01

    A simple quark model is used to define a nuclear pair model, that is, two composite hadrons interacting only through quark interchange and bound in an overall potential. An 'equivalent' hadron model is developed, displaying an effective hadron-hadron interaction which is strongly repulsive. We compare the effective hadron model results with the exact quark model observables in the kinematic region of large momentum transfer and small energy transfer. The nucleon response function in this y-scaling region is, within the traditional framework, sensitive to the nucleon momentum distribution at large momentum. We find a surprisingly small effect of hadron substructure. Furthermore, we find in our model that a simple parametrization of modified hadron size in the bound state, motivated by the bound quark momentum distribution, is not a useful way to correlate different observables

  5. Improving Snow Modeling by Assimilating Observational Data Collected by Citizen Scientists

    Science.gov (United States)

    Crumley, R. L.; Hill, D. F.; Arendt, A. A.; Wikstrom Jones, K.; Wolken, G. J.; Setiawan, L.

    2017-12-01

    Modeling seasonal snow pack in alpine environments involves a multiplicity of challenges caused by a lack of spatially extensive and temporally continuous observational datasets. This is partially due to the difficulty of collecting measurements in harsh, remote environments where extreme gradients in topography exist, accompanied by large model domains and inclement weather. Engaging snow enthusiasts, snow professionals, and community members to participate in the process of data collection may address some of these challenges. In this study, we use SnowModel to estimate seasonal snow water equivalent (SWE) in the Thompson Pass region of Alaska while incorporating snow depth measurements collected by citizen scientists. We develop a modeling approach to assimilate hundreds of snow depth measurements from participants in the Community Snow Observations (CSO) project (www.communitysnowobs.org). The CSO project includes a mobile application with which participants record and submit geo-located snow depth measurements while working and recreating in the study area. These snow depth measurements are randomly located within the model grid at irregular time intervals over a span of four months in the 2017 water year. This snow depth observation dataset is converted into a SWE dataset by employing an empirically based bulk density and SWE estimation method. We then assimilate these data using SnowAssim, a sub-model within SnowModel, to constrain the SWE output by the observed data. Multiple model runs are designed to represent an array of output scenarios during the assimilation process. An effort to present model output uncertainties is included, as well as quantification of the pre- and post-assimilation divergence in modeled SWE. Early results reveal that pre-assimilation SWE estimates are consistently greater than the post-assimilation estimates, and the magnitude of divergence increases throughout the snow pack evolution period. This research has implications beyond the
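
    The depth-to-SWE conversion step above lends itself to a compact illustration. The following is a minimal sketch, in Python, of an empirically based bulk-density conversion of the kind described; the density parameters and the seasonal densification form are illustrative assumptions, not the values or method used by the CSO project.

      import numpy as np

      def swe_from_depth(depth_m, day_of_season, rho_0=250.0, rho_max=450.0, k=0.02):
          """Estimate SWE [mm w.e.] from snow depth [m] using a bulk density
          [kg/m^3] that densifies through the season toward rho_max.
          All parameter values here are illustrative placeholders."""
          rho = rho_max - (rho_max - rho_0) * np.exp(-k * day_of_season)
          # depth [m] x density [kg/m^3] = kg/m^2, numerically equal to mm w.e.
          return depth_m * rho

      depths = np.array([0.85, 1.20, 2.45])  # citizen-science depth samples [m]
      print(swe_from_depth(depths, day_of_season=120))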

  6. Appropriate spatial scales to achieve model output uncertainty goals

    NARCIS (Netherlands)

    Booij, Martijn J.; Melching, Charles S.; Chen, Xiaohong; Chen, Yongqin; Xia, Jun; Zhang, Hailun

    2008-01-01

    Appropriate spatial scales of hydrological variables were determined using an existing methodology based on a balance in uncertainties from model inputs and parameters extended with a criterion based on a maximum model output uncertainty. The original methodology uses different relationships between

  7. Railway bogie vibration analysis by mathematical simulation model and a scaled four-wheel railway bogie set

    Science.gov (United States)

    Visayataksin, Noppharat; Sooklamai, Manon

    2018-01-01

    The bogie is the part that connects and transfers all the load from the vehicle body onto the railway track; interestingly, the interaction between wheels and rails is the critical point for derailment of rail vehicles. However, observing or experimenting with real bogies on rail vehicles is impossible due to operational rules and safety concerns. Therefore, this research aimed to develop a vibration analysis set for a four-wheel railway bogie by constructing a four-wheel bogie at a scale of 1:4.5. The bogie structures, including wheels and axles, were made from an aluminium alloy, equipped with springs and dampers. The bogie was driven by an electric motor using 4 round wheels instead of 2 straight rails, with linear velocity between 0 and 11.22 m/s. The data collected from the vibration analysis set were compared to the mathematical simulation model to investigate the vibration behavior of the bogie, especially the hunting motion. The results showed that the vibration behavior of the scaled four-wheel railway bogie set agreed well with the mathematical simulation model in terms of displacement and hunting frequency. The critical speed of the wheelset, found by executing the mathematical simulation model, was 13 m/s.
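
    The hunting motion examined above has a classical kinematic description that makes for a quick cross-check. As a rough illustration (not the authors' simulation model), Klingel's formula gives the hunting wavelength of a coned wheelset, from which a hunting frequency follows at a given forward speed; all parameter values below are hypothetical.

      import math

      def klingel_wavelength(r0, half_gauge, conicity):
          """Kinematic hunting wavelength of a coned wheelset (Klingel's formula)."""
          return 2.0 * math.pi * math.sqrt(r0 * half_gauge / conicity)

      # Hypothetical values for a 1:4.5 scale wheelset:
      r0 = 0.46 / 4.5          # wheel rolling radius [m]
      half_gauge = 0.75 / 4.5  # half of the track gauge [m]
      conicity = 0.05          # effective wheel conicity [-]

      L = klingel_wavelength(r0, half_gauge, conicity)
      for v in (5.0, 11.22):   # forward speeds within the rig's range [m/s]
          print(f"v = {v:5.2f} m/s -> hunting frequency = {v / L:.2f} Hz")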

  8. Report on US-DOE/OHER Task Group on modelling and scaling

    International Nuclear Information System (INIS)

    Mewhinney, J.A.; Griffith, W.C.

    1989-01-01

    In early 1986, the DOE/OHER Task Group on Modeling and Scaling was formed. Membership of the Task Group is drawn from the staff of several laboratories funded by the United States Department of Energy, Office of Health and Environmental Research. The primary goal of the Task Group is to promote cooperation among the laboratories in analysing mammalian radiobiology studies, with emphasis on studies that used beagle dogs in lifespan experiments. To assist in defining the status of modelling and scaling of animal data, the Task Group served as the programme committee for the 26th Hanford Life Sciences Symposium, entitled Modeling for Scaling to Man, held in October 1987. This symposium had over 60 oral presentations describing current research in dosimetric, pharmacokinetic, and dose-response modelling and scaling of results from animal studies to humans. A summary of the highlights of this symposium is presented. The Task Group is also in the process of developing recommendations for analyses of results obtained from dog lifespan studies. The goal is to provide as many comparisons as possible between these studies and to scale the results to humans to strengthen the limited epidemiological data on exposures of humans to radiation. Several methods are discussed. (author)

  9. 9 m side drop test of scale model

    International Nuclear Information System (INIS)

    Ku, Jeong-Hoe; Chung, Seong-Hwan; Lee, Ju-Chan; Seo, Ki-Seog

    1993-01-01

    A type B(U) shipping cask has been developed at KAERI for transporting PWR spent fuel. Since the cask is to transport spent PWR fuel, it must be designed to meet all of the structural requirements specified in domestic packaging regulations and IAEA Safety Series No. 6. This paper describes the side drop testing of a one-third scale model cask. The crush and deformation of the shock absorbing covers directly control the decelerations experienced by the cask during the 9 m side drop impact. The shock absorbing covers greatly mitigated the inertia forces on the cask body due to the side drop impact. By comparing the side drop test with finite element analysis, it was verified that the 1/3 scale model cask maintained its structural integrity under the side drop impact. The test and analysis results can be used as basic data to evaluate the structural integrity of the real cask. (J.P.N.)

  10. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: If one shoots an arrow, and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the grid size at which the model is applied has decreased over the past 17 years: from 0.5-2 degrees when the model was first developed to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1

  11. Virtual model of an automated system for the storage of collected waste

    Directory of Open Access Journals (Sweden)

    Enciu George

    2017-01-01

    Full Text Available One of the problems identified in integrated waste collection systems is storage space. The design process of an automated system for the storage of collected waste includes finding solutions for the optimal exploitation of the limited storage space, given that the equipment for the loading, identification, transport and transfer of the waste covers most of the available space inside the integrated collection system. In the present paper a three-dimensional model of an automated storage system designed by the authors for a business partner is presented. The storage system can be used for the following types of waste: plastic and glass containers, aluminium cans, paper, cardboard and WEEE (waste electrical and electronic equipment). Special attention has been given to the transfer subsystem, specific to the storage system, which should be able to transfer different types and shapes of waste. The described virtual model of the automated system for the storage of collected waste will be part of the virtual model of the entire integrated waste collection system as requested by the beneficiary.

  12. URBAN MORPHOLOGY FOR HOUSTON TO DRIVE MODELS-3/CMAQ AT NEIGHBORHOOD SCALES

    Science.gov (United States)

    Air quality simulation models applied at various horizontal scales require different degrees of treatment in the specification of the underlying surfaces. As we model neighborhood scales (~1 km horizontal grid spacing), the representation of urban morphological structures (e....

  13. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.

  14. Modeling the effects of LID practices on streams health at watershed scale

    Science.gov (United States)

    Shannak, S.; Jaber, F. H.

    2013-12-01

    Increasing impervious cover due to urbanization will lead to an increase in runoff volumes and, eventually, increased flooding. Stream channels adjust by widening and eroding the stream bank, which negatively impacts downstream property (Chin and Gregory, 2001). Also, urban runoff drains into sediment bank areas known as riparian zones and constricts stream channels (Walsh, 2009). Both physical and chemical factors associated with urbanization, such as high peak flows and low water quality, further stress aquatic life and contribute to the overall biological condition of urban streams (Maxted et al., 1995). While LID practices have been mentioned and studied in the literature for stormwater management, they have not been studied with respect to reducing potential impacts on stream health. To evaluate the performance and effectiveness of LID practices at a watershed scale, a sustainable detention pond, bioretention, and permeable pavement will be modeled at the watershed scale. These measures affect storm peak flows and base flow patterns over long periods, and there is a need to characterize their effect on stream bank and bed erosion and on aquatic life. These measures will create a linkage between urban watershed development and stream conditions, specifically biological health. The first phase of this study is to design and construct LID practices at the Texas A&M AgriLife Research and Extension Center-Dallas, TX to collect field data about the performance of these practices on a smaller scale. The second phase consists of simulating the performance of LID practices on a watershed scale. This simulation presents a long-term model (23 years) using SWAT to evaluate the potential impacts of these practices on potential stream bank and bed erosion and on aquatic life in the Blunn Watershed located in Austin, TX. Sub-daily time step model simulations will be developed to simulate the effectiveness of the three LID practices with respect to reducing

  15. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)
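
    The benefit of the implicit treatment of the fast modes can be seen in one line of arithmetic. The toy example below (not the LSG discretization itself) integrates a single oscillatory fast-mode analogue du/dt = iωu with a time step far beyond the explicit stability limit: explicit Euler diverges, while implicit Euler remains stable and damps the fast mode, which is exactly the filtering property exploited above.

      # Fast-mode analogue: du/dt = i*omega*u; |u| should stay bounded.
      omega, dt, n = 50.0, 0.1, 100   # dt is far beyond the explicit limit
      u_exp = u_imp = 1.0 + 0.0j
      for _ in range(n):
          u_exp = u_exp + dt * 1j * omega * u_exp   # explicit Euler: diverges
          u_imp = u_imp / (1.0 - dt * 1j * omega)   # implicit Euler: damps mode
      print(f"explicit |u| = {abs(u_exp):.3e}, implicit |u| = {abs(u_imp):.3e}")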

  16. Pesticide fate on catchment scale: conceptual modelling of stream CSIA data

    Science.gov (United States)

    Lutz, Stefanie R.; van der Velde, Ype; Elsayed, Omniea F.; Imfeld, Gwenaël; Lefrancq, Marie; Payraudeau, Sylvain; van Breukelen, Boris M.

    2017-10-01

    Compound-specific stable isotope analysis (CSIA) has proven beneficial in the characterization of contaminant degradation in groundwater, but it has never been used to assess pesticide transformation on the catchment scale. This study presents concentration and carbon CSIA data of the herbicides S-metolachlor and acetochlor from three locations (plot, drain, and catchment outlets) in a 47 ha agricultural catchment (Bas-Rhin, France). Herbicide concentrations at the catchment outlet were highest (62 µg L-1) in response to an intense rainfall event following herbicide application. Increasing δ13C values of S-metolachlor and acetochlor by more than 2 ‰ during the study period indicated herbicide degradation. To assist the interpretation of these data, discharge, concentrations, and δ13C values of S-metolachlor were modelled with a conceptual mathematical model using the transport formulation by travel-time distributions. Testing of different model setups supported the assumption that degradation half-lives (DT50) increase with increasing soil depth, which can be straightforwardly implemented in conceptual models using travel-time distributions. Moreover, model calibration yielded an estimate of a field-integrated isotopic enrichment factor, as opposed to laboratory-based assessments of enrichment factors in closed systems. Third, the Rayleigh equation commonly applied in groundwater studies was tested with our model for its potential to quantify degradation on the catchment scale. It provided conservative estimates of the extent of degradation in stream samples. However, because these estimates largely exceeded the simulated degradation within the entire catchment, they were not representative of overall degradation on the catchment scale. The conceptual modelling approach thus enabled us to upscale sample-based CSIA information on degradation to the catchment scale. Overall, this study demonstrates the benefit of combining monitoring and conceptual modelling of concentration
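
    The Rayleigh equation mentioned above has a compact closed form, so the extent of degradation implied by a measured isotope shift is a one-line computation. A minimal sketch, assuming a laboratory-derived enrichment factor ε; the numbers are illustrative, not the study's fitted values.

      def rayleigh_extent(delta_0, delta_t, eps_permil):
          """Extent of degradation B = 1 - f from the Rayleigh equation, where f
          is the remaining (non-degraded) fraction:
              (1000 + delta_t) / (1000 + delta_0) = f ** (eps_permil / 1000)
          delta values in permil; eps_permil < 0 for normal isotope effects."""
          ratio = (1000.0 + delta_t) / (1000.0 + delta_0)
          f = ratio ** (1000.0 / eps_permil)
          return 1.0 - f

      # Illustrative: a 2 permil enrichment with eps = -1.5 permil
      print(rayleigh_extent(delta_0=-32.0, delta_t=-30.0, eps_permil=-1.5))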

  17. Relevance of multiple spatial scales in habitat models: A case study with amphibians and grasshoppers

    Science.gov (United States)

    Altmoos, Michael; Henle, Klaus

    2010-11-01

    Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study of three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only, whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference for single habitat variables changed across nested scales. Most environmental variables were only significant for a habitat model on one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant, which cannot be recognized on lower scales. Similar preferences across scales occurred in only 13 out of 79 cases, and in 3 out of 79 cases the preference and avoidance for the same variable were even reversed among scales. Second, we developed habitat models by using a logistic regression on every scale and for all combinations of scales and analysed how the quality of habitat models changed with the scales considered. To achieve a sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog (Hyla arborea) were at least three scales required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scaled habitat models, we suggest that 2 or 3 scales should be a suitable trade-off when attempting to define a suitable microscale.
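
    The scale-combination comparison described above can be sketched in a few lines: fit a logistic habitat model for every combination of scales and rank the fits by an information criterion. The dataset, covariates and effect sizes below are synthetic stand-ins, not the study's amphibian or grasshopper data.

      import itertools
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 300
      # Hypothetical habitat covariates measured at three nested scales
      X = {"micro": rng.normal(size=n), "meso": rng.normal(size=n),
           "macro": rng.normal(size=n)}
      logit_p = 1.2 * X["micro"] + 0.6 * X["macro"]   # true presence signal
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

      for k in (1, 2, 3):
          for combo in itertools.combinations(X, k):
              design = sm.add_constant(np.column_stack([X[s] for s in combo]))
              fit = sm.Logit(y, design).fit(disp=0)
              print(f"{'+'.join(combo):18s} AIC = {fit.aic:7.1f}")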

  18. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin; Sun, Shuyu

    2016-01-01

    scale, and the effect of fractures on each coarse scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system are resolved

  19. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...

  20. A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows

    International Nuclear Information System (INIS)

    Singh, Satbir; You, Donghyun

    2013-01-01

    Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► Overcomes some of the difficulties associated with eddy-viscosity closures. ► Does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming a scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations
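
    The explicitly computed part of such a mixed closure, the scale-similarity (modified Leonard) term, is straightforward once a test filter is chosen. Below is a minimal one-dimensional sketch with a periodic three-point box filter; it illustrates only this term, not the dynamic global-coefficient procedure of the paper.

      import numpy as np

      def box_filter(u):
          """Periodic three-point top-hat test filter."""
          return (np.roll(u, -1) + u + np.roll(u, 1)) / 3.0

      def leonard_term(u, v):
          """Scale-similarity term L = bar(u v) - bar(u) bar(v), with u and v
          the resolved velocity fields and bar(.) the test filter."""
          return box_filter(u * v) - box_filter(u) * box_filter(v)

      x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
      u = np.sin(x) + 0.3 * np.sin(7.0 * x)   # synthetic resolved field
      print(leonard_term(u, u)[:4])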

  1. Multi-scale, multi-model assessment of projected land allocation

    Science.gov (United States)

    Vernon, C. R.; Huang, M.; Chen, M.; Calvin, K. V.; Le Page, Y.; Kraucunas, I.

    2017-12-01

    Effects of land use and land cover change (LULCC) on climate are generally classified into two scale-dependent processes: biophysical and biogeochemical. An extensive amount of research has been conducted on the impact of each process under alternative climate change futures. However, these studies are generally focused on the impacts of a single process and fail to bridge the gap between sector-driven scale dependencies and any associated dynamics. Studies have been conducted to better understand the relationship between these processes, but their respective scales have not adequately captured the overall interdependencies between land surface changes and changes in other human-earth systems (e.g., energy, water, economic, etc.). There has also been considerable uncertainty surrounding land use and land cover downscaling approaches due to scale dependencies. Demeter, a land use and land cover downscaling and change detection model, was created to address this science gap. Demeter is an open-source model written in Python that downscales zonal land allocation projections to the gridded resolution of a user-selected spatial base layer (e.g., MODIS, NLCD, ESA CCI, etc.). Demeter was designed to be fully extensible to allow for module inheritance and replacement for custom research needs, such as a flexible IO design to facilitate the coupling of Earth system models (e.g., the Accelerated Climate Modeling for Energy (ACME) and the Community Earth System Model (CESM)) to integrated assessment models (e.g., the Global Change Assessment Model (GCAM)). In this study, we first assessed the sensitivity of downscaled LULCC scenarios at multiple resolutions from Demeter to its parameters by comparing them to historical LULC change data. "Optimal" values of key parameters for each region were identified and used to downscale GCAM-based future scenarios consistent with those in the Land Use Model Intercomparison Project (LUMIP). Demeter-downscaled land use scenarios were then compared to the
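
    The core downscaling step, distributing a zonal land-allocation change over grid cells, can be caricatured as proportional allocation by suitability weights. This toy sketch is not Demeter's actual algorithm; the grid, weights and projected change are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      suitability = rng.random((4, 4))   # hypothetical per-cell weights
      cropland = np.zeros((4, 4))        # current cropland fraction per cell
      zonal_change = 3.0                 # projected zonal gain [cell-equivalents]

      # Allocate the zonal change to cells in proportion to suitability.
      cropland += zonal_change * suitability / suitability.sum()
      print(cropland.round(2), cropland.sum())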

  2. The Drell-Yan process in a non-scaling parton model

    International Nuclear Information System (INIS)

    Polkinghorne, J.C.

    1976-01-01

    The Drell-Yan process of heavy lepton pair production in hadronic collisions is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the Drell-Yan structure function exhibit a simple scale breaking behaviour closely related to the behaviour of moments of the deep inelastic structure function of the model. The extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. (Auth.)

  3. N=2→0 super no-scale models and moduli quantum stability

    Directory of Open Access Journals (Sweden)

    Costas Kounnas

    2017-06-01

    Full Text Available We consider a class of heterotic N=2→0 super no-scale Z2-orbifold models. An appropriate stringy Scherk–Schwarz supersymmetry breaking induces tree level masses to all massless bosons of the twisted hypermultiplets and therefore stabilizes all twisted moduli. At high supersymmetry breaking scale, the tachyons that occur in the N=4→0 parent theories are projected out, and no Hagedorn-like instability takes place in the N=2→0 models (for small enough marginal deformations). At low supersymmetry breaking scale, the stability of the untwisted moduli is studied at the quantum level by taking into account both untwisted and twisted contributions to the 1-loop effective potential. The latter depends on the specific branch of the gauge theory along which the background can be deformed. We derive its expression in terms of all classical marginal deformations in the pure Coulomb phase, and in some mixed Coulomb/Higgs phases. In this class of models, the super no-scale condition requires having at the massless level equal numbers of untwisted bosonic and twisted fermionic degrees of freedom. Finally, we show that N=1→0 super no-scale models are obtained by implementing a second Z2 orbifold twist on N=2→0 super no-scale Z2-orbifold models.

  4. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n^2 log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.

  5. An Efficient Two-Scale Hybrid Embedded Fracture Model for Shale Gas Simulation

    KAUST Repository

    Amir, Sahar Z.

    2016-12-27

    The existence and state of natural and hydraulic fractures differ on a reservoir-by-reservoir or even on a well-by-well basis, leading to the necessity of exploring flow regime variations with respect to the diverse fracture-network shapes forged. Conventional Dual-Porosity Dual-Permeability (DPDP) schemes are not adequate to model such complex fracture-network systems. To overcome this difficulty, in this paper, an iterative Hybrid Embedded multiscale (two-scale) Fracture model (HEF) is applied to a derived fit-for-purpose shale gas model. The HEF model involves splitting the fracture computations into two scales: 1) fine-scale solves for the flux exchange parameter within each grid cell; 2) coarse-scale solves for the pressure applied to the domain grid cells using the flux exchange parameter computed at each grid cell from the fine-scale. After that, the D-dimensional matrix pressure and the (D-1)-dimensional fracture pressure are solved as a system to apply the matrix-fracture coupling. The HEF model combines the DPDP overlapping continua concept, the DFN lower dimensional fractures concept, the HFN hierarchical fracture concept, and the CCFD model simplicity. As for the fit-for-purpose shale gas model, various fit-for-purpose shale gas models can be derived using any set of selected properties plugged into one of the most popularly used proposed literature models as shown in the appendix. Also, this paper shows that the extremely low permeability of shale causes flow behavior to be dominated by the structure and magnitude of high-permeability fractures.

  6. An Efficient Two-Scale Hybrid Embedded Fracture Model for Shale Gas Simulation

    KAUST Repository

    Amir, Sahar Z.; Sun, Shuyu

    2016-01-01

    The existence and state of natural and hydraulic fractures differ on a reservoir-by-reservoir or even on a well-by-well basis, leading to the necessity of exploring flow regime variations with respect to the diverse fracture-network shapes forged. Conventional Dual-Porosity Dual-Permeability (DPDP) schemes are not adequate to model such complex fracture-network systems. To overcome this difficulty, in this paper, an iterative Hybrid Embedded multiscale (two-scale) Fracture model (HEF) is applied to a derived fit-for-purpose shale gas model. The HEF model involves splitting the fracture computations into two scales: 1) fine-scale solves for the flux exchange parameter within each grid cell; 2) coarse-scale solves for the pressure applied to the domain grid cells using the flux exchange parameter computed at each grid cell from the fine-scale. After that, the D-dimensional matrix pressure and the (D-1)-dimensional fracture pressure are solved as a system to apply the matrix-fracture coupling. The HEF model combines the DPDP overlapping continua concept, the DFN lower dimensional fractures concept, the HFN hierarchical fracture concept, and the CCFD model simplicity. As for the fit-for-purpose shale gas model, various fit-for-purpose shale gas models can be derived using any set of selected properties plugged into one of the most popularly used proposed literature models as shown in the appendix. Also, this paper shows that the extremely low permeability of shale causes flow behavior to be dominated by the structure and magnitude of high-permeability fractures.

  7. A two-scale roughness model for the gloss of coated paper

    Science.gov (United States)

    Elton, N. J.

    2008-08-01

    A model for gloss is developed for surfaces with two-scale random roughness where one scale lies in the wavelength region (microroughness) and the other in the geometrical optics limit (macroroughness). A number of important industrial materials such as coated and printed paper and some paints exhibit such two-scale rough surfaces. Scalar Kirchhoff theory is used to describe scattering in the wavelength region and a facet model used for roughness features much greater than the wavelength. Simple analytical expressions are presented for the gloss of surfaces with Gaussian, modified and intermediate Lorentzian distributions of surface slopes, valid for gloss at high angle of incidence. In the model, gloss depends only on refractive index, rms microroughness amplitude and the FWHM of the surface slope distribution, all of which may be obtained experimentally. Model predictions are compared with experimental results for a range of coated papers and gloss standards, and found to be in fair agreement within model limitations.
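
    The microroughness half of the model is the standard Kirchhoff (Davies) attenuation of the specular component, which combines naturally with a Fresnel factor. A minimal sketch under the assumption of Gaussian microroughness; the refractive index and roughness values are illustrative, and the macroroughness slope-distribution factor is omitted.

      import numpy as np

      def specular_attenuation(sigma_nm, wavelength_nm, theta_deg):
          """Kirchhoff factor: fraction of the specular reflectance surviving
          Gaussian microroughness of rms amplitude sigma at incidence theta."""
          theta = np.radians(theta_deg)
          return np.exp(-(4.0 * np.pi * sigma_nm * np.cos(theta) / wavelength_nm) ** 2)

      def fresnel_unpolarized(n, theta_deg):
          """Unpolarized Fresnel reflectance of a smooth dielectric of index n."""
          ti = np.radians(theta_deg)
          tt = np.arcsin(np.sin(ti) / n)
          rs = (np.sin(ti - tt) / np.sin(ti + tt)) ** 2
          rp = (np.tan(ti - tt) / np.tan(ti + tt)) ** 2
          return 0.5 * (rs + rp)

      # Illustrative: n = 1.5 coating, 60 nm rms microroughness, 75 deg incidence
      R = fresnel_unpolarized(1.5, 75.0) * specular_attenuation(60.0, 550.0, 75.0)
      print(f"specular reflectance ~ {R:.3f}")

    A full gloss value would further weight this specular term by the facet slope distribution (Gaussian, modified or intermediate Lorentzian) evaluated near the specular direction, which is where the FWHM parameter of the model enters.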

  8. Mathematical models in marketing a collection of abstracts

    CERN Document Server

    Funke, Ursula H

    1976-01-01

    Mathematical models can be classified in a number of ways, e.g., static and dynamic; deterministic and stochastic; linear and nonlinear; individual and aggregate; descriptive, predictive, and normative; according to the mathematical technique applied or according to the problem area in which they are used. In marketing, the level of sophistication of the mathematical models varies considerably, so that a number of models will be meaningful to a marketing specialist without an extensive mathematical background. To make it easier for the nontechnical user we have chosen to classify the models included in this collection according to the major marketing problem areas in which they are applied. Since the emphasis lies on mathematical models, we shall not as a rule present statistical models, flow chart models, computer models, or the empirical testing aspects of these theories. We have also excluded competitive bidding, inventory and transportation models since these areas do not form the core of the market...

  9. A model for allometric scaling of mammalian metabolism with ambient heat loss

    KAUST Repository

    Kwak, Ho Sang

    2016-02-02

    Background Allometric scaling, which describes the dependence of a biological trait or process on body size, is a long-standing subject in biological science. However, no previous study has considered heat loss to the ambient surroundings, together with an insulation layer representing mammalian skin and fur, in deriving the scaling law of metabolism. Methods A simple heat transfer model is proposed to analyze the allometry of mammalian metabolism. The present model extends existing studies by incorporating various external heat transfer parameters and additional insulation layers. The model equations were solved numerically and by an analytic heat balance approach. Results A general observation is that the present heat transfer model predicted the 2/3 surface scaling law, which is primarily attributed to the dependence of the surface area on the body mass. External heat transfer effects introduced deviations in the scaling law, mainly due to natural convection heat transfer, which becomes more prominent at smaller mass. These deviations resulted in a slight modification of the scaling exponent to a value smaller than 2/3. Conclusion The finding that additional radiative heat loss and the consideration of an outer insulation fur layer attenuate these deviations and render the scaling law closer to 2/3 provides in silico evidence for a functional impact of the heat transfer mode on the allometric scaling law in mammalian metabolism.
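
    The 2/3 surface law, and the way natural convection nudges the exponent below 2/3 at small mass, can be reproduced with a toy heat balance; the spherical geometry and the transfer coefficients below are crude assumptions, not the paper's model.

      import numpy as np

      RHO = 1000.0  # body density [kg/m^3], sphere assumption

      def radius(mass_kg):
          return (3.0 * mass_kg / (4.0 * np.pi * RHO)) ** (1.0 / 3.0)

      def metabolic_rate(mass_kg, dT=10.0, h_rad=5.0):
          """Steady state: metabolic rate = heat loss = (h_conv + h_rad) * A * dT.
          The natural-convection coefficient grows as the body shrinks, which
          drags the fitted exponent below the pure surface value of 2/3."""
          r = radius(mass_kg)
          area = 4.0 * np.pi * r ** 2
          h_conv = 2.0 + 0.5 / np.sqrt(r)   # illustrative size dependence
          return (h_conv + h_rad) * area * dT

      masses = np.logspace(-2, 3, 20)       # 10 g to 1000 kg
      slope = np.polyfit(np.log(masses), np.log(metabolic_rate(masses)), 1)[0]
      print(f"fitted scaling exponent ~ {slope:.3f} (pure surface law: 2/3)")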

  10. On the relation between the interacting boson model of Arima and Iachello and the collective model of Bohr and Mottelson

    International Nuclear Information System (INIS)

    Assenbaum, H.J.; Weiguny, A.

    1982-01-01

    The generator coordinate method is used to relate the interacting boson model of Arima and Iachello and the collective model of Bohr and Mottelson through an isometric transformation. It associates complex parameters to the original boson operators whereas the ultimate collective variables are real. The absolute squares of the collective wave functions can be given a direct probability interpretation. The lowest order Bohr-Mottelson hamiltonian is obtained in the harmonic approximation to the interacting boson model; anharmonic coupling terms render the collective potential velocity-dependent. (orig.)

  11. Authentic scientific data collection in support of an integrative model-based class: A framework for student engagement in the classroom

    Science.gov (United States)

    Sorensen, A. E.; Dauer, J. M.; Corral, L.; Fontaine, J. J.

    2017-12-01

    A core component of public scientific literacy, and thereby informed decision-making, is the ability of individuals to reason about complex systems. In response to students having difficulty learning about complex systems, educational research suggests that conceptual representations, or mental models, may help orient student thinking. Mental models provide a framework to support students in organizing and developing ideas. The PMC-2E model is a productive tool in teaching ideas of modeling complex systems in the classroom because the conceptual representation framework allows for self-directed learning where students can externalize systems thinking. Beyond mental models, recent work emphasizes the importance of facilitating the integration of authentic science into the formal classroom. To align these ideas, a university class was developed around the theme of carnivore ecology, founded on the PMC-2E framework and authentic scientific data collection. Students were asked to develop a protocol, collect, and analyze data around a scientific question in partnership with a scientist, and then use the data to inform their own learning about the system through the mental model process. We identified two beneficial outcomes: (1) scientific data are collected to address real scientific questions at a larger scale, and (2) positive outcomes emerge for student learning and views of science. After participating in the class, students reported enjoying the class structure, increased support for public understanding of science, and shifts in nature-of-science views and interest in pursuing science on post-assessment metrics. Further work is ongoing to investigate how engaging in authentic scientific practices that inform student mental models might promote students' systems-thinking skills, along with the implications for student views of the nature of science and the development of student epistemic practices.

  12. Lepton Dipole Moments in Supersymmetric Low-Scale Seesaw Models

    CERN Document Server

    Ilakovac, Amon; Popov, Luka

    2014-01-01

    We study the anomalous magnetic and electric dipole moments of charged leptons in supersymmetric low-scale seesaw models with right-handed neutrino superfields. We consider a minimally extended framework of minimal supergravity, by assuming that CP violation originates from complex soft SUSY-breaking bilinear and trilinear couplings associated with the right-handed sneutrino sector. We present numerical estimates of the muon anomalous magnetic moment and the electron electric dipole moment (EDM), as functions of key model parameters, such as the Majorana mass scale m_N and tan β. In particular, we find that the contributions of the singlet heavy neutrinos and sneutrinos to the electron EDM are naturally small in this model, of order 10^{-27} - 10^{-28} e cm, and can be probed in the present and future experiments.

  13. Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.

    Science.gov (United States)

    Dao, Tien Tuan

    2017-06-01

    Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of complex mechanisms underlying the biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of this present work was to develop a new fully integrated computational workflow for simulating bone metabolic processes at multi-scale levels. Organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. Tissue-level model uses finite element method to estimate the tissue deformation and mechanical loading under body loading conditions. Finally, cell-level model includes bone remodeling mechanism through an agent-based simulation under tissue loading. A case study on the bone remodeling process located on the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using the literature-based data at each anatomical level. Simulation outcomes fall within the literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow leading to a better understanding of the musculoskeletal system function from multiple length scales as well as to provide new informative data for clinical decision support and industrial applications.

  14. A hysteretic model considering Stribeck effect for small-scale magnetorheological damper

    Science.gov (United States)

    Zhao, Yu-Liang; Xu, Zhao-Dong

    2018-06-01

    Magnetorheological (MR) dampers are ideal semi-active control devices for vibration suppression. The mechanical properties of this type of device show strong nonlinear characteristics, especially in small-scale dampers. Therefore, developing a model that can accurately describe the nonlinearity of such devices is crucial to control design. In this paper, the dynamic characteristics of a small-scale MR damper developed by our research group are tested, and the Stribeck effect is observed in the low-velocity region. Then, an improved model based on the sigmoid model is proposed to describe the Stribeck effect observed in the experiment. After that, the parameters of this model are identified by genetic algorithms, and the mathematical relationship between these parameters and the input current, excitation frequency and amplitude is regressed. Finally, the predicted forces of the proposed model are validated against the experimental data. The results show that this model can accurately predict the mechanical properties of the small-scale damper, especially the Stribeck effect in the low-velocity region.
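
    A force model of the general kind described (a smooth sigmoid core plus a low-velocity Stribeck term) is easy to sketch. The functional form and every parameter value below are illustrative assumptions, not the authors' identified model, and the hysteretic displacement offset of the full sigmoid model is omitted for brevity.

      import numpy as np

      def damper_force(v, F_c=120.0, beta=40.0, F_s=60.0, v_s=0.03, c_v=800.0):
          """Sigmoid Coulomb-like core + viscous term + Stribeck bump: the
          exponential term boosts the force near zero velocity and decays with
          speed, reproducing the drop observed in the low-velocity region."""
          core = np.tanh(beta * v)                              # smooth sign(v)
          stribeck = F_s * np.exp(-(np.abs(v) / v_s) ** 2) * core
          return F_c * core + stribeck + c_v * v

      t = np.linspace(0.0, 1.0, 1000)
      v = 0.1 * 2.0 * np.pi * np.cos(2.0 * np.pi * t)   # 1 Hz sinusoidal velocity
      F = damper_force(v)
      print(f"peak force = {F.max():.1f} N")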

  15. Direct Scaling of Leaf-Resolving Biophysical Models from Leaves to Canopies

    Science.gov (United States)

    Bailey, B.; Mahaffee, W.; Hernandez Ochoa, M.

    2017-12-01

    Recent advances in the development of biophysical models and high-performance computing have enabled rapid increases in the level of detail that can be represented by simulations of plant systems. However, increasingly detailed models typically require increasingly detailed inputs, which can be a challenge to accurately specify. In this work, we explore the use of terrestrial LiDAR scanning data to accurately specify geometric inputs for high-resolution biophysical models that enables direct up-scaling of leaf-level biophysical processes. Terrestrial LiDAR scans generate "clouds" of millions of points that map out the geometric structure of the area of interest. However, points alone are often not particularly useful in generating geometric model inputs, as additional data processing techniques are required to provide necessary information regarding vegetation structure. A new method was developed that directly reconstructs as many leaves as possible that are in view of the LiDAR instrument, and uses a statistical backfilling technique to ensure that the overall leaf area and orientation distribution matches that of the actual vegetation being measured. This detailed structural data is used to provide inputs for leaf-resolving models of radiation, microclimate, evapotranspiration, and photosynthesis. Model complexity is afforded by utilizing graphics processing units (GPUs), which allows for simulations that resolve scales ranging from leaves to canopies. The model system was used to explore how heterogeneity in canopy architecture at various scales affects scaling of biophysical processes from leaves to canopies.

  16. The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.

    2004-01-01

    The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source...... (heat and smoke source) may improve the possibility of obtaining Reynolds number independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement...

  17. Backtracking search algorithm in CVRP models for efficient solid waste collection and route optimization.

    Science.gov (United States)

    Akhtar, Mahmuda; Hannan, M A; Begum, R A; Basri, Hassan; Scavino, Edgar

    2017-03-01

    Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency and CO2 emission. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The obtained results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost and CO2 emission by 50%, 47.77% and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
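
    The threshold-waste-level idea is easy to sketch: only bins filled beyond the TWL enter the routing problem, and a tour is then built over the reduced node set. The greedy nearest-neighbour tour below merely stands in for the paper's modified BSA metaheuristic, and all coordinates and fill levels are invented.

      import math

      bins = {  # id: (x_km, y_km, fill_fraction) -- hypothetical smart-bin data
          1: (0.5, 1.2, 0.92), 2: (1.8, 0.4, 0.35), 3: (2.2, 2.9, 0.78),
          4: (0.9, 3.1, 0.71), 5: (3.0, 1.0, 0.50), 6: (1.1, 2.0, 0.88),
      }
      TWL = 0.70          # threshold waste level (70% fill)
      depot = (0.0, 0.0)

      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      # 1) TWL filter: only sufficiently full bins are visited this trip.
      todo = {i: (x, y) for i, (x, y, f) in bins.items() if f >= TWL}

      # 2) Greedy nearest-neighbour tour from the depot (BSA would refine this).
      route, pos, total = [], depot, 0.0
      while todo:
          nxt = min(todo, key=lambda i: dist(pos, todo[i]))
          total += dist(pos, todo[nxt])
          pos = todo.pop(nxt)
          route.append(nxt)
      total += dist(pos, depot)   # return leg
      print("route:", route, f"-> distance = {total:.2f} km")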

  18. Evaluation of scale effects on hydraulic characteristics of fractured rock using fracture network model

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Sawada, Atsushi; Uchida, Masahiro; Ishiguro, Katsuhiko; Umeki, Hiroyuki; Sakamoto, Kazuhiko; Ohnishi, Yuzo

    2001-01-01

    It is important to take into account scale effects on fracture geometry if the modeling scale is much larger than the in-situ observation scale. The scale effect on fracture trace length, which is the most scale-dependent parameter, is investigated using fracture maps obtained at various scales at tunnel and dam sites. We found that the distribution of fracture trace length follows a negative power law distribution regardless of location and rock type. The hydraulic characteristics of fractured rock are also investigated by numerical analysis of a discrete fracture network (DFN) model in which a power law distribution of fracture radius is adopted. We found that as the exponent of the power law distribution becomes larger, the hydraulic conductivity of the DFN model increases and the travel time in the DFN model decreases. (author)
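
    Drawing fracture radii from a truncated negative power law, as in the DFN model above, reduces to inverse-transform sampling. The exponent and bounds below are illustrative, not the values inferred from the tunnel and dam site maps.

      import numpy as np

      def sample_power_law(n, a, r_min, r_max, rng):
          """Inverse-transform samples from p(r) ~ r^(-a) on [r_min, r_max], a > 1."""
          u = rng.random(n)
          k = 1.0 - a
          return (r_min ** k + u * (r_max ** k - r_min ** k)) ** (1.0 / k)

      rng = np.random.default_rng(42)
      radii = sample_power_law(10_000, a=2.5, r_min=0.5, r_max=50.0, rng=rng)
      print(f"mean radius = {radii.mean():.2f} m, max = {radii.max():.2f} m")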

  19. Modelling cloud effects on ozone on a regional scale : A case study

    NARCIS (Netherlands)

    Matthijsen, J.; Builtjes, P.J.H.; Meijer, E.W.; Boersen, G.

    1997-01-01

    We have investigated the influence of clouds on ozone on a regional scale (Europe) with a regional scale photochemical dispersion model (LOTOS). The LOTOS model calculates ozone and other photo-oxidant concentrations in the lowest three km of the troposphere, using actual meteorological data and

  20. Characterization of natural ventilation in wastewater collection systems.

    Science.gov (United States)

    Ward, Matthew; Corsi, Richard; Morton, Robert; Knapp, Tom; Apgar, Dirk; Quigley, Chris; Easter, Chris; Witherspoon, Jay; Pramanik, Amit; Parker, Wayne

    2011-03-01

    The purpose of the study was to characterize natural ventilation in full-scale gravity collection system components while measuring other parameters related to ventilation. Experiments were completed at four different locations in the wastewater collection systems of Los Angeles County Sanitation Districts, Los Angeles, California, and the King County Wastewater Treatment District, Seattle, Washington. The subject components were concrete gravity pipes ranging in diameter from 0.8 to 2.4 m (33 to 96 in.). Air velocity was measured in each pipe using a carbon-monoxide pulse tracer method. Air velocity was measured entering or exiting the components at vents using a standpipe and hotwire anemometer arrangement. Ambient wind speed, temperature, and relative humidity; headspace temperature and relative humidity; and wastewater flow and temperature were measured. The field experiments resulted in a large database of measured ventilation and related parameters characterizing ventilation in full-scale gravity sewers. Measured ventilation rates ranged from 23 to 840 L/s. The experimental data was used to evaluate existing ventilation models. Three models that were based upon empirical extrapolation, computational fluid dynamics, and thermodynamics, respectively, were evaluated based on predictive accuracy compared to the measured data. Strengths and weaknesses in each model were found and these observations were used to propose a concept for an improved ventilation model.

  1. A Coupled GCM-Cloud Resolving Modeling System, and a Regional Scale Model to Study Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo

    2007-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a superparameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) A brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) The Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) A discussion of the Goddard WRF version (its developments and applications).

  2. Design, construction, and evaluation of a 1:8 scale model binaural manikin.

    Science.gov (United States)

    Robinson, Philip; Xiang, Ning

    2013-03-01

    Many experiments in architectural acoustics require presenting listeners with simulations of different rooms to compare. Acoustic scale modeling is a feasible means to create accurate simulations of many rooms at reasonable cost. A critical component in a scale model room simulation is a receiver that properly emulates a human receiver. For this purpose, a scale model artificial head has been constructed and tested. This paper presents the design and construction methods used, proper equalization procedures, and measurements of its response. A headphone listening experiment examining sound externalization with various reflection conditions is presented that demonstrates its use for psycho-acoustic testing.

  3. Scaling of Core Material in Rubble Mound Breakwater Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Z.; Troch, P.

    1999-01-01

    The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity are varying in space and time. This makes it impossible to arrive at a fully...... correct scaling. The paper presents an empirical formula for the estimation of the wave induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation can be used for the estimation of pore velocities in cores. The paper proposes...... that the diameter of the core material in models is chosen in such a way that the Froude scale law holds for a characteristic pore velocity. The characteristic pore velocity is chosen as the average velocity of a most critical area in the core with respect to porous flow. Finally the method is demonstrated...

  4. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars on the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today's challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today's and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for systems development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  5. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.

  6. Modelling of evapotranspiration at field and landscape scales

    DEFF Research Database (Denmark)

    Overgaard, Jesper; Butts, M.B.; Rosbjerg, Dan

    2002-01-01

    observations from a nearby weather station. Detailed land-use and soil maps were used to set up the model. Leaf area index was derived from NDVI (Normalized Difference Vegetation Index) images. To validate the model at field scale the simulated evapotranspiration rates were compared to eddy...

  7. HOCOMOCO: towards a complete collection of transcription factor binding models for human and mouse via large-scale ChIP-Seq analysis

    KAUST Repository

    Kulakovskiy, Ivan V.; Vorontsov, Ilya E.; Yevshin, Ivan S.; Sharipov, Ruslan N.; Fedorova, Alla D.; Rumynskiy, Eugene I.; Medvedeva, Yulia A.; Magana-Mora, Arturo; Bajic, Vladimir B.; Papatsenko, Dmitry A.; Kolpakov, Fedor A.; Makeev, Vsevolod J.

    2017-01-01

    We present a major update of the HOCOMOCO collection that consists of patterns describing DNA binding specificities for human and mouse transcription factors. In this release, we profited from a nearly doubled volume of published in vivo experiments on transcription factor (TF) binding to expand the repertoire of binding models, replace low-quality models previously based on in vitro data only and cover more than a hundred TFs with previously unknown binding specificities. This was achieved by systematic motif discovery from more than five thousand ChIP-Seq experiments uniformly processed within the BioUML framework with several ChIP-Seq peak calling tools and aggregated in the GTRD database. HOCOMOCO v11 contains binding models for 453 mouse and 680 human transcription factors and includes 1302 mononucleotide and 576 dinucleotide position weight matrices, which describe primary binding preferences of each transcription factor and reliable alternative binding specificities. An interactive interface and bulk downloads are available on the web: http://hocomoco.autosome.ru and http://www.cbrc.kaust.edu.sa/hocomoco11. In this release, we complement HOCOMOCO by MoLoTool (Motif Location Toolbox, http://molotool.autosome.ru) that applies HOCOMOCO models for visualization of binding sites in short DNA sequences.
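
    As an aside on how such matrices are typically applied (a minimal sketch, not HOCOMOCO or MoLoTool code; the 3-position matrix below is invented for illustration), each window of a DNA sequence is scored by summing the matrix entries of the observed base at each motif position:

        import numpy as np

        # Rows = motif positions, columns = A, C, G, T (log-odds-like scores).
        pwm = np.array([[ 1.2, -0.8, -0.5, -1.0],
                        [-1.1,  1.4, -0.9, -0.7],
                        [-0.6, -1.2,  1.3, -0.8]])
        col = {"A": 0, "C": 1, "G": 2, "T": 3}

        def scan(seq, pwm):
            """Additive PWM score for every window of the sequence."""
            w = pwm.shape[0]
            return [(i, sum(pwm[j, col[seq[i + j]]] for j in range(w)))
                    for i in range(len(seq) - w + 1)]

        for pos, score in scan("ACGTGCA", pwm):
            print(pos, round(score, 2))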

  8. HOCOMOCO: towards a complete collection of transcription factor binding models for human and mouse via large-scale ChIP-Seq analysis

    KAUST Repository

    Kulakovskiy, Ivan V.

    2017-10-31

    We present a major update of the HOCOMOCO collection that consists of patterns describing DNA binding specificities for human and mouse transcription factors. In this release, we profited from a nearly doubled volume of published in vivo experiments on transcription factor (TF) binding to expand the repertoire of binding models, replace low-quality models previously based on in vitro data only and cover more than a hundred TFs with previously unknown binding specificities. This was achieved by systematic motif discovery from more than five thousand ChIP-Seq experiments uniformly processed within the BioUML framework with several ChIP-Seq peak calling tools and aggregated in the GTRD database. HOCOMOCO v11 contains binding models for 453 mouse and 680 human transcription factors and includes 1302 mononucleotide and 576 dinucleotide position weight matrices, which describe primary binding preferences of each transcription factor and reliable alternative binding specificities. An interactive interface and bulk downloads are available on the web: http://hocomoco.autosome.ru and http://www.cbrc.kaust.edu.sa/hocomoco11. In this release, we complement HOCOMOCO by MoLoTool (Motif Location Toolbox, http://molotool.autosome.ru) that applies HOCOMOCO models for visualization of binding sites in short DNA sequences.

  9. On Spatial Resolution in Habitat Models: Can Small-scale Forest Structure Explain Capercaillie Numbers?

    Directory of Open Access Journals (Sweden)

    Ilse Storch

    2002-06-01

    Full Text Available This paper explores the effects of spatial resolution on the performance and applicability of habitat models in wildlife management and conservation. A Habitat Suitability Index (HSI) model for the Capercaillie (Tetrao urogallus) in the Bavarian Alps, Germany, is presented. The model was exclusively built on non-spatial, small-scale variables of forest structure, without any consideration of landscape patterns. The main goal was to assess whether an HSI model developed from small-scale habitat preferences can explain differences in population abundance at larger scales. To validate the model, habitat variables and indirect signs of Capercaillie use (such as feathers or feces) were mapped in six study areas based on a total of 2901 sample plots of 20 m radius (for habitat variables) and 5 m radius (for Capercaillie sign). First, the model's representation of Capercaillie habitat preferences was assessed. Habitat selection, as expressed by Ivlev's electivity index, was closely related to HSI scores, increased from poor to excellent habitat suitability, and was consistent across all study areas. Then, habitat use was related to HSI scores at different spatial scales. Capercaillie use was best predicted from HSI scores at the small scale. Lowering the spatial resolution of the model stepwise to 36-ha, 100-ha, 400-ha, and 2000-ha areas and relating Capercaillie use to aggregated HSI scores resulted in a deterioration of fit at larger scales. Most importantly, there were pronounced differences in Capercaillie abundance at the scale of study areas, which could not be explained by the HSI model. The results illustrate that even if a habitat model correctly reflects a species' smaller-scale habitat preferences, its potential to predict population abundance at larger scales may remain limited.

  10. Model-driven approach to data collection and reporting for quality improvement.

    Science.gov (United States)

    Curcin, Vasa; Woodcock, Thomas; Poots, Alan J; Majeed, Azeem; Bell, Derek

    2014-12-01

    Continuous data collection and analysis have been shown essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, and grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC) for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
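
    To make the model-driven idea concrete, here is a minimal sketch under assumed, hypothetical field names (this is not the actual IDM schema or WISH code): a declarative metric definition drives both the "generated" data collection step and an SPC-style chart.

        from statistics import mean, stdev

        # Declarative metric definition standing in for an IDM entry
        # (hypothetical field names, not the actual IDM schema).
        metric = {"name": "door_to_needle_minutes", "type": float}

        def collect(raw_values, metric):
            """'Generated' collection step: coerce raw input to the declared type."""
            return [metric["type"](v) for v in raw_values]

        def spc_limits(values):
            """Three-sigma limits for a control chart; classical individuals
            charts use the average moving range, the sample sigma is used
            here for brevity."""
            centre = mean(values)
            sigma = stdev(values)
            return centre - 3 * sigma, centre, centre + 3 * sigma

        data = collect(["42", "38", "55", "47", "61", "44"], metric)
        print(metric["name"], spc_limits(data))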

  11. Nudging technique for scale bridging in air quality/climate atmospheric composition modelling

    Directory of Open Access Journals (Sweden)

    A. Maurizi

    2012-04-01

    Full Text Available The interaction between air quality and climate involves dynamical scales that cover a very wide range. Bridging these scales in numerical simulations is fundamental in studies devoted to megacity/hot-spot impacts on larger scales. A technique based on nudging is proposed as a bridging method that can couple different models at different scales.

    Here, nudging is used to force low-resolution chemical composition models with a run of a high-resolution model on a critical area. A one-year numerical experiment focused on the Po Valley hot spot is performed using the BOLCHEM model to assess the method.

    The results show that the model response is stable to the perturbation induced by the nudging and that, taking the high-resolution run as a reference, the performance of the nudged run improves with respect to the non-forced run. The effect outside the forcing area depends on transport and is significant in a substantial number of events, although it becomes weak on a seasonal or yearly basis.
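
    A generic Newtonian-relaxation step illustrates the technique (a sketch, not the BOLCHEM implementation; grid sizes and timescales are illustrative): inside the forcing area the coarse field is relaxed toward the high-resolution reference with a relaxation timescale tau.

        import numpy as np

        def nudge(c, c_ref, mask, dt, tau):
            """One relaxation step; mask is 1 inside the forcing area, 0 outside."""
            return c + dt * mask * (c_ref - c) / tau

        c = np.full((4, 4), 10.0)                    # coarse-model field
        c_ref = np.full((4, 4), 14.0)                # high-resolution run, remapped
        mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1  # hot-spot forcing area
        print(nudge(c, c_ref, mask, dt=600.0, tau=3600.0))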

  12. Aerosol numerical modelling at local scale

    International Nuclear Information System (INIS)

    Albriet, Bastien

    2007-01-01

    At local scale and in urban areas, an important part of particulate pollution is due to traffic, which contributes largely to the high number concentrations observed. Two aerosol sources are mainly linked to traffic: primary emission of soot particles and secondary nanoparticle formation by nucleation. The emissions and the mechanisms leading to the formation of such a bimodal distribution are still poorly understood. In this thesis, we address this problem through numerical modelling. The Modal Aerosol Model MAM is used, coupled with two 3D codes: a CFD code (Mercure Saturne) and a CTM (Polair3D). A sensitivity analysis is performed, at the roadside but also in the first meters of an exhaust plume, to identify the role of each process involved and the sensitivity of the different parameters used in the modelling. (author) [fr

  13. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow, and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model, to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs

  14. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J.E. [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow, and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model, to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs.

  15. Scale Effects Related to Small Physical Modelling of Overtopping of Rubble Mound Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Andersen, Thomas Lykke

    2009-01-01

    By comparison of overtopping discharges recorded in prototype and small scale physical models it was demonstrated in the EU-CLASH project that small scale tests significantly underestimate smaller discharges. Deviations in overtopping are due to model and scale effects. These effects are discusse...... armour on the upper part of the slope. This effect is believed to be the main reason for the found deviations between overtopping in prototype and small scale tests....

  16. The symplectic collective model and its submodels

    International Nuclear Information System (INIS)

    Santos Avancini, S. dos.

    1986-01-01

    A review of the symplectic collective model (SCM), emphasizing the mathematical and physical content of the model, is given. Since the SCM is not computationally viable, a detailed discussion of the properties and relationships of the SCM submodels, both in a spherical and in a deformed harmonic oscillator basis, is presented. It is shown that the deformed basis is an optimal one, from an analysis of the variational models, variation before projection (VBP) and variation after projection (VAP). To demonstrate that a calculation in the deformed basis is feasible, the submodel Sp_∥(1,R) x Sp_⊥(1,R) is used to calculate matrix elements of the operators of physical interest in ⁸Be. The Sp_∥(1,R) x Sp_⊥(1,R) model is the simplest submodel which contains the states of VBP and VAP. (author) [pt

  17. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
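
    To illustrate the general idea of stabilizing a high-dimensional covariance estimate by shrinking it toward a structured target (using scikit-learn's Ledoit-Wolf estimator as a widely available stand-in, not the paper's Bayesian hierarchical model):

        import numpy as np
        from sklearn.covariance import LedoitWolf

        rng = np.random.default_rng(0)
        X = rng.standard_normal((30, 200))    # n samples << p variables (OMICS-like)

        sample_cov = np.cov(X, rowvar=False)  # noisy and rank-deficient
        lw = LedoitWolf().fit(X)              # shrinks toward a scaled identity
        print(sample_cov.shape, round(float(lw.shrinkage_), 3))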

  18. Modeling and Validation across Scales: Parametrizing the effect of the forested landscape

    DEFF Research Database (Denmark)

    Dellwik, Ebba; Badger, Merete; Angelou, Nikolas

    be transferred into a parametrization of forests in wind models. The presentation covers three scales: the single tree, the forest edges and clearings, and the large-scale forested landscape in which the forest effects are parameterized with a roughness length. Flow modeling results and validation against...

  19. Levels of Organisation in agent-based modelling for renewable resources management. Agricultural water management collective rules enforcement in the French Drome River Valley Case Study

    International Nuclear Information System (INIS)

    Abrami, G.

    2004-11-01

    In the context of agent-based modelling for participative renewable resources management, this thesis is concerned with representing multiple tangled levels of organisation of a system. The Agent-Group-Role (AGR) formalism is borrowed from computer science research. It has been conceptually specified to handle levels of organisation, and behaviours within levels of organisation. A design methodology dedicated to AGR modelling has been developed, together with an implementation of the formalism on a multi-agent platform. AGR models of agricultural water management in the French Drôme River Valley have been built and tested. This experiment demonstrates the AGR formalism's ability to (1) clarify usually implicit hypotheses on action modes, scales or viewpoints, (2) facilitate the definition of scenarios with various collective rules and various rule-enforcement behaviours, and (3) generate bricks for generic irrigated catchment models. (author)

  20. Development and field validation of a regional, management-scale habitat model: A koala Phascolarctos cinereus case study.

    Science.gov (United States)

    Law, Bradley; Caccamo, Gabriele; Roe, Paul; Truskinger, Anthony; Brassil, Traecey; Gonsalves, Leroy; McConville, Anna; Stanton, Matthew

    2017-09-01

    Species distribution models have great potential to efficiently guide management for threatened species, especially for those that are rare or cryptic. We used MaxEnt to develop a regional-scale model for the koala Phascolarctos cinereus at a resolution (250 m) that could be used to guide management. To ensure the model was fit for purpose, we placed emphasis on validating the model using independently-collected field data. We reduced substantial spatial clustering of records in coastal urban areas using a 2-km spatial filter and by modeling separately two subregions separated by the 500-m elevational contour. A bias file was prepared that accounted for variable survey effort. Frequency of wildfire, soil type, floristics and elevation had the highest relative contribution to the model, while a number of other variables made minor contributions. The model was effective in discriminating different habitat suitability classes when compared with koala records not used in modeling. We validated the MaxEnt model at 65 ground-truth sites using independent data on koala occupancy (acoustic sampling) and habitat quality (browse tree availability). Koala bellows (n = 276) were analyzed in an occupancy modeling framework, while site habitat quality was indexed based on browse trees. Field validation demonstrated a linear increase in koala occupancy with higher modeled habitat suitability at ground-truth sites. Similarly, a site habitat quality index at ground-truth sites was correlated positively with modeled habitat suitability. The MaxEnt model provided a better fit to estimated koala occupancy than the site-based habitat quality index, probably because many variables were considered simultaneously by the model rather than just browse species. The positive relationship of the model with both site occupancy and habitat quality indicates that the model is fit for application at relevant management scales. Field-validated models of similar resolution would assist in

  1. Active Learning of Classification Models with Likert-Scale Feedback.

    Science.gov (United States)

    Xue, Yanbing; Hauskrecht, Milos

    2017-01-01

    Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
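
    A minimal sketch of the two ingredients, under assumptions (ordinary logistic regression stands in for the paper's incremental SVM, and the Likert-to-weight mapping is invented): graded feedback becomes soft labels, and the example the current model is least certain about is queried next.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Likert responses 1..5: 1/5 = confident class 0/1, 3 = unsure.
        likert_to_label = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1}
        likert_to_weight = {1: 1.0, 2: 0.6, 3: 0.2, 4: 0.6, 5: 1.0}

        X_lab = np.array([[0.0, 1.0], [1.0, 0.2], [0.9, 0.9], [0.1, 0.1]])
        likert = [1, 5, 4, 2]
        y = [likert_to_label[s] for s in likert]      # hard labels
        w = [likert_to_weight[s] for s in likert]     # soft confidence weights

        model = LogisticRegression().fit(X_lab, y, sample_weight=w)

        X_pool = np.array([[0.5, 0.5], [0.05, 0.95], [0.95, 0.1]])
        proba = model.predict_proba(X_pool)[:, 1]
        query = int(np.argmin(np.abs(proba - 0.5)))   # most uncertain example
        print("query index:", query, "p(class 1) =", round(float(proba[query]), 2))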

  2. Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales

    International Nuclear Information System (INIS)

    Krstic, Predrag S.

    2014-01-01

    Discussion of the accuracy of AMO data to be used in the plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude of energy, spatial and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, while temporal and spatial scales go from fs to years and from nm to m and beyond, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an “integrated AMO science” (IAMO). The principal goal of the IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules, photons, in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, the data provision to the plasma modeling community is a “two-way road” as long as the accuracy of the data is considered, requiring close interactions of the AMO and plasma modeling communities.

  3. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    Science.gov (United States)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen on large scales while considering single-field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results and a detailed system understanding were used to generate regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia, a regression tree was calibrated and validated using the model data and results of excess nitrogen from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen with the regression tree model; hence they had to be calculated and regionalized as well for the state of Thuringia. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees, the excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculating the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time and without losing the detailed knowledge from the nitrogen transport modeling. This was validated against modeling results from Fink (2004) in a catchment lying in the regionalization area; the regionalized and the modelled excess nitrogen show a 94% correspondence. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures regarding load reduction in the water bodies of Thuringia, to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
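
    As a stand-in illustration (the paper uses the GUIDE program; here an ordinary CART regression tree from scikit-learn, with hypothetical predictors and synthetic data), a tree trained on meso-scale model output can then be applied to regionalized macro-scale predictors:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(1)
        # Hypothetical predictors per cell: precipitation [mm], evapotranspiration [mm].
        X_meso = rng.uniform([500.0, 300.0], [1100.0, 700.0], size=(400, 2))
        excess_n = 0.04 * X_meso[:, 0] - 0.03 * X_meso[:, 1] + rng.normal(0.0, 2.0, 400)

        tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_meso, excess_n)

        X_macro = np.array([[800.0, 450.0], [600.0, 550.0]])  # regionalized cells
        print(tree.predict(X_macro))                           # illustrative kg N/ha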

  4. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    Unknown

    Keywords: multi-scale models; quasicontinuum method; finite elements.

  5. Collective effects in microscopic transport models

    International Nuclear Information System (INIS)

    Greiner, Carsten

    2003-01-01

    We review the major inputs of microscopic hadronic transport models and the physics aims when describing various aspects of relativistic heavy ion collisions at SPS energies. We first stress that particle ratios being reproduced by a statistical description does not necessarily imply the existence of a fully isotropic momentum distribution at hydrochemical freeze-out. Second, a short discussion of the status of strangeness production is given. Third, we demonstrate the importance of a new collective mechanism for producing (strange) antibaryons within a hadronic description, which guarantees sufficiently fast chemical equilibration.

  6. Device Scale Modeling of Solvent Absorption using MFIX-TFM

    Energy Technology Data Exchange (ETDEWEB)

    Carney, Janine E. [National Energy Technology Lab. (NETL), Albany, OR (United States); Finn, Justin R. [National Energy Technology Lab. (NETL), Albany, OR (United States); Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States)

    2016-10-01

    Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane), and fossil fuels account for a large majority of global CO2 emissions. That said, fossil fuels will continue to play a significant role in the generation of power for the foreseeable future. The extent to which CO2 is emitted needs to be reduced; carbon capture and sequestration are also necessary actions to tackle climate change. Different approaches exist for CO2 capture, including post-combustion and pre-combustion technologies, oxy-fuel combustion and/or chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology. To apply CO2 technologies at commercial scale, the availability, maturity and potential for scalability of a technology need to be considered. Solvent absorption is a proven technology, but not at the scale needed by a typical power plant. The scale-up, scale-down and design of laboratory and commercial packed bed reactors depend heavily on specific knowledge of the two-phase pressure drop, liquid holdup, wetting efficiency and mass transfer efficiency as a function of operating conditions. Simple scaling rules often fail to provide a proper design. Conventional reactor design modeling approaches will generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity used within these approaches, none of these models resolve the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally, design would be aided by the development of predictive models based on a truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first

  7. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models present in the literature are the eddy-viscosity-type models. In these models the subgrid scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e. they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and the SGS stress tensor are aligned; and they assume that a local balance exists between the SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy-viscosity-type models. The SSM models, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified Cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e. they are not able to ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that such a drawback is due to the fact that these models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale similarity LES model that is able to ensure an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and in terms of the SGS kinetic energy (computed by solving its balance equation). The
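
    For reference, the Bardina-type scale-similarity closure this family of models builds on can be sketched in standard notation (which may differ from the paper's):

        % Bardina-type scale-similarity closure; \bar{\cdot} is the grid
        % filter, applied twice to form the modified Leonard tensor.
        \[
          \tau_{ij} \approx C_L \, L^{m}_{ij},
          \qquad
          L^{m}_{ij} = \overline{\bar{u}_i\,\bar{u}_j} - \bar{\bar{u}}_i\,\bar{\bar{u}}_j ,
        \]

    In the model summarized above, the coefficient C_L is not a fixed constant but is expressed through the trace of the modified Leonard tensor and the SGS kinetic energy.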

  8. Low-frequency scaling applied to stochastic finite-fault modeling

    Science.gov (United States)

    Crane, Stephen; Motazedian, Dariush

    2014-01-01

    Stochastic finite-fault modeling is an important tool for simulating moderate to large earthquakes. It has proven to be useful in applications that require a reliable estimation of ground motions, mostly in the spectral frequency range of 1 to 10 Hz, which is the range of most interest to engineers. However, since there can be little resemblance between the low-frequency spectra of large and small earthquakes, this portion can be difficult to simulate using stochastic finite-fault techniques. This paper introduces two different methods to scale low-frequency spectra for stochastic finite-fault modeling. One method multiplies the subfault source spectrum by an empirical function. This function has three parameters to scale the low-frequency spectra: the level of scaling and the start and end frequencies of the taper. This empirical function adjusts the earthquake spectra only between the desired frequencies, conserving seismic moment in the simulated spectra. The other method is an empirical low-frequency coefficient that is added to the subfault corner frequency. This new parameter changes the ratio between high and low frequencies. For each simulation, the entire earthquake spectrum is adjusted, which may result in the seismic moment not being conserved for a simulated earthquake. These low-frequency scaling methods were used to reproduce recorded earthquake spectra from several earthquakes recorded in the Pacific Earthquake Engineering Research Center (PEER) Next Generation Attenuation Models (NGA) database. There were two methods of determining the stochastic parameters of best fit for each earthquake: a general residual analysis and an earthquake-specific residual analysis. Both methods resulted in comparable values for stress drop and the low-frequency scaling parameters; however, the earthquake-specific residual analysis obtained a more accurate distribution of the averaged residuals.
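
    A sketch of the first method (the taper shape and parameter values below are illustrative guesses, not the paper's calibrated function): the source spectrum is multiplied by a factor that equals the scaling level below the start frequency, unity above the end frequency, and transitions in between.

        import numpy as np

        def lf_taper(f, level, f1, f2):
            """Scaling factor: `level` below f1, 1 above f2, log-linear in between."""
            t = np.clip(np.log(f / f1) / np.log(f2 / f1), 0.0, 1.0)
            return level + (1.0 - level) * t

        f = np.logspace(-1, 1, 5)                  # 0.1 to 10 Hz
        spectrum = 1.0 / (1.0 + (f / 2.0) ** 2)    # omega-squared-like source shape
        scaled = spectrum * lf_taper(f, level=1.8, f1=0.2, f2=1.0)
        print(np.round(scaled, 3))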

  9. Updating of a dynamic finite element model from the Hualien scale model reactor building

    International Nuclear Information System (INIS)

    Billet, L.; Moine, P.; Lebailly, P.

    1996-08-01

    The forces occurring at the soil-structure interface of a building generally have a large influence on the way the building reacts to an earthquake. One can be tempted to characterise these forces more accurately by updating a model of the structure. However, this procedure requires an updating method suitable for dissipative models, since significant damping can be observed at the soil-structure interface of buildings. Such a method is presented here. It is based on the minimization of a mechanical energy built from the difference between eigendata calculated by the model and eigendata issued from experimental tests on the real structure. An experimental validation of this method is then proposed on a model of the HUALIEN scale-model reactor building. This scale model, built on the HUALIEN site in TAIWAN, is devoted to the study of soil-structure interaction. The updating concerned the soil impedances, modelled by a layer of springs and viscous dampers attached to the building foundation. A good agreement was found between the eigenmodes and dynamic responses calculated by the updated model and the corresponding experimental data. (authors). 12 refs., 3 figs., 4 tabs

  10. Light moduli in almost no-scale models

    International Nuclear Information System (INIS)

    Buchmueller, Wilfried; Moeller, Jan; Schmidt, Jonas

    2009-09-01

    We discuss the stabilization of the compact dimension for a class of five-dimensional orbifold supergravity models. Supersymmetry is broken by the superpotential on a boundary. Classically, the size L of the fifth dimension is undetermined, with or without supersymmetry breaking, and the effective potential is of no-scale type. The size L is fixed by quantum corrections to the Kaehler potential, the Casimir energy and Fayet-Iliopoulos (FI) terms localized at the boundaries. For an FI scale of order M_GUT, as in heterotic string compactifications with anomalous U(1) symmetries, one obtains L ∝ 1/M_GUT. A small mass is predicted for the scalar fluctuation associated with the fifth dimension, m_ρ ~ m_{3/2}/(L M). (orig.)

  11. Collective signaling behavior in a networked-oscillator model

    Science.gov (United States)

    Liu, Z.-H.; Hui, P. M.

    2007-09-01

    We propose and study the collective behavior of a model of networked signaling objects that incorporates several ingredients of real-life systems. These ingredients include spatial inhomogeneity with grouping of signaling objects, signal attenuation with distance, and delayed and impulsive coupling between non-identical signaling objects. Depending on the coupling strength and/or time-delay effect, the model exhibits completely, partially, and locally collective signaling behavior. In particular, a correlated signaling (CS) behavior is observed in which there exist time durations when nearly a constant fraction of oscillators in the system are in the signaling state. These time durations are much longer than the duration of a spike when a single oscillator signals, and they are separated by regular intervals in which nearly all oscillators are silent. Such CS behavior is similar to that observed in biological systems such as fireflies, cicadas, crickets, and frogs. The robustness of the CS behavior against noise is also studied. It is found that properly adjusting the coupling strength and noise level could enhance the correlated behavior.
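
    A minimal simulation sketch in the same spirit (not the paper's exact model; all parameter values are invented): pulse-coupled integrate-and-fire oscillators with a delayed, attenuated kick, whose firing can cluster into correlated bouts as the coupling grows.

        import numpy as np

        rng = np.random.default_rng(2)
        n, steps, dt, delay = 20, 2000, 0.01, 15   # delay measured in time steps
        omega = rng.uniform(0.9, 1.1, n)           # non-identical natural frequencies
        eps = 0.05                                 # coupling strength
        phase = rng.uniform(0.0, 1.0, n)
        fired = np.zeros((steps, n), dtype=bool)

        for t in range(steps):
            phase += omega * dt                    # free drift toward threshold
            if t >= delay:                         # delayed impulsive coupling
                phase += eps * fired[t - delay].sum() / n
            fired[t] = phase >= 1.0
            phase[fired[t]] -= 1.0                 # reset after a spike

        print("spike fraction per step:", round(float(fired.mean()), 4))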

  12. Groundwater flow analysis on local scale. Setting boundary conditions for groundwater flow analysis on site scale model in step 1

    International Nuclear Information System (INIS)

    Ohyama, Takuya; Saegusa, Hiromitsu; Onoe, Hironori

    2005-05-01

    Japan Nuclear Cycle Development Institute has been conducting a wide range of geoscientific research in order to build a foundation for multidisciplinary studies of the deep geological environment as a basis of research and development for geological disposal of nuclear wastes. Ongoing geoscientific research programs include the Regional Hydrogeological Study (RHS) project and the Mizunami Underground Research Laboratory (MIU) project in the Tono region, Gifu Prefecture. The main goal of these projects is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment at several spatial scales. The RHS project is a local-scale study for understanding the groundwater flow system from the recharge area to the discharge area. The Surface-based Investigation Phase of the MIU project is a site-scale study for understanding the groundwater flow system immediately surrounding the MIU construction site. The MIU project is being conducted using a multiphase, iterative approach. In this study, hydrogeological modeling and groundwater flow analysis at the local scale were carried out in order to set boundary conditions for the site-scale model, based on the data obtained from surface-based investigations in Step 1 of the site-scale work of the MIU project. As a result of the study, the head distribution needed to set boundary conditions for groundwater flow analysis of the site-scale model was obtained. (author)

  13. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low or data is scarce leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input

  14. Genome-scale modeling of yeast: chronology, applications and critical perspectives.

    Science.gov (United States)

    Lopes, Helder; Rocha, Isabel

    2017-08-01

    Over the last 15 years, several genome-scale metabolic models (GSMMs) were developed for different yeast species, aiding both the elucidation of new biological processes and the shift toward a bio-based economy, through the design of in silico inspired cell factories. Here, an historical perspective of the GSMMs built over time for several yeast species is presented and the main inheritance patterns among the metabolic reconstructions are highlighted. We additionally provide a critical perspective on the overall genome-scale modeling procedure, underlining incomplete model validation and evaluation approaches and the quest for the integration of regulatory and kinetic information into yeast GSMMs. A summary of experimentally validated model-based metabolic engineering applications of yeast species is further emphasized, while the main challenges and future perspectives for the field are finally addressed. © FEMS 2017.

  15. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    Science.gov (United States)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. Overall, the split sample validations
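
    For reference, the Nash-Sutcliffe Efficiency used to score these validations has the standard form NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))², e.g.:

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe Efficiency; 1 is perfect, 0 matches the mean of obs."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        print(round(nse([500, 300, 800, 650], [520, 280, 760, 700]), 3))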

  16. ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.

    Science.gov (United States)

    Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng

    2017-08-30

    While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they use, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.

  17. Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model

    Science.gov (United States)

    Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.

    2012-12-01

    The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood feeding triatomine insect vectors. Our objectives are: (1) advance knowledge on the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. This funding is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola (New Orleans)) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, multi-educational groups (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects and enables modelers to simulate and visualize topics that would either be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that

  18. Ozone Flux Measurement and Modelling on Leaf/Shoot and Canopy Scale

    Directory of Open Access Journals (Sweden)

    Ludger Grünhage

    Full Text Available The quantitative study of ozone effects on agricultural and forest vegetation requires knowledge of the pollutant dose absorbed by plants via leaf stomata, i.e. the stomatal flux. Nevertheless, the toxicologically effective dose can differ from the stomatal flux, because a pool of scavenging and detoxification processes reduces the amount of pollutant responsible for the expression of the harmful effects. The measurement of the stomatal flux is not immediate, and the quantification of the effective dose is still troublesome. The paper examines the conceptual aspects of ozone flux measurement and modelling in agricultural and ecological research. The ozone flux paradigm is conceptualized in a toxicological frame and faced at two different scales: the leaf/shoot and canopy scales. Leaf- and shoot-scale flux measurements require gas-exchange enclosure techniques, while canopy-scale flux measurements need a micrometeorological approach, including techniques such as eddy covariance and the aerodynamic gradient. At both scales, not all the measured ozone flux is stomatal flux. In fact, a non-negligible amount of ozone is destroyed on external plant surfaces, like leaf cuticles, or by gas-phase reactions with biogenic volatile compounds. The stomatal portion of the flux can be calculated from concurrent measurements of water vapour fluxes at both scales. Canopy-level flux measurements require very fast sensors and the fulfilment of many conditions to ensure that the measurements made above the canopy really reflect the canopy fluxes (constant flux hypothesis). Again, adjustments are necessary in order to correct for air density fluctuations and sensor-surface misalignment. As far as flux modelling is concerned, at leaf level the stomatal flux is simply obtained by multiplying the ozone concentration at the leaf with the stomatal conductance predicted by means of physiological models fed by meteorological parameters. At canopy level the stomatal flux is

  19. Multi-scale modeling with cellular automata: The complex automata approach

    NARCIS (Netherlands)

    Hoekstra, A.G.; Falcone, J.-L.; Caiazzo, A.; Chopard, B.

    2008-01-01

    Cellular Automata are commonly used to describe complex natural phenomena. In many cases it is required to capture the multi-scale nature of these phenomena. A single Cellular Automata model may not be able to efficiently simulate a wide range of spatial and temporal scales. It is our goal to

  20. Scaling up watershed model parameters: flow and load simulations of the Edisto River Basin, South Carolina, 2007-09

    Science.gov (United States)

    Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul

    2014-01-01

    hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD-H, and LOADEST. Because the focus of this investigation was on scaling up the models from McTier Creek, water-quality concentrations that were previously collected in the McTier Creek Basin were used in the water-quality load models.

  1. Modified Ashworth Scale (MAS) Model based on Clinical Data Measurement towards Quantitative Evaluation of Upper Limb Spasticity

    Science.gov (United States)

    Puzi, A. Ahmad; Sidek, S. N.; Mat Rosly, H.; Daud, N.; Yusof, H. Md

    2017-11-01

    Spasticity is a common symptom presented among people with sensorimotor disabilities. Imbalanced signals from the central nervous system (CNS), which is composed of the brain and spinal cord, to the muscles ultimately lead to the injury and death of motor neurons. In clinical practice, the therapist assesses muscle spasticity using a standard assessment tool like the Modified Ashworth Scale (MAS), the Modified Tardieu Scale (MTS) or the Fugl-Meyer Assessment (FMA). This is done subjectively, based on the experience and perception of the therapist, and is subject to the patient's fatigue level and body posture. However, inconsistency in the assessment is prevalent and could affect the efficacy of the rehabilitation process. Thus, the aim of this paper is to describe the methodology of data collection and the quantitative model of MAS developed to satisfy its description. Two subjects with MAS spasticity levels of 2 and 3 were involved in the clinical data measurement. Their level of spasticity was verified by an expert therapist using current practice. Data collection was established using a mechanical system equipped with a data acquisition system and LabVIEW software. The procedure involved a repeated series of flexions of the affected arm, which was moved against the platform using a lever mechanism operated by the therapist. The data were then analyzed to investigate the characteristics of the spasticity signal in correspondence to the MAS description. Experimental results revealed that the methodology used to quantify spasticity satisfied the MAS tool requirements according to its description. Therefore, the result is crucial and useful towards the development of a formal spasticity quantification model.

  2. Scale model study of the seismic response of a nuclear reactor core

    International Nuclear Information System (INIS)

    Dove, R.C.; Dunwoody, W.E.; Rhorer, R.L.

    1983-01-01

    The use of scale models to study the dynamics of a system of graphite core blocks used in certain nuclear reactor designs is described. Scaling laws, material selection, model instrumentation to measure collision forces, and the response of several models to simulated seismic excitation are covered. The effects of Coulomb friction between the blocks and of the clearance gaps between the blocks on the system response to seismic excitation are emphasized.

  3. Macroscopic High-Temperature Structural Analysis Model of Small-Scale PCHE Prototype (II)

    International Nuclear Information System (INIS)

    Song, Kee Nam; Lee, Heong Yeon; Hong, Sung Deok; Park, Hong Yoon

    2011-01-01

    The IHX (intermediate heat exchanger) of a VHTR (very high-temperature reactor) is a core component that transfers the high heat generated by the VHTR at 950 °C to a hydrogen production plant. Korea Atomic Energy Research Institute manufactured a small-scale prototype of a PCHE (printed circuit heat exchanger) that was being considered as a candidate for the IHX. In this study, as a part of the high-temperature structural integrity evaluation of the small-scale PCHE prototype, we carried out high-temperature structural analysis modeling and macroscopic thermal and elastic structural analysis for the small-scale PCHE prototype under small-scale gas-loop test conditions. The modeling and analysis were performed as a precedent study prior to the performance test in the small-scale gas loop. The results obtained in this study will be compared with the test results for the small-scale PCHE. Moreover, these results will be used in the design of a medium-scale PCHE prototype

  4. Implications of Adolescents’ Acculturation Strategies for Personal and Collective Self-esteem

    OpenAIRE

    Giang, Michael T.; Wittig, Michele A.

    2006-01-01

    Berry, Trimble, and Olmedo’s (1986) acculturation model was used to investigate the relationship among adolescents’ acculturation strategies, personal self-esteem, and collective self-esteem. Using data from 427 high school students, factor analysis results distinguished Collective Self-esteem Scale constructs (Luhtanen & Crocker, 1992) from both ethnic identity and outgroup orientation subscales of the Multigroup Ethnic Identity Measure (Phinney, 1992). Subsequent results showed that: 1) bot...

  5. Rainfall Erosivity Database on the European Scale (REDES): A product of a high temporal resolution rainfall data collection in Europe

    Science.gov (United States)

    Panagos, Panos; Ballabio, Cristiano; Borrelli, Pasquale; Meusburger, Katrin; Alewell, Christine

    2016-04-01

    The erosive force of rainfall is expressed as rainfall erosivity. Rainfall erosivity considers the rainfall amount and intensity, and is most commonly expressed as the R-factor in the (R)USLE model. The R-factor is calculated from a series of single storm events by multiplying the total storm kinetic energy with the measured maximum 30-minute rainfall intensity. This estimation requires high-temporal-resolution (e.g. 30 minutes) rainfall data for sufficiently long time periods (i.e. 20 years), which are not readily available at the European scale. The European Commission's Joint Research Centre (JRC), in collaboration with national/regional meteorological services and environmental institutions, carried out an extensive collection of high-resolution rainfall data in the 28 Member States of the European Union plus Switzerland in order to estimate rainfall erosivity in Europe. This resulted in the Rainfall Erosivity Database on the European Scale (REDES), which included 1,541 rainfall stations in 2014 and has been updated with 134 additional stations in 2015. The interpolation of those point R-factor values with a Gaussian Process Regression (GPR) model has resulted in the first rainfall erosivity map of Europe (Science of the Total Environment, 511, 801-815). The intra-annual variability of rainfall erosivity is crucial for modelling soil erosion on a monthly and seasonal basis. The monthly feature of rainfall erosivity was added in 2015 as an advancement of REDES and the respective mean annual R-factor map. Almost 19,000 monthly R-factor values of REDES contributed to the seasonal and monthly assessments of rainfall erosivity in Europe. According to the first results, more than 50% of the total rainfall erosivity in Europe takes place in the period from June to September. The spatial patterns of rainfall erosivity show significant differences between Northern and Southern Europe, as summer is the most erosive period in Central and Northern Europe and autumn in the
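
    The event-level calculation described here (total storm kinetic energy times maximum 30-minute intensity, EI30) can be sketched as follows; the unit-energy relation is the Brown and Foster form commonly used in RUSLE, so treat the constants as illustrative:

        import numpy as np

        def event_r(depths_mm, step_min=30):
            """EI30 for one storm from rainfall depths per 30-minute interval."""
            depths = np.asarray(depths_mm, float)
            intensity = depths * 60.0 / step_min                 # mm/h per interval
            e = 0.29 * (1.0 - 0.72 * np.exp(-0.05 * intensity))  # MJ/(ha*mm)
            energy = float(np.sum(e * depths))                   # MJ/ha
            return energy * float(intensity.max())               # MJ*mm/(ha*h)

        print(round(event_r([2.0, 8.5, 14.0, 4.5]), 2))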

  6. Large scale model experimental analysis of concrete containment of nuclear power plant strengthened with externally wrapped carbon fiber sheets

    International Nuclear Information System (INIS)

    Yang Tao; Chen Xiaobing; Yue Qingrui

    2005-01-01

    The concrete containment of a nuclear power station is the last shield structure in case of nuclear leakage during an accident. The experimental model in this paper is a 1/10 large-scale model of a real-sized prestressed reinforced concrete containment. The model containment was loaded by hydraulic pressure, which simulated the design pressure during an accident. Hundreds of sensors and advanced data-collection systems were used in the test. The containment was first loaded to the damage pressure and then strengthened by externally wrapping carbon fiber sheets around the outer surface of the containment structure. Experimental results indicate that the CFRP system can greatly increase the capacity of the concrete containment to withstand the inner pressure. The CFRP system can also effectively confine the deformation and the cracks caused by loading. (authors)

  7. Islands Climatology at Local Scale. Downscaling with CIELO model

    Science.gov (United States)

    Azevedo, Eduardo; Reis, Francisco; Tomé, Ricardo; Rodrigues, Conceição

    2016-04-01

    Islands with horizontal scales of the order of tens of km, as is the case of the Atlantic islands of Macaronesia, are subscale orographic features for Global Climate Models (GCMs), since the horizontal scales of these models are too coarse to give a detailed representation of the islands' topography. Even Regional Climate Models (RCMs) reveal limitations when forced to reproduce the climate of small islands, mainly in the way they flatten and lower the islands' elevation, reducing the capacity of the model to reproduce important local mechanisms that lead to a very deep local climate differentiation. Important local thermodynamic mechanisms, like the Foehn effect or the influence of topography on the radiation balance, have a prominent role in the climatic spatial differentiation. Advective transport of air - and the consequent adiabatic cooling induced by orography - transforms the state parameters of the air and thereby shapes the spatial fields of pressure, temperature and humidity. The same mechanism is at the origin of the orographic cloud cover that, besides its direct role as a water source through the reinforcement of precipitation, acts as a filter to direct solar radiation and as a source of long-wave radiation affecting the local energy balance. Also, the saturation (or near-saturation) conditions that these clouds provide constitute a barrier to water vapour diffusion in the mechanisms of evapotranspiration. Topographic factors like slope, aspect and orographic masking are also of significant importance in the local energy balance. Therefore, the simulation of the local-scale climate (past, present and future) in these archipelagos requires the use of downscaling techniques to adjust locally the outputs obtained at coarser scales. This presentation will discuss and analyse the evolution of the CIELO model (acronym for Clima Insular à Escala LOcal), a statistical/dynamical technique developed at the University of the Azores.
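
    As a toy illustration of the orographic mechanism described above (and emphatically not the CIELO implementation itself), one can follow an air parcel forced over a ridge: it cools at the dry adiabatic lapse rate up to the lifting condensation level, at the smaller moist rate above it, and warms dry-adiabatically on descent, producing the lee-side Foehn warming. All names and values below are illustrative assumptions.

        GAMMA_DRY, GAMMA_MOIST = 9.8e-3, 6.0e-3   # lapse rates, K/m (typical values)

        def lee_side_temperature(t0, lcl, ridge, base=0.0):
            """Lee-side temperature (deg C) back at `base` (m) for a parcel starting
            there at t0 (deg C), saturating above `lcl` (m), crossing `ridge` (m)."""
            t_top = t0 - GAMMA_DRY * (lcl - base) - GAMMA_MOIST * (ridge - lcl)
            return t_top + GAMMA_DRY * (ridge - base)   # dry, cloud-free descent

        print(lee_side_temperature(15.0, lcl=800.0, ridge=2000.0))  # ~19.6, i.e. Foehn warming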

  8. Role of scaling in the statistical modelling of finance

    Indian Academy of Sciences (India)

    Modelling the evolution of a financial index as a stochastic process is a problem that has awaited a full, satisfactory solution since Bachelier first formulated it in 1900. Here it is shown that the scaling with time of the return probability density function sampled from the historical series suggests a successful model.
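
    A hedged sketch of what such a scaling test looks like in practice: under the ansatz P(x, t) = t^-H g(x / t^H) for the PDF of returns x over horizon t, rescaling the empirical histograms by t^H should collapse them onto a single master curve g. The code below is illustrative only; the paper's estimator and data handling may differ.

        import numpy as np

        def rescaled_return_pdfs(prices, lags=(1, 2, 4, 8), H=0.5, bins=50):
            """Empirical log-return PDFs at several lags, rescaled under
            P(x, t) = t^-H g(x / t^H); the curves collapse if H is right."""
            logp = np.log(np.asarray(prices, dtype=float))
            curves = {}
            for t in lags:
                r = logp[t:] - logp[:-t]                    # returns over horizon t
                pdf, edges = np.histogram(r, bins=bins, density=True)
                x = 0.5 * (edges[1:] + edges[:-1])          # bin centres
                curves[t] = (x / t**H, pdf * t**H)          # rescaled abscissa and PDF
            return curves

    Plotting the rescaled curves for a few trial values of H (H = 1/2 being the Bachelier/Brownian benchmark) makes the quality of the collapse, and hence the suggested model, visually apparent.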

  9. Assessing the impacts of bait collection on inter-tidal sediment and the associated macrofaunal and bird communities: The importance of appropriate spatial scales.

    Science.gov (United States)

    Watson, G J; Murray, J M; Schaefer, M; Bonner, A; Gillingham, M

    2017-09-01

    Bait collection is a multibillion-dollar worldwide activity that is often managed ineffectively. So that managers can understand the impacts on protected inter-tidal mudflats and waders at appropriate spatial scales, macrofaunal surveys combined with video recordings of birds and bait collectors were undertaken at two UK sites. Dug sediment constituted approximately 8% of the surveyed area at both sites and is less muddy (lower organic content) than undug sediment, which may have significant implications for turbidity. At one site, differences in the macrofaunal community between dug and undug areas (comparing the same shore height) occurred, as well as changes in the dispersion of the community. Collection also induces a 'temporary loss of habitat' for some birds, as bait-collector numbers negatively correlate with wader and gull abundance. Bait collection changes the coherence and ecological structure of inter-tidal mudflats as well as directly affecting wading birds. However, as β diversity increased, we suggest that management at appropriate hectare/site scales could maximise biodiversity/function whilst still supporting collection. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  10. A Structural Equation Modelling of the Academic Self-Concept Scale

    Science.gov (United States)

    Matovu, Musa

    2014-01-01

    The study aimed at validating the academic self-concept scale by Liu and Wang (2005) in measuring academic self-concept among university students. Structural equation modelling was used to validate the scale, which was composed of two subscales: academic confidence and academic effort. The study was conducted on university students; males and…

  11. Model for the separate collection of packaging waste in Portuguese low-performing recycling regions.

    Science.gov (United States)

    Oliveira, V; Sousa, V; Vaz, J M; Dias-Ferreira, C

    2018-06-15

    Separate collection of packaging waste (glass; plastic/metals; paper/cardboard) is currently a widespread practice throughout Europe. It enables the recovery of good-quality recyclable materials. However, separate collection performances are quite heterogeneous, with some countries reaching higher levels than others. In the present work, separate collection of packaging waste has been evaluated in a low-performance recycling region in Portugal in order to investigate which factors most affect performance in a bring-bank collection system. The variability of separate collection yields (kg per inhabitant per year) among 42 municipalities was scrutinized for the year 2015 against possible explanatory factors. A total of 14 possible explanatory factors were analysed, falling into two groups: socio-economic/demographic and waste-collection-service related. Regression models were built in an attempt to evaluate the individual effect of each factor on separate collection yields and to predict changes in the collection yields by acting on those factors. The best model obtained is capable of explaining 73% of the variation found in the separate collection yields. The model includes the following statistically significant indicators affecting the success of separate collection: i) inhabitants per bring-bank; ii) relative accessibility to bring-banks; iii) degree of urbanization; iv) number of school years attended; and v) area. The model presented in this work was developed specifically for the bring-bank system, has explanatory power and quantifies the impact of each factor on separate collection yields. It can therefore be used as a support tool by local and regional waste management authorities in the definition of future strategies to increase the collection of good-quality recyclables and to achieve national and regional targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
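
    The model described above is an ordinary multiple linear regression of yields on the five significant indicators. The following is a minimal sketch of that model form, with illustrative column names; the paper's actual estimation procedure and any variable transformations are not reproduced here.

        import numpy as np

        def fit_yield_model(X, y):
            """OLS fit: X is (n_municipalities, 5) with columns
            [inhabitants_per_bring_bank, accessibility_to_bring_banks,
             urbanization_degree, school_years, area]; y is the separate
            collection yield (kg per inhabitant per year)."""
            A = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
            return coef, r2                              # the paper reports R^2 = 0.73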

  12. Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate Scale Water Quality Model

    Energy Technology Data Exchange (ETDEWEB)

    Khangaonkar, Tarang; Sackmann, Brandon S.; Long, Wen; Mohamedali, Teizeen; Roberts, Mindy

    2012-10-01

    The Salish Sea, including Puget Sound, is a large estuarine system bounded by over seven thousand miles of complex shorelines; it consists of several subbasins and many large inlets with distinct properties of their own. Pacific Ocean water enters Puget Sound through the Strait of Juan de Fuca at depth over the Admiralty Inlet sill. Ocean water mixed with freshwater discharges from runoff, rivers, and wastewater outfalls exits Puget Sound through the brackish surface outflow layer. Nutrient pollution is considered one of the largest threats to Puget Sound. There is considerable interest in understanding the effect of nutrient loads on the water quality and ecological health of Puget Sound in particular and the Salish Sea as a whole. The Washington State Department of Ecology (Ecology) contracted with Pacific Northwest National Laboratory (PNNL) to develop a coupled hydrodynamic and water quality model. The water quality model simulates algae growth, dissolved oxygen (DO), and nutrient dynamics in Puget Sound to inform potential Puget Sound-wide nutrient management strategies. Specifically, the project is expected to help determine 1) whether current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions is necessary to reduce or control human impacts on DO levels in the sensitive areas. The project did not include any additional data collection but instead relied on currently available information. This report describes the model development effort conducted during the period 2009 to 2012 under a U.S. Environmental Protection Agency (EPA) cooperative agreement with PNNL, Ecology, and the University of Washington awarded under the National Estuary Program.

  13. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  14. Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point

    Energy Technology Data Exchange (ETDEWEB)

    Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)

    2016-12-15

    We study the scaling properties of Higgs-Yukawa models. Using the technique of finite-size scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us the advantage of fitting the scaling functions to lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong-Yukawa-coupling region.
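
    For orientation, a schematic form of a finite-size scaling ansatz near a Gaussian fixed point reads as follows; this generic form (with the mean-field exponent ν = 1/2 in four dimensions) is our illustration only, and the paper's on-shell-scheme scaling functions may carry logarithmic corrections.

        \[
          O(t, L) \;=\; L^{\rho/\nu}\, f_O\!\bigl(t\, L^{1/\nu}\bigr),
          \qquad \nu = \tfrac{1}{2},
        \]

    where t is the reduced distance to the critical point, L the linear lattice extent, and ρ the mean-field scaling dimension of the observable O.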

  15. Groundwater flow simulation on local scale. Setting boundary conditions of groundwater flow simulation on site scale model in the step 4

    International Nuclear Information System (INIS)

    Onoe, Hironori; Saegusa, Hiromitsu; Ohyama, Takuya

    2007-03-01

    Japan Atomic Energy Agency has been conducting a wide range of geoscientific research in order to build a foundation for multidisciplinary studies of the deep geological environment as a basis of research and development for the geological disposal of nuclear wastes. Ongoing geoscientific research programs include the Regional Hydrogeological Study (RHS) project and the Mizunami Underground Research Laboratory (MIU) project in the Tono region, Gifu Prefecture. The main goal of these projects is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment at several spatial scales. The RHS project is a Local scale study for understanding the groundwater flow system from the recharge area to the discharge area. The Surface-based Investigation Phase of the MIU project is a Site scale study for understanding the deep geological environment immediately surrounding the MIU construction site using a multiphase, iterative approach. In this study, hydrogeological modeling and groundwater flow simulation on the Local scale were carried out in order to set boundary conditions for the Site scale model, based on the data obtained from surface-based investigations in Step 4 of the Site scale of the MIU project. As a result of the study, boundary conditions for groundwater flow simulation on the Site scale model of Step 4 could be obtained. (author)

  16. Inferring spatial memory and spatiotemporal scaling from GPS data: comparing red deer Cervus elaphus movements with simulation models.

    Science.gov (United States)

    Gautestad, Arild O; Loe, Leif E; Mysterud, Atle

    2013-05-01

    1. Increased inference regarding the underlying behavioural mechanisms of animal movement can be achieved by comparing GPS data with statistical-mechanical movement models, such as random walks and Lévy walks, with known underlying behaviour and statistical properties. 2. GPS data are typically collected at intervals of ≥ 1 h, not exactly tracking every mechanistic step along the movement path, so a statistical-mechanical model approach rather than a mechanistic approach is appropriate. However, comparisons require a coherent framework involving both the scaling and memory aspects of the underlying process. Thus, simulation models have recently been extended to include memory-guided returns to previously visited patches, that is, site fidelity. 3. We define four main classes of movement, differing in their incorporation of memory and scaling (based on respective intervals of the statistical fractal dimension D and the presence/absence of site fidelity). Using three statistical protocols to estimate D and site fidelity, we compare these main movement classes with patterns observed in GPS data from 52 female red deer (Cervus elaphus). 4. The results show best compliance with a scale-free and memory-enhanced kind of space use; that is, a power-law distribution of step lengths, a fractal distribution of the spatial scatter of fixes, and site fidelity. 5. Our study thus demonstrates how inference regarding memory effects and a hierarchical pattern of space use can be derived from analysis of GPS data. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
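
    To make reference models of this kind concrete, here is a toy simulation contrasting a Brownian-type walk with a scale-free (Lévy) walk; the diagnostic is the log-log slope of squared displacement versus time, roughly 1 for diffusive motion and greater than 1 for superdiffusion. This is our illustrative sketch, not the authors' protocols, and it omits the memory/site-fidelity component.

        import numpy as np

        rng = np.random.default_rng(1)

        def walk(n_steps, levy_mu=None):
            """2-D walk with uniform turning angles: Rayleigh step lengths
            (Brownian-like) if levy_mu is None, else Pareto-tailed steps with
            P(l) ~ l^-mu, i.e. a Levy walk."""
            if levy_mu is None:
                steps = rng.rayleigh(1.0, n_steps)
            else:
                steps = (1.0 - rng.random(n_steps)) ** (-1.0 / (levy_mu - 1.0))
            theta = rng.uniform(0.0, 2.0 * np.pi, n_steps)
            return np.cumsum(np.column_stack([steps * np.cos(theta),
                                              steps * np.sin(theta)]), axis=0)

        for mu in (None, 2.0):
            xy = walk(100_000, levy_mu=mu)
            d2 = (xy ** 2).sum(axis=1)                  # squared displacement from origin
            t = np.arange(1, len(d2) + 1)
            slope = np.polyfit(np.log(t), np.log(d2 + 1e-9), 1)[0]
            print(mu, round(slope, 2))                  # noisy single-path estimate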

  17. Symmetry-guided large-scale shell-model theory

    Czech Academy of Sciences Publication Activity Database

    Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.

    2016-01-01

    Roč. 89, JUL (2016), s. 101-136 ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016

  18. Scaling behavior of an airplane-boarding model.

    Science.gov (United States)

    Brics, Martins; Kaupužs, Jevgenijs; Mahnke, Reinhard

    2013-04-01

    An airplane-boarding model, introduced earlier by Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)], is studied with the aim of determining precisely its asymptotic power-law scaling behavior for a large number of passengers N. Based on Monte Carlo simulation data for very large system sizes up to N = 2^16 = 65536, we have analyzed numerically the scaling behavior of the mean boarding time and other related quantities. In analogy with critical phenomena, we have used appropriate scaling Ansätze, which include the leading term as some power of N (e.g., ∝ N^α for the mean boarding time), as well as power-law corrections to scaling. Our results clearly show that α = 1/2 holds with a very high numerical accuracy (α = 0.5001 ± 0.0001). This value deviates essentially from α ≈ 0.69, obtained earlier by Frette and Hemmer from data within the range 2 ≤ N ≤ 16. Our results confirm the convergence of the effective exponent α_eff(N) to 1/2 at large N, as observed by Bernstein. Our analysis explains this effect. Namely, the effective exponent α_eff(N) varies from values about 0.7 for small system sizes to the true asymptotic value 1/2 at N → ∞, almost linearly in N^(-1/3) for large N. This means that the variation is caused by corrections to scaling, the leading correction-to-scaling exponent being θ ≈ 1/3. We have also estimated other exponents: ν = 1/2 for the mean number of passengers taking seats simultaneously in one time step, β = 1 for the second moment of t_b, and γ ≈ 1/3 for its variance.
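
    The corrections-to-scaling argument is easy to reproduce numerically. Assuming the form ⟨t_b⟩ = a N^α (1 + b N^-θ) with α = 1/2 and θ = 1/3, the two-point effective exponent α_eff(N) = log₂[⟨t_b(2N)⟩/⟨t_b(N)⟩] starts near 0.7 at small N and drifts toward 1/2 as N grows. The amplitudes a and b below are illustrative choices, not fitted values from the paper.

        import numpy as np

        a, b, alpha, theta = 1.0, -0.9, 0.5, 1.0 / 3.0     # illustrative amplitudes

        def mean_boarding_time(N):
            """Assumed scaling form with a leading correction to scaling."""
            return a * N**alpha * (1.0 + b * N**(-theta))

        N = 2.0 ** np.arange(4, 18)
        t = mean_boarding_time(N)
        alpha_eff = np.log2(t[1:] / t[:-1])                # two-point effective exponent
        for n, ae in zip(N[:-1].astype(int), alpha_eff):
            print(f"N={n:6d}  alpha_eff={ae:.4f}")         # ~0.66 at N=16, -> 0.5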

  19. The mechanical properties modeling of nano-scale materials by molecular dynamics

    NARCIS (Netherlands)

    Yuan, C.; Driel, W.D. van; Poelma, R.; Zhang, G.Q.

    2012-01-01

    We propose a molecular modeling strategy which is capable of modeling the mechanical properties of nano-scale low-dielectric (low-k) materials. This modeling strategy has also been validated against the buckling force of carbon nanotubes (CNT). The modeling framework consists of a model generation method,

  20. A hybrid plume model for local-scale dispersion

    Energy Technology Data Exchange (ETDEWEB)

    Nikmo, J.; Tuovinen, J.P.; Kukkonen, J.; Valkama, I.

    1997-12-31

    The report describes the contribution of the Finnish Meteorological Institute to the project 'Dispersion from Strongly Buoyant Sources', under the 'Environment' programme of the European Union. The project addresses the atmospheric dispersion of gases and particles emitted from typical fires in warehouses and chemical stores. In this study, only the 'passive plume' regime, in which the influence of plume buoyancy is no longer important, is addressed. The mathematical model developed and its numerical testing are discussed. The model is based on atmospheric boundary-layer scaling theory. In the vicinity of the source, Gaussian equations are used in both the horizontal and vertical directions. After a specified transition distance, gradient-transfer theory is applied in the vertical direction, while the horizontal dispersion is still assumed to be Gaussian. The dispersion parameters and eddy diffusivity are modelled in a form which facilitates the use of a meteorological pre-processor. A new model for the vertical eddy diffusivity (K_z), which is a continuous function of height across the various atmospheric scaling regions, is also presented. The model includes a treatment of the dry deposition of gases and particulate matter, but wet deposition has been neglected. A numerical solver for the atmospheric diffusion equation (ADE) has been developed. The accuracy of the numerical model was analysed by comparing the model predictions with two analytical solutions of the ADE. The numerical deviations of the model predictions from these analytic solutions were less than two per cent for the computational regime. The report gives numerical results for the vertical profiles of the eddy diffusivity and the dispersion parameters, and shows spatial concentration distributions in various atmospheric conditions. 39 refs.
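
    As a minimal sketch of the near-source regime described above, the classical Gaussian plume solution with ground reflection is shown below. The sigma parameterizations are Briggs-type open-country fits for neutral conditions, inserted purely for illustration; the model in the report instead derives its dispersion parameters from boundary-layer scaling via a meteorological pre-processor.

        import numpy as np

        def gaussian_plume(x, y, z, Q=1.0, u=5.0, h=20.0):
            """Concentration (g/m^3) at (x, y, z) m downwind of a continuous
            point source: emission rate Q (g/s), wind speed u (m/s),
            effective release height h (m). Ground treated as a reflector."""
            sig_y = 0.08 * x / np.sqrt(1.0 + 1.0e-4 * x)   # Briggs rural, neutral class
            sig_z = 0.06 * x / np.sqrt(1.0 + 1.5e-3 * x)
            lateral = np.exp(-y**2 / (2.0 * sig_y**2))
            vertical = (np.exp(-(z - h)**2 / (2.0 * sig_z**2))
                        + np.exp(-(z + h)**2 / (2.0 * sig_z**2)))  # image source
            return Q / (2.0 * np.pi * u * sig_y * sig_z) * lateral * vertical

        print(gaussian_plume(x=500.0, y=0.0, z=1.5))       # ground-level centreline value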