WorldWideScience

Sample records for larger scale models

  1. Scaling local species-habitat relations to the larger landscape with a hierarchical spatial count model

    Science.gov (United States)

    Thogmartin, W.E.; Knutson, M.G.

    2007-01-01

    Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species; many of these associations ran counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.
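
    The hierarchical count-modelling approach described in this record can be illustrated with a toy simulation. Everything below — the coefficients, the two covariates, the site-level random effect — is an invented assumption for demonstration, not the authors' fitted model:

```python
import math
import random

def simulate_counts(n_sites, b0, b_local, b_landscape, sd_site, seed=1):
    """Simulate bird counts from a toy hierarchical Poisson model:
    log(lambda_i) = b0 + b_local*x1_i + b_landscape*x2_i + eps_i,
    where eps_i ~ Normal(0, sd_site) is a site-level random effect."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sites):
        x_local, x_land = rng.random(), rng.random()  # standardized covariates
        eta = b0 + b_local * x_local + b_landscape * x_land + rng.gauss(0.0, sd_site)
        lam = math.exp(eta)
        # Poisson draw via Knuth's method (adequate for small lambda)
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        counts.append(k)
    return counts

counts = simulate_counts(100, b0=0.2, b_local=0.5, b_landscape=-0.3, sd_site=0.3)
```

    Fitting such a model to real survey counts would typically be done with hierarchical Bayesian software rather than by hand; the sketch only shows how local and landscape covariates enter at different levels.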

  2. Parameterization of cirrus microphysical and radiative properties in larger-scale models

    International Nuclear Information System (INIS)

    Heymsfield, A.J.; Coen, J.L.

    1994-01-01

    This study exploits measurements in clouds sampled during several field programs to develop and validate parameterizations that represent the physical and radiative properties of convectively generated cirrus clouds in intermediate and large-scale models. The focus is on cirrus anvils because they occur frequently, cover large areas, and play a large role in the radiation budget. Preliminary work focuses on understanding the microphysical, radiative, and dynamical processes that occur in these clouds. A detailed microphysical package has been constructed that considers the growth of the following hydrometeor types: water drops, needles, plates, dendrites, columns, bullet rosettes, aggregates, graupel, and hail. Particle growth processes include diffusional and accretional growth, aggregation, sedimentation, and melting. This package is being implemented in a simple dynamical model that tracks the evolution and dispersion of hydrometeors in a stratiform anvil cloud. Given the momentum, vapor, and ice fluxes into the stratiform region and the temperature and humidity structure in the anvil's environment, this model will suggest anvil properties and structure.

  3. Electrodialytic removal of cadmium from biomass combustion fly ash in larger scale

    DEFF Research Database (Denmark)

    Pedersen, Anne Juul; Ottosen, Lisbeth M.; Simonsen, Peter

    2005-01-01

    Due to a high concentration of the toxic heavy metal cadmium (Cd), biomass combustion fly ash often fails to meet the Danish legislative requirements for recycling on agricultural fields. It has previously been shown that it is possible to reduce the concentration of Cd in different bio ashes...... significantly by using electrodialytic remediation, an electrochemically assisted extraction method. In this work the potential of the method was demonstrated at larger scale. Three different experimental set-ups were used, ranging from bench-scale (25 L ash suspension) to pilot scale (0.3 - 3 m3......). The experimental ash was a straw combustion fly ash suspended in water. Within 4 days of remediation, Cd concentrations below the limiting concentration of 5.0 mg Cd/kg DM for straw ash were reached. On the basis of these results, the energy costs for remediation of ash at industrial scale have been estimated...

  4. Persistent Homology fingerprinting of microstructural controls on larger-scale fluid flow in porous media

    Science.gov (United States)

    Moon, C.; Mitchell, S. A.; Callor, N.; Dewers, T. A.; Heath, J. E.; Yoon, H.; Conner, G. R.

    2017-12-01

    Traditional subsurface continuum multiphysics models include useful yet limiting geometrical assumptions: penny- or disc-shaped cracks, spherical or elliptical pores, bundles of capillary tubes, cubic law fracture permeability, etc. Each physics (flow, transport, mechanics) uses constitutive models with an increasing number of fit parameters that pertain to the microporous structure of the rock, but bear no inter-physics relationships or self-consistency. Recent advances in digital rock physics and pore-scale modeling link complex physics to detailed pore-level geometries, but measures for upscaling are somewhat unsatisfactory and come at a high computational cost. Continuum mechanics relies on a separation between small-scale pore fluctuations and larger-scale heterogeneity (and perhaps anisotropy), but this separation can break down (particularly for shales). Algebraic topology offers powerful mathematical tools for describing the local-to-global structure of shapes. Persistent homology, in particular, analyzes the dynamics of topological features and summarizes them as numeric values. It offers a roadmap both to "fingerprint" the topology of pore structure and multiscale connectedness and to link pore structure to physical behavior, thus potentially providing a means to relate constitutive behaviors to pore structure in a self-consistent way. We present a persistent homology (PH) analysis framework for 3D image sets, including a focused ion beam-scanning electron microscopy data set of the Selma Chalk. We extract structural characteristics of sampling volumes via persistent homology and fit a statistical model using the summarized values to estimate porosity, permeability, and connectivity; Lattice Boltzmann methods for single-phase flow modeling are used to obtain the relationships. These PH methods allow for prediction of geophysical properties based on geometry and connectivity in a computationally efficient way. Sandia National Laboratories is a
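
    The core idea of persistent homology — tracking when topological features are born and die across a filtration — can be sketched for the simplest (0-dimensional) case. The union-find routine below computes birth-death pairs of connected components for the sublevel sets of a 1D signal; it is an illustrative toy, not the 3D FIB-SEM pipeline of the study:

```python
def persistence_0d(values):
    """Birth-death pairs of connected components for the sublevel-set
    filtration of a 1D sequence (elder rule: at a merge, the component
    with the later birth dies). The oldest component never dies and is
    omitted; zero-persistence pairs are filtered out."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, values[i]
        for j in (i - 1, i + 1):
            if j not in parent:
                continue
            ri, rj = find(i), find(j)
            if ri == rj:
                continue
            young, old = (rj, ri) if birth[ri] <= birth[rj] else (ri, rj)
            pairs.append((birth[young], values[i]))
            parent[young] = old
    return [(b, d) for b, d in pairs if d > b]

print(persistence_0d([0, 2, 1, 3, 0.5]))  # → [(1, 2), (0.5, 3)]
```

    Real pore-structure fingerprints would use higher-dimensional homology on 3D images (e.g. via a library such as GUDHI), but the bookkeeping above is the same elder-rule idea.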

  5. Evaluation of scale effects on hydraulic characteristics of fractured rock using fracture network model

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Sawada, Atsushi; Uchida, Masahiro; Ishiguro, Katsuhiko; Umeki, Hiroyuki; Sakamoto, Kazuhiko; Ohnishi, Yuzo

    2001-01-01

    It is important to take into account scale effects on fracture geometry if the modeling scale is much larger than the in-situ observation scale. The scale effect on fracture trace length, which is the most scale-dependent parameter, is investigated using fracture maps obtained at various scales at tunnel and dam sites. We found that the distribution of fracture trace length follows a negative power law distribution regardless of location and rock type. The hydraulic characteristics of fractured rock are also investigated by numerical analysis of a discrete fracture network (DFN) model in which a power law distribution of fracture radius is adopted. We found that as the exponent of the power law distribution becomes larger, the hydraulic conductivity of the DFN model increases and the travel time in the DFN model decreases. (author)
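
    A truncated negative power law of the kind reported above can be sampled by inverting its CDF, which is how fracture lengths or radii are typically generated for a DFN realization. The exponent and truncation bounds below are placeholders, not values fitted in the study:

```python
import random

def sample_power_law(n, exponent, l_min, l_max, seed=0):
    """Draw n fracture trace lengths from a truncated negative power law,
    p(l) proportional to l**(-exponent) on [l_min, l_max], by inverse-CDF
    sampling (assumes exponent != 1)."""
    rng = random.Random(seed)
    a = 1.0 - exponent
    lo, hi = l_min ** a, l_max ** a
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / a) for _ in range(n)]

lengths = sample_power_law(1000, exponent=2.5, l_min=0.1, l_max=100.0)
```

    A larger exponent concentrates the sample toward short traces, which is one way to probe the sensitivity of DFN hydraulic properties to the fitted exponent.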

  6. Spatial-Scale Characteristics of Precipitation Simulated by Regional Climate Models and the Implications for Hydrological Modeling

    DEFF Research Database (Denmark)

    Rasmussen, S.H.; Christensen, J. H.; Drews, Martin

    2012-01-01

    Precipitation simulated by regional climate models (RCMs) is generally biased with respect to observations, especially at the local scale of a few tens of kilometers. This study investigates how well two different RCMs are able to reproduce the spatial correlation patterns of observed summer...... length scales on the order of 130 km are found in both observed data and RCM simulations. When simulations and observations are aggregated to different grid sizes, the pattern correlation significantly decreases when the aggregation length is less than roughly 100 km. Furthermore, the intermodel standard......, reflecting larger predictive certainty of the RCMs at larger scales. The findings on aggregated grid scales are shown to be largely independent of the underlying RCMs' grid resolutions but not of the overall size of the RCM domain. With regard to hydrological modeling applications, these findings indicate...

  7. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    International Nuclear Information System (INIS)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-01-01

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will

  8. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-04-19

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will

  9. On Spatial Resolution in Habitat Models: Can Small-scale Forest Structure Explain Capercaillie Numbers?

    Directory of Open Access Journals (Sweden)

    Ilse Storch

    2002-06-01

    This paper explores the effects of spatial resolution on the performance and applicability of habitat models in wildlife management and conservation. A Habitat Suitability Index (HSI) model for the Capercaillie (Tetrao urogallus) in the Bavarian Alps, Germany, is presented. The model was exclusively built on non-spatial, small-scale variables of forest structure, without any consideration of landscape patterns. The main goal was to assess whether an HSI model developed from small-scale habitat preferences can explain differences in population abundance at larger scales. To validate the model, habitat variables and indirect sign of Capercaillie use (such as feathers or feces) were mapped in six study areas, based on a total of 2901 sample plots of 20 m radius (for habitat variables) and 5 m radius (for Capercaillie sign). First, the model's representation of Capercaillie habitat preferences was assessed. Habitat selection, as expressed by Ivlev's electivity index, was closely related to HSI scores, increased from poor to excellent habitat suitability, and was consistent across all study areas. Then, habitat use was related to HSI scores at different spatial scales. Capercaillie use was best predicted from HSI scores at the small scale. Lowering the spatial resolution of the model stepwise to 36-ha, 100-ha, 400-ha, and 2000-ha areas and relating Capercaillie use to aggregated HSI scores resulted in a deterioration of fit at larger scales. Most importantly, there were pronounced differences in Capercaillie abundance at the scale of study areas, which could not be explained by the HSI model. The results illustrate that even if a habitat model correctly reflects a species' smaller scale habitat preferences, its potential to predict population abundance at larger scales may remain limited.

  10. Design of scaled down structural models

    Science.gov (United States)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.
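
    Under complete similarity, the conditions described above reduce to simple ratios of model and prototype parameters. As a hedged illustration, assuming the classical thin-plate buckling result N_cr = k·π²·D/b² with bending stiffness D = E·t³/(12(1−ν²)) and equal Poisson ratios, the prototype-to-model critical-load ratio follows directly from the length, thickness, and modulus ratios:

```python
def critical_load_ratio(width_ratio, thickness_ratio, modulus_ratio=1.0):
    """Prototype-to-model ratio of the plate buckling load per unit width,
    from N_cr = k * pi**2 * D / b**2 with D = E t**3 / (12 (1 - nu**2)).
    Each argument is a prototype value divided by the corresponding model
    value; equal Poisson ratios are assumed, so nu cancels."""
    return modulus_ratio * thickness_ratio ** 3 / width_ratio ** 2

# A prototype twice as wide and twice as thick as the model, same material:
print(critical_load_ratio(2.0, 2.0))  # → 2.0
```

    Distorted models, where not all ratios can be honored simultaneously, would replace this single factor with a fitted correction, as the presentation's discussion of partial similarity suggests.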

  11. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    Science.gov (United States)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
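
    The threshold-scaling problem described above can be made concrete with the standard GPD return-level formula plus a simple-scaling transfer of the scale parameter across durations. The scaling exponent beta and all numbers below are illustrative assumptions, not the fitted Berlin or Bangalore values:

```python
def gpd_return_level(u, sigma, xi, zeta_u, T, n_per_year):
    """T-year return level for peaks-over-threshold data: threshold u,
    GPD scale sigma and shape xi (xi != 0), threshold-exceedance
    probability zeta_u, and n_per_year observations per year."""
    m = T * n_per_year
    return u + (sigma / xi) * ((m * zeta_u) ** xi - 1.0)

def scaled_sigma(sigma_ref, d, d_ref, beta):
    """Simple-scaling transfer of the GPD scale parameter from a reference
    duration d_ref to a shorter duration d, with scaling exponent beta."""
    return sigma_ref * (d / d_ref) ** beta

# Illustrative daily parameters transferred to a 1-hour duration:
sigma_1h = scaled_sigma(5.0, d=1.0, d_ref=24.0, beta=0.7)
level = gpd_return_level(u=10.0, sigma=sigma_1h, xi=0.1, zeta_u=0.05, T=10, n_per_year=365)
```

    In the study itself the threshold u also varies with duration (via quantiles of non-zero precipitation) and the parameters carry Bayesian uncertainty; this sketch shows only the deterministic skeleton of the disaggregation.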

  12. Fractionally Integrated Flux model and Scaling Laws in Weather and Climate

    Science.gov (United States)

    Schertzer, Daniel; Lovejoy, Shaun

    2013-04-01

    The Fractionally Integrated Flux (FIF) model has been extensively used to model intermittent observables, like the velocity field, by defining them with the help of a fractional integration of a conservative (i.e. strictly scale invariant) flux, such as the turbulent energy flux. It corresponds to a well-defined model that yields the observed scaling laws. Generalised Scale Invariance (GSI) enables FIF to deal with anisotropic fractional integrations and has been rather successful in defining and modelling a unique regime of scaling anisotropic turbulence up to planetary scales. This turbulence has an effective dimension of 23/9=2.55... instead of the classically hypothesised 2D and 3D turbulent regimes, for large and small spatial scales respectively. It therefore theoretically eliminates an implausible "dimension transition" between these two regimes and the resulting requirement of a turbulent energy "mesoscale gap", whose empirical evidence has been brought more and more into question. More recently, GSI-FIF was used to analyse climate, therefore at much larger time scales. Indeed, the 23/9-dimensional regime necessarily breaks up at the outer spatial scales. The corresponding transition range, which can be called "macroweather", seems to have many interesting properties; e.g., it rather corresponds to a fractional differentiation in time with a roughly flat frequency spectrum. Furthermore, this transition opens the possibility of scaling space-time climate fluctuations at much larger time scales, with a much stronger scaling anisotropy between time and space. Lovejoy, S. and D. Schertzer (2013). The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge Press (in press). Schertzer, D. et al. (1997). Fractals 5(3): 427-471. Schertzer, D. and S. Lovejoy (2011). International Journal of Bifurcation and Chaos 21(12): 3417-3456.

  13. Downscaling modelling system for multi-scale air quality forecasting

    Science.gov (United States)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations generally considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside the modelling domain affect micro-scale processes. It is therefore important to build a chain of models of different scales, with higher-resolution models nested into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. First, the Numerical Weather Prediction model (High Resolution Limited Area Model) is combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building-effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. the k-ɛ linear eddy-viscosity model, the k-ɛ non-linear eddy-viscosity model and the Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding mass-conserving interpolation. For the boundaries a

  14. Protecting the larger fish: an ecological, economical and evolutionary analysis using a demographic model

    DEFF Research Database (Denmark)

    Verdiell, Nuria Calduch

    . Recently, there is increasing evidence that this size-selective fishing reduces the chances of maintaining populations at levels sufficient to produce maximum sustainable yields, the chances of recovery/rebuilding populations that have been depleted/collapsed and may cause rapid evolutionary changes...... and the consequent changes in yield. We attempt to evaluate the capability of the larger fish to mitigate the evolutionary change on life-history traits caused by fishing, while also maintaining a sustainable annual yield. This is achieved by calculating the expected selection response on three life-history traits......Many marine fish stocks are reported as overfished on a global scale. This overfishing not only removes fish biomass, but also causes dramatic changes in the age and size structure of fish stocks. In particular, targeting of the larger individuals truncates the age and size structure of stocks...

  15. Spatiotemporal exploratory models for broad-scale survey data.

    Science.gov (United States)

    Fink, Daniel; Hochachka, Wesley M; Zuckerberg, Benjamin; Winkler, David W; Shaby, Ben; Munson, M Arthur; Hooker, Giles; Riedewald, Mirek; Sheldon, Daniel; Kelling, Steve

    2010-12-01

    The distributions of animal populations change and evolve through time. Migratory species exploit different habitats at different times of the year. Biotic and abiotic features that determine where a species lives vary due to natural and anthropogenic factors. This spatiotemporal variation needs to be accounted for in any modeling of species' distributions. In this paper we introduce a semiparametric model that provides a flexible framework for analyzing dynamic patterns of species occurrence and abundance from broad-scale survey data. The spatiotemporal exploratory model (STEM) adds essential spatiotemporal structure to existing techniques for developing species distribution models through a simple parametric structure without requiring a detailed understanding of the underlying dynamic processes. STEMs use a multi-scale strategy to differentiate between local and global-scale spatiotemporal structure. A user-specified species distribution model accounts for spatial and temporal patterning at the local level. These local patterns are then allowed to "scale up" via ensemble averaging to larger scales. This makes STEMs especially well suited for exploring distributional dynamics arising from a variety of processes. Using data from eBird, an online citizen science bird-monitoring project, we demonstrate that monthly changes in distribution of a migratory species, the Tree Swallow (Tachycineta bicolor), can be more accurately described with a STEM than a conventional bagged decision tree model in which spatiotemporal structure has not been imposed. We also demonstrate that there is no loss of model predictive power when a STEM is used to describe a spatiotemporal distribution with very little spatiotemporal variation: the distribution of a nonmigratory species, the Northern Cardinal (Cardinalis cardinalis).
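
    The "scale up via ensemble averaging" step can be sketched with a toy version of the mixture: fit a trivial local model (here, just the block mean) on each of several randomly offset spatial partitions, then average the per-partition predictions at a query point. Everything below — the block size, the local model, the data layout — is a simplification for illustration, not the eBird pipeline:

```python
import random

def stem_predict(train, query, n_partitions=20, block=10.0, seed=0):
    """Toy STEM-style mixture: for each of several randomly offset square
    partitions, fit a trivial local model (the mean of training values in
    the query's grid cell) and average the per-partition predictions.
    train is a list of (x, y, value) tuples; query is an (x, y) pair."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_partitions):
        ox, oy = rng.uniform(0, block), rng.uniform(0, block)

        def cell(x, y):
            return (int((x + ox) // block), int((y + oy) // block))

        qcell = cell(*query)
        vals = [v for (x, y, v) in train if cell(x, y) == qcell]
        if vals:  # skip partitions whose query cell holds no training data
            preds.append(sum(vals) / len(vals))
    return sum(preds) / len(preds) if preds else None
```

    Averaging over randomly offset partitions smooths the blocky boundaries of any single partition, which is the point of the ensemble step; a real STEM replaces the block mean with a user-specified species distribution model per block.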

  16. Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics

    Science.gov (United States)

    Marcé, R.; Armengol, J.

    2009-07-01

    One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a deterministic, watershed-scale biogeochemical model. Once the model was calibrated, fitted phosphorus retention metrics were put in context of global patterns of phosphorus retention variability. For this purpose, we calculated power regressions between phosphorus retention metrics, streamflow, and phosphorus concentration in water using published data from 66 streams worldwide, including both pristine and nutrient enriched streams. Performance of the calibrated model confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Thus, this approach may be helpful even for customary deterministic applications working at short time steps. The calibrated phosphorus retention metrics were comparable to field estimates from the study watershed, and showed high coherence with global patterns of retention metrics from streams of the world. In this sense, the fitted phosphorus retention metrics were similar to field values measured in other nutrient enriched streams.
Analysis of the bibliographical data supports the view that nutrient enriched streams have lower phosphorus retention efficiency than pristine streams, and that this efficiency loss is maintained in a wide
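
    The power regressions between retention metrics, streamflow, and concentration mentioned above are ordinary least squares fits in log-log space. A minimal sketch (synthetic data, not the 66-stream compilation):

```python
import math

def power_fit(x, y):
    """Fit y = a * x**b by ordinary least squares on log-transformed data."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Recover a known power law from exact synthetic data:
xs = [1.0, 2.0, 4.0, 8.0]
a, b = power_fit(xs, [2.0 * v ** 1.5 for v in xs])  # a ≈ 2.0, b ≈ 1.5
```

    Fitting in log space assumes multiplicative (lognormal) errors, which is the usual convention for spiralling-metric regressions.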

  17. Lecture archiving on a larger scale at the University of Michigan and CERN

    Energy Technology Data Exchange (ETDEWEB)

    Herr, Jeremy; Lougheed, Robert; Neal, Homer A, E-mail: herrj@umich.ed [University of Michigan, 450 Church St., Ann Arbor, MI 48109 (United States)

    2010-04-01

    The ATLAS Collaboratory Project at the University of Michigan has been a leader in the area of collaborative tools since 1999. Its activities include the development of standards, software and hardware tools for lecture archiving, and making recommendations for videoconferencing and remote teaching facilities. Starting in 2006 our group became involved in classroom recordings, and in early 2008 we spawned CARMA, a University-wide recording service. This service uses a new portable recording system that we developed. Capture, archiving and dissemination of rich multimedia content from lectures, tutorials and classes are increasingly widespread activities among universities and research institutes. A growing array of related commercial and open source technologies is becoming available, with several new products introduced in the last couple of years. As the result of a new close partnership between U-M and CERN IT, a market survey of these products was conducted and a summary of the results is presented here. It is informing an ambitious effort in 2009 to equip many CERN rooms with automated lecture archiving systems, on a much larger scale than before. This new technology is being integrated with CERN's existing webcast, CDS, and Indico applications.

  18. Lecture archiving on a larger scale at the University of Michigan and CERN

    International Nuclear Information System (INIS)

    Herr, Jeremy; Lougheed, Robert; Neal, Homer A

    2010-01-01

    The ATLAS Collaboratory Project at the University of Michigan has been a leader in the area of collaborative tools since 1999. Its activities include the development of standards, software and hardware tools for lecture archiving, and making recommendations for videoconferencing and remote teaching facilities. Starting in 2006 our group became involved in classroom recordings, and in early 2008 we spawned CARMA, a University-wide recording service. This service uses a new portable recording system that we developed. Capture, archiving and dissemination of rich multimedia content from lectures, tutorials and classes are increasingly widespread activities among universities and research institutes. A growing array of related commercial and open source technologies is becoming available, with several new products introduced in the last couple of years. As the result of a new close partnership between U-M and CERN IT, a market survey of these products was conducted and a summary of the results is presented here. It is informing an ambitious effort in 2009 to equip many CERN rooms with automated lecture archiving systems, on a much larger scale than before. This new technology is being integrated with CERN's existing webcast, CDS, and Indico applications.

  19. Economic trends of tokamak power plants independent of physics scaling models

    International Nuclear Information System (INIS)

    Reid, R.L.; Steiner, D.

    1978-01-01

    This study examines the effects of plasma radius, field on axis, plasma impurity level, and aspect ratio on power level and unit capital cost, $/kW(e), of tokamak power plants sized independent of plasma physics scaling models. It is noted that tokamaks sized in this manner are thermally unstable based on trapped particle scaling relationships. It is observed that there is an economic advantage for larger power level tokamaks achieved by physics independent sizing; however, the incentive for increased power levels is less than that for fission reactors. It is further observed that the economic advantage of these larger power level tokamaks is decreased when plasma thermal stability measures are incorporated, such as by increasing the plasma impurity concentration. This trend of economy with size obtained by physics independent sizing is opposite to that observed when the tokamak designs are constrained to obey the trapped particle and empirical scaling relationships

  20. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  1. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  2. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that both grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
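The grid-scale/subgrid split described above can be sketched with a simple block-averaging decomposition. This is a generic 1-D stand-in, not the GSI/WRF machinery: block means play the role of the grid-scale component, and departures within each block play the role of the subgrid-scale component.

```python
import numpy as np

# Illustrative decomposition of a fine-resolution field into a grid-scale
# (block-mean) part and a subgrid-scale residual.

def decompose(fine, block):
    """Return (grid-scale block means, subgrid residuals) for a 1-D field."""
    n = fine.size // block
    trimmed = fine[:n * block]
    means = trimmed.reshape(n, block).mean(axis=1)
    residual = trimmed - np.repeat(means, block)
    return means, residual

fine = np.sin(np.linspace(0.0, 10.0, 400))    # stand-in for a 2 km analysis field
means, residual = decompose(fine, block=100)  # 100 points ~ one coarse grid box

# by construction the subgrid residual averages to zero within each box
print(np.allclose(residual.reshape(4, 100).mean(axis=1), 0.0))
```

The coarser the block, the more variance lands in the residual, which mirrors why the subgrid dynamic component grows in importance at larger grid scales.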

  3. Nudging technique for scale bridging in air quality/climate atmospheric composition modelling

    Directory of Open Access Journals (Sweden)

    A. Maurizi

    2012-04-01

    The interaction between air quality and climate involves dynamical scales that cover a very wide range. Bridging these scales in numerical simulations is fundamental in studies devoted to megacity/hot-spot impacts on larger scales. A technique based on nudging is proposed as a bridging method that can couple different models at different scales.

    Here, nudging is used to force low-resolution chemical composition models with a run of a high-resolution model over a critical area. A one-year numerical experiment focused on the Po Valley hot spot is performed using the BOLCHEM model to assess the method.

    The results show that the model response is stable to the perturbation induced by the nudging and that, taking the high-resolution run as a reference, the performance of the nudged run improves with respect to the non-forced run. The effect outside the forcing area depends on transport and is significant in a relevant number of events, although it becomes weak on a seasonal or yearly basis.
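The nudging idea itself is just Newtonian relaxation toward the high-resolution reference inside the forcing area. A minimal sketch, with illustrative names and values that are not taken from BOLCHEM:

```python
import numpy as np

# Newtonian relaxation ("nudging"): inside a mask, relax the coarse field
# toward a high-resolution reference with relaxation time scale tau.

def nudge(coarse, reference, mask, dt, tau):
    """One forward-Euler step of dc/dt = (reference - c) / tau inside mask."""
    tendency = np.where(mask, (reference - coarse) / tau, 0.0)
    return coarse + dt * tendency

# toy 1-D domain: the nudging (forcing) region is the middle third
n = 30
coarse = np.zeros(n)
reference = np.ones(n)          # stand-in for the high-resolution run
mask = np.zeros(n, dtype=bool)
mask[10:20] = True

for _ in range(100):
    coarse = nudge(coarse, reference, mask, dt=60.0, tau=3600.0)

# the masked region has relaxed toward the reference; outside it is untouched
print(coarse[15] > 0.5, coarse[0] == 0.0)
```

In a real model the tendency would be added to the dynamical and chemical tendencies, so the forced run stays dynamically consistent while being drawn toward the fine-scale solution.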

  4. A high-resolution global-scale groundwater model

    Science.gov (United States)

    de Graaf, I. E. M.; Sutanudjaja, E. H.; van Beek, L. P. H.; Bierkens, M. F. P.

    2015-02-01

    Groundwater is the world's largest accessible source of fresh water. It plays a vital role in satisfying basic needs for drinking water, agriculture and industrial activities. During times of drought groundwater sustains baseflow to rivers and wetlands, thereby supporting ecosystems. Most global-scale hydrological models (GHMs) do not include a groundwater flow component, mainly due to a lack of geohydrological data at the global scale. For the simulation of lateral flow and groundwater head dynamics, a realistic physical representation of the groundwater system is needed, especially for GHMs that run at finer resolutions. In this study we present a global-scale groundwater model (run at 6' resolution) using MODFLOW to construct an equilibrium water table in its natural state as the result of long-term climatic forcing. The aquifer schematization and properties used are based on available global data sets of lithology and transmissivities combined with the estimated thickness of an upper, unconfined aquifer. This model is forced with outputs from the land-surface PCRaster Global Water Balance (PCR-GLOBWB) model, specifically net recharge and surface water levels. A sensitivity analysis, in which the model was run with various parameter settings, showed that variation in saturated conductivity has the largest impact on the simulated groundwater levels. Validation with observed groundwater heads showed that groundwater heads are reasonably well simulated for many regions of the world, especially for sediment basins (R2 = 0.95). The simulated regional-scale groundwater patterns and flow paths demonstrate the relevance of lateral groundwater flow in GHMs. Inter-basin groundwater flows can be a significant part of a basin's water budget and help to sustain river baseflows, especially during droughts. Also, the water availability of larger aquifer systems can be positively affected by additional recharge from inter-basin groundwater flows.
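The equilibrium-water-table idea can be illustrated with a toy 1-D steady-state calculation (illustrative parameters only, not the MODFLOW/PCR-GLOBWB configuration): uniform recharge between two fixed surface-water stages produces a groundwater mound, here obtained by simple Jacobi iteration on the discrete Poisson equation d/dx(T dh/dx) = -R.

```python
import numpy as np

# Toy equilibrium water table: 1-D aquifer, fixed heads at both ends
# (stand-ins for river stages), uniform recharge R, transmissivity T.

n, dx = 51, 1000.0          # 51 nodes, 1 km spacing
T = 500.0                   # transmissivity, m^2/day (illustrative)
R = 0.001                   # net recharge, m/day (illustrative)
h = np.zeros(n)
h[0], h[-1] = 10.0, 10.0    # fixed river stages at the boundaries

for _ in range(20000):      # Jacobi iteration toward equilibrium
    h[1:-1] = 0.5 * (h[:-2] + h[2:]) + R * dx**2 / (2.0 * T)

# recharge mounds the water table between the two rivers, highest mid-way
print(h.tolist().index(max(h)) == n // 2)
```

This is the 1-D analogue of what MODFLOW does cell by cell in 2-D/3-D; the converged profile matches the analytic parabola h(x) = h0 + R x (L - x) / (2T).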

  5. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    Science.gov (United States)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km² in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  6. A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation

    Science.gov (United States)

    Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.

    2016-12-01

    Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.

  7. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolated mode from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A red line in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. 
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) single-column modelling on climate time-scales.

  8. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kuhn, William L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rector, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  9. What spatial scales are believable for climate model projections of sea surface temperature?

    Science.gov (United States)

    Kwiatkowski, Lester; Halloran, Paul R.; Mumby, Peter J.; Stephenson, David B.

    2014-09-01

    Earth system models (ESMs) provide high resolution simulations of variables such as sea surface temperature (SST) that are often used in off-line biological impact models. Coral reef modellers have used such model outputs extensively to project both regional and global changes to coral growth and bleaching frequency. We assess model skill at capturing sub-regional climatologies and patterns of historical warming. This study uses an established wavelet-based spatial comparison technique to assess the skill of the Coupled Model Intercomparison Project phase 5 models at capturing spatial SST patterns in coral regions. We show that models typically have medium to high skill at capturing climatological spatial patterns of SSTs within key coral regions, with model skill typically improving at larger spatial scales (≥4°). However, models have much lower skill at modelling historical warming patterns and are shown to often perform no better than chance at regional scales (e.g. Southeast Asia) and worse than chance at finer scales. This has implications for projections of coral bleaching frequency and other marine processes linked to SST warming.

  10. Characteristics of the Residual Stress tensor when filter width is larger than the Ozmidov scale

    Science.gov (United States)

    de Bragança Alves, Felipe Augusto; de Bruyn Kops, Stephen

    2017-11-01

    In stratified turbulence, the residual stress tensor is statistically anisotropic unless the smallest resolved length scale is smaller than the Ozmidov scale and the buoyancy Reynolds number is sufficiently high for there to exist a range of scales that is statistically isotropic. We present approximations to the residual stress tensor that are derived analytically. These approximations are evaluated by filtering data from direct numerical simulations of homogeneous stratified turbulence, with unity Prandtl number, resolved on up to 8192 × 8192 × 4096 grid points, along with an isotropic homogeneous case resolved on 8192³ grid points. It is found that the best possible scaling of the strain rate tensor yields a residual stress tensor (RST) that is less well statistically aligned with the exact RST than a randomly generated tensor. It is also found that, while a scaling of the strain rate tensor can dissipate the right amount of energy, it produces incorrect anisotropic dissipation, removing energy from the wrong components of the velocity vector. We find that a combination of the strain rate tensor and a tensor related to energy redistribution caused by a Newtonian fluid viscous stress yields an excellent tensorial basis for modelling the RST.
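The notion of statistical alignment between a modelled and an exact tensor can be scored with a normalized Frobenius inner product (1 = perfectly aligned, 0 = orthogonal). This is a generic diagnostic sketch on random symmetric matrices, not the paper's exact metric or DNS data:

```python
import numpy as np

# Alignment between two tensors via the normalized Frobenius inner product.

def frobenius_alignment(a, b):
    """Cosine of the 'angle' between tensors a and b in Frobenius norm."""
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 3))
exact = x + x.T                       # a symmetric "exact" tensor
model = 0.5 * exact                   # any positive scaling is perfectly aligned
noise = rng.standard_normal((3, 3))
random_tensor = noise + noise.T       # an unrelated symmetric tensor

print(np.isclose(frobenius_alignment(exact, model), 1.0))
print(abs(frobenius_alignment(exact, random_tensor)) < 1.0)
```

A model that dissipates the right total energy can still score poorly on this kind of alignment measure, which is the distinction the abstract draws between correct dissipation and correct anisotropy.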

  11. Variations of the petrophysical properties of rocks with increasing hydrocarbons content and their implications at larger scale: insights from the Majella reservoir (Italy)

    Science.gov (United States)

    Trippetta, Fabio; Ruggieri, Roberta; Lipparini, Lorenzo

    2016-04-01

    porosity. Preliminary data also suggest a different behaviour at increasing confining pressure for clean and oil-bearing samples: almost perfectly elastic behaviour for oil-bearing samples and more inelastic behaviour for cleaner samples. Thus the HC presence appears to counteract the increase of confining pressure, acting as a semi-fluid, reducing the rock's inelastic compaction and enhancing its elastic behaviour. To upscale our rock-physics results, we started from well and laboratory data on stratigraphy, porosity and Vp in order to simulate the effect of the HC presence at larger scale, using Petrel® software. The developed synthetic model highlights that Vp, which is primarily controlled by porosity, changes significantly within oil-bearing portions, with a notable impact on the velocity model that should be adopted. Moreover, we are currently performing laboratory tests to evaluate the changes in the elastic parameters, with the aim of modelling the effects of the HC on the mechanical behaviour of the involved rocks at larger scale.

  12. Uncertainties in modelling and scaling of critical flows and pump model in TRAC-PF1/MOD1

    International Nuclear Information System (INIS)

    Rohatgi, U.S.; Yu, Wen-Shi.

    1987-01-01

    The USNRC has established a Code Scalability, Applicability and Uncertainty (CSAU) evaluation methodology to quantify the uncertainty in the prediction of safety parameters by best-estimate codes. These codes can then be applied to evaluate the Emergency Core Cooling System (ECCS). The TRAC-PF1/MOD1 version was selected as the first code to undergo the CSAU analysis for LBLOCA applications. It was established through this methodology that the break flow and pump models are among the top-ranked models in the code affecting the peak clad temperature (PCT) prediction for LBLOCA. The break flow model bias, or discrepancy, and the uncertainty were determined by modelling the test section near the break for 12 Marviken tests. It was observed that the TRAC-PF1/MOD1 code consistently underpredicts the break flow rate and that the prediction improves with increasing pipe length (larger L/D). This is true for both subcooled and two-phase critical flows. A pump model was developed from Westinghouse (1/3 scale) data. The data represent the largest available test pump relevant to Westinghouse PWRs. It was then shown through the analysis of CE and CREARE pump data that larger pumps degrade less and also that pumps degrade less at higher pressures. Since the model developed here is based on the 1/3 scale pump and on low-pressure data, it is conservative and will overpredict the degradation when applied to PWRs.

  13. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-01-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  14. Theoretical explanation of present mirror experiments and linear stability of larger scaled machines

    International Nuclear Information System (INIS)

    Berk, H.L.; Baldwin, D.E.; Cutler, T.A.; Lodestro, L.L.; Maron, N.; Pearlstein, L.D.; Rognlien, T.D.; Stewart, J.J.; Watson, D.C.

    1976-01-01

    A quasilinear model for the evolution of the 2XIIB mirror experiment is presented and shown to reproduce the time evolution of the experiment. From quasilinear theory it follows that the energy lifetime is the Spitzer electron drag time for T_e ≲ 0.1 T_i. By computing the stability boundary of the DCLC mode, with warm plasma stabilization, the electron temperature is predicted as a function of radial scale length. In addition, the effect of finite length corrections to the Alfven cyclotron mode is assessed.

  15. What is at stake in multi-scale approaches

    International Nuclear Information System (INIS)

    Jamet, Didier

    2008-01-01

    Full text of publication follows: Multi-scale approaches amount to analyzing physical phenomena at small space and time scales in order to model their effects at larger scales. This approach is very general in physics and engineering; one of the best examples of its success is certainly statistical physics, which allows one to recover classical thermodynamics and to determine the limits of application of classical thermodynamics. Getting access to small-scale information aims at reducing the models' uncertainty, but it has a cost: fine-scale models may be more complex than larger-scale models, and their resolution may require the development of specific and possibly expensive methods, numerical simulation techniques and experiments. For instance, in applications related to nuclear engineering, the application of computational fluid dynamics instead of cruder models is a formidable engineering challenge because it requires resorting to high-performance computing. Likewise, in two-phase flow modeling, the techniques of direct numerical simulation, where all the interfaces are tracked individually and where all turbulence scales are captured, are getting mature enough to be considered for averaged modeling purposes. However, resolving small-scale problems is a necessary step in a multi-scale approach, but it is not sufficient. An important modeling challenge is to determine how to treat small-scale data in order to extract relevant information for larger-scale models. For some applications, such as single-phase turbulence or transfers in porous media, this up-scaling approach is known and is now used rather routinely. However, in two-phase flow modeling, the up-scaling approach is not as mature and specific issues must be addressed that raise fundamental questions. This will be discussed and illustrated. (author)

  16. Review of ultimate pressure capacity test of containment structure and scale model design techniques

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jeong Moon; Choi, In Kil [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-03-01

    This study was performed to obtain basic knowledge of scaled model testing through a review of experimental studies conducted in foreign countries. The results of this study will be used for the wall segment test planned for next year. It was concluded from the previous studies that the larger the model, the greater the trust of the community in the obtained results. A scale of 1/4 to 1/6 is recommended as suitable, considering the characteristics of the concrete, reinforcement, liner and tendons. Such a large-scale model test requires large amounts of time and budget. For these reasons, it is concluded that a containment wall segment test combined with analytical studies is an efficient way to verify the ultimate pressure capacity of containment structures. 57 refs., 46 figs., 11 tabs. (Author)

  17. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  18. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  19. Demonstrating the value of larger ensembles in forecasting physical systems

    Directory of Open Access Journals (Sweden)

    Reason L. Machete

    2016-12-01

    its relative information content (in bits) using a proper skill score. Doubling the ensemble size is demonstrated to yield a non-trivial increase in the information content (forecast skill) for an ensemble with well over 16 members; this result holds in forecasting both a mathematical system and a physical system. Indeed, even at the largest ensemble sizes considered (128 and 256), there are lead times where the forecast information is still increasing with ensemble size. Ultimately, model error will limit the value of ever larger ensembles. No support is found, however, for limiting design studies to the sizes commonly found in seasonal and climate studies. It is suggested that ensemble size be considered more explicitly in future design studies of forecast systems on all time scales.
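The idea of measuring forecast information in bits can be sketched with a toy setup (hypothetical distributions, not the paper's systems): build an event-probability forecast from an m-member ensemble and score it with the ignorance (logarithmic) score; the expected score improves as the ensemble grows, because the estimated probability fluctuates less.

```python
import math
import random

# Toy ignorance score: forecast the event "truth > 0" from an m-member
# ensemble drawn from the same distribution as the truth. The forecast
# probability is the member fraction above 0, Laplace-smoothed so the
# log score stays finite. All distributions are illustrative.

def mean_ignorance(m, trials=10000, rng=random.Random(1)):
    """Average ignorance (bits) of event forecasts from an m-member ensemble."""
    total = 0.0
    for _ in range(trials):
        outcome = rng.gauss(0.0, 1.0) > 0.0                   # verifying truth
        hits = sum(rng.gauss(0.0, 1.0) > 0.0 for _ in range(m))
        p = (hits + 1) / (m + 2)                              # smoothed probability
        total += -math.log2(p if outcome else 1.0 - p)
    return total / trials

small, large = mean_ignorance(8), mean_ignorance(64)
print(large < small)   # the larger ensemble loses fewer bits on average
```

In this toy the true event probability is 0.5, so the large-ensemble score approaches 1 bit; the smaller ensemble pays an extra sampling-noise penalty, which is the flavor of the gain from ensemble doubling described above.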

  20. Multi-scale modeling of ductile failure in metallic alloys

    International Nuclear Information System (INIS)

    Pardoen, Th.; Scheyvaerts, F.; Simar, A.; Tekoglu, C.; Onck, P.R.

    2010-01-01

    Micro-mechanical models for ductile failure were developed in the seventies and eighties essentially to address cracking in structural applications and to complement the fracture mechanics approach. Later, this approach became attractive to physical metallurgists interested in predicting failure during forming operations and as a guide for the design of more ductile and/or higher-toughness microstructures. Nowadays, a realistic treatment of damage evolution in complex metallic microstructures is becoming feasible when sufficiently sophisticated constitutive laws are used within the context of a multilevel modelling strategy. The current understanding and the state-of-the-art models for the nucleation, growth and coalescence of voids are reviewed with a focus on the underlying physics. Considerations are made about the introduction of the different length scales associated with the microstructure and the damage process. Two applications of the methodology are then described to illustrate the potential of the current models. The first application concerns the competition between intergranular and transgranular ductile fracture in aluminum alloys involving soft precipitate-free zones along the grain boundaries. The second application concerns the modeling of ductile failure in friction stir welded joints, a problem which also involves soft and hard zones, albeit at a larger scale. (authors)
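One classical micromechanical ingredient in this literature, the Rice-Tracey void-growth relation, can be evaluated numerically. The sketch below integrates it for a constant stress triaxiality; the parameter values and the function name are illustrative choices of ours, not taken from the paper.

```python
import math

def void_radius(triaxiality, eps_end, r0=1.0, steps=1000):
    """Integrate the Rice-Tracey growth law dR/R = 0.283*exp(1.5*T)*d(eps)
    for a spherical void at constant triaxiality T = sigma_m / sigma_eq."""
    r, de = r0, eps_end / steps
    for _ in range(steps):
        r += r * 0.283 * math.exp(1.5 * triaxiality) * de
    return r

# Triaxiality levels from uniaxial tension (~0.33) up to crack-tip-like values
for T in (0.33, 1.0, 2.0):
    print(T, round(void_radius(T, 0.5), 3))
```

The exponential dependence on triaxiality is the reason void growth near notches and crack tips dominates failure, which is the physics the review builds on.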

  1. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies; only a few considered larger gravitational accelerations by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down factors and accelerated deformation driven by density differences, such as salt diapirism, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST) allows scale models with a surface area of up to 70 by 70 cm under a maximum capacity of 240 g-tons. Using the centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
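The standard centrifuge scaling relations this record relies on can be written down directly. The sketch below is a hedged illustration (the function and numbers are ours, not the KAIST facility's specification): a model spun at N times Earth gravity represents a prototype N times larger while self-weight stresses match 1:1.

```python
def prototype_equivalent(model_length_m, n_g):
    """Common geotechnical-centrifuge scale factors for a model at n_g gravities."""
    return {
        "length_m": model_length_m * n_g,   # L* = 1/N: prototype is N times larger
        "stress_ratio": 1.0,                # self-weight stress matches 1:1
        "diffusion_time_ratio": n_g**2,     # diffusion-controlled time runs N^2 faster
    }

p = prototype_equivalent(0.70, 100)  # a 70 cm model spun at 100 g
print(p)
```

At 100 g a 70 cm model thus stands in for roughly 70 m of prototype crust, which is why centrifuge modelling permits aggressive scale-down without losing stress similarity.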

  2. Workshop on Human Activity at Scale in Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Melissa R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Coletti, Mark A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kennedy, Joseph H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nair, Sujithkumar S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Omitaomu, Olufemi A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-01-26

    Changing human activity within a geographical location may have significant influence on the global climate, but that activity must be parameterized in such a way as to allow these high-resolution sub-grid processes to affect global climate within the modeling framework. Additionally, we must have tools that provide decision support and inform local and regional policies regarding mitigation of and adaptation to climate change. The development of next-generation earth system models that can produce actionable results with minimal uncertainties depends on understanding global climate change and human activity interactions at policy implementation scales. Unfortunately, at best we currently have only limited schemes for relating high-resolution sectoral emissions to real-time weather and, ultimately, to larger regions and the well-mixed atmosphere. Moreover, even our understanding of meteorological processes at these scales is imperfect. This workshop addresses these shortcomings by providing a forum for discussion of what we know about these processes, what we can model, where we have gaps in these areas, and how we can rise to the challenge of filling them.

  3. Relevance of multiple spatial scales in habitat models: A case study with amphibians and grasshoppers

    Science.gov (United States)

    Altmoos, Michael; Henle, Klaus

    2010-11-01

    Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study of three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only, whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference for single habitat variables changed across nested scales. Most environmental variables were significant for a habitat model on only one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant that cannot be recognized on smaller scales. Similar preferences across scales occurred in only 13 of 79 cases, and in 3 of 79 cases the preference and avoidance of the same variable were even reversed between scales. Second, we developed habitat models by using logistic regression on every scale and for all combinations of scales and analysed how the quality of the habitat models changed with the scales considered. To achieve sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog (Hyla arborea) were at least three scales required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scale habitat models, we suggest that two or three scales, including a suitably defined microscale, should be a suitable trade-off.
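A hedged sketch of the modelling step described above, with synthetic data standing in for the field surveys (all variable names and effect sizes are invented): presence/absence modelled by logistic regression with a predictor at one scale versus predictors at two scales, compared by AIC.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, steps=5000, lr=0.1):
    """Plain gradient-ascent logistic regression; returns coefficients
    and the maximised log-likelihood."""
    X1 = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w += lr * X1.T @ (y - p) / len(y)
    p = np.clip(1.0 / (1.0 + np.exp(-X1 @ w)), 1e-12, 1 - 1e-12)
    ll = np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
    return w, ll

def aic(ll, k):
    return 2 * k - 2 * ll

# Synthetic presence/absence data driven by variables at two scales
micro = rng.standard_normal(400)       # e.g. local vegetation cover
landscape = rng.standard_normal(400)   # e.g. forest share within 1 km
logit = 0.8 * micro + 0.6 * landscape
y = (rng.random(400) < 1 / (1 + np.exp(-logit))).astype(float)

_, ll_micro = fit_logistic(micro[:, None], y)
_, ll_both = fit_logistic(np.column_stack([micro, landscape]), y)
print(aic(ll_micro, 2), aic(ll_both, 3))  # the two-scale model should win
```

When the data truly contain a landscape-scale effect, the two-scale model earns its extra parameter, which is the trade-off the abstract quantifies against survey cost.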

  4. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and the correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
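The filtering step can be illustrated on a toy field: transform a density field to Fourier space, multiply by a scale-dependent bias, and transform back to get a spatially varying reionization-redshift field. The bias form and all parameter values below are placeholders for illustration, not the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n, box = 128, 100.0                      # cells, box size in Mpc/h
delta = rng.standard_normal((n, n))      # stand-in for a smoothed density field
kx = np.fft.fftfreq(n, d=box / n) * 2 * np.pi
k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)

def bias(k, b0=0.6, k0=0.2, alpha=2.0):
    """Illustrative scale-dependent linear bias: strong on large scales,
    suppressed on small scales."""
    return b0 / (1.0 + k / k0) ** alpha

dk = np.fft.fft2(delta)
delta_z = np.fft.ifft2(bias(k) * dk).real   # fluctuation in reionization redshift
z_mean = 8.0                                 # assumed mean reionization redshift
z_reion = z_mean * (1.0 + delta_z)
print(z_reion.shape, round(float(z_reion.mean()), 2))
```

Because the bias falls off at high k, the derived redshift field varies only on large scales, mirroring the paper's point that overdense large-scale regions reionize earlier.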

  5. Examining the Variability of Sleep Patterns during Treatment for Chronic Insomnia: Application of a Location-Scale Mixed Model.

    Science.gov (United States)

    Ong, Jason C; Hedeker, Donald; Wyatt, James K; Manber, Rachel

    2016-06-15

    The purpose of this study was to introduce a novel statistical technique called the location-scale mixed model that can be used to analyze the mean level and intra-individual variability (IIV) of longitudinal sleep data. We applied the location-scale mixed model to examine changes from baseline in sleep efficiency in data collected from 54 participants with chronic insomnia who were randomized to 8 weeks of Mindfulness-Based Stress Reduction (MBSR; n = 19), 8 weeks of Mindfulness-Based Therapy for Insomnia (MBTI; n = 19), or an 8-week self-monitoring control (SM; n = 16). Sleep efficiency was derived from daily sleep diaries collected at baseline (days 1-7), early treatment (days 8-21), late treatment (days 22-63), and post week (days 64-70). The behavioral components (sleep restriction, stimulus control) were delivered during late treatment in MBTI. For MBSR and MBTI, the pre-to-post change in mean levels of sleep efficiency was significantly larger than the change in mean levels for the SM control, but the change in IIV was not significantly different. During early and late treatment, MBSR showed a larger increase in mean levels of sleep efficiency and a larger decrease in IIV relative to the SM control. At late treatment, MBTI had a larger increase in the mean level of sleep efficiency compared to SM, but the IIV was not significantly different. The location-scale mixed model provides a two-dimensional analysis of the mean and IIV using longitudinal sleep diary data, with the potential to reveal insights into treatment mechanisms and outcomes. © 2016 American Academy of Sleep Medicine.
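A naive stand-in for the two quantities the location-scale mixed model estimates jointly (this is not the authors' estimation code; the distributions and group parameters below are invented): per-subject mean level and intra-individual variability (IIV) of simulated sleep-efficiency diaries.

```python
import random
import statistics as st

random.seed(3)

def simulate_subject(mean_se, iiv_sd, days=14):
    """Daily sleep-efficiency diary for one subject: a subject-level mean
    plus day-to-day (within-subject) noise whose SD is the IIV."""
    return [random.gauss(mean_se, iiv_sd) for _ in range(days)]

# Invented scenario: treatment raises the mean and shrinks IIV; control does not.
treated = [simulate_subject(88, 3) for _ in range(19)]
control = [simulate_subject(80, 7) for _ in range(16)]

def summarize(group):
    """Group-average mean level and group-average IIV (within-subject SD)."""
    means = [st.mean(s) for s in group]
    iivs = [st.stdev(s) for s in group]
    return st.mean(means), st.mean(iivs)

print("treated mean/IIV:", summarize(treated))
print("control mean/IIV:", summarize(control))
```

The mixed model goes beyond these raw summaries by putting regression structure on both the location (mean) and the scale (log within-subject variance) simultaneously, with random effects on each.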

  6. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

    A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern the motion of dislocations and their interaction with various defects and interfaces. Particularly, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear bands.

  7. Large-scale model-based assessment of deer-vehicle collision risk.

    Directory of Open Access Journals (Sweden)

    Torsten Hothorn

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high effort and cost associated with attempts to estimate population sizes of free-living ungulates in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated with browsing intensity and with harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale.
The measures derived from our model provide valuable information for planning road protection and defining
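The core count-model idea, relating municipality-level collision counts to a covariate such as browsing intensity with an exposure offset, can be sketched with a plain Poisson log-linear regression fitted by Newton-Raphson. The authors' actual model is a more flexible structured additive model; the data and coefficients here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 300
browsing = rng.uniform(0, 1, n)          # stand-in browsing intensity per municipality
road_km = rng.uniform(5, 50, n)          # exposure: kilometres of road
lam = road_km * np.exp(-1.0 + 1.5 * browsing)   # true rate per km of road
counts = rng.poisson(lam)                # observed collision counts

X = np.column_stack([np.ones(n), browsing])
offset = np.log(road_km)
beta = np.zeros(2)
for _ in range(25):                      # Newton-Raphson for the Poisson GLM
    mu = np.exp(X @ beta + offset)
    grad = X.T @ (counts - mu)
    hess = X.T @ (X * mu[:, None])
    beta += np.linalg.solve(hess, grad)

print(np.round(beta, 2))                 # should recover roughly [-1.0, 1.5]
```

The positive fitted slope on browsing intensity mirrors the correlation the study reports; the offset keeps the comparison fair between municipalities with different road exposure.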

  8. Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS).

    Science.gov (United States)

    Shea, Tracey L; Tennant, Alan; Pallant, Julie F

    2009-05-09

    There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches, it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. The DASS-21 was administered to 420 participants, with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.

  9. Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS)

    Science.gov (United States)

    Shea, Tracey L; Tennant, Alan; Pallant, Julie F

    2009-01-01

    Background There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches, it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. Methods The DASS-21 was administered to 420 participants, with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. Results To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. Conclusion The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study. PMID:19426512

  10. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  11. Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models.

    Directory of Open Access Journals (Sweden)

    Ryan C Williamson

    2016-12-01

    Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials, or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction, shared dimensionality and percent shared variance, with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure.
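A hedged sketch of the scaling question, using covariance eigenvalues (a PCA-style stand-in, not the paper's factor analysis) on simulated activity with a known low-dimensional shared structure: how many shared latent dimensions are recovered as the number of sampled neurons grows? The simulation and threshold rule are ours.

```python
import numpy as np

rng = np.random.default_rng(5)

def shared_dims(n_neurons, n_latent=5, n_trials=1000, thresh=2.0):
    """Simulate n_neurons driven by n_latent shared factors plus private
    noise, then count covariance eigenvalues clearly above the noise floor."""
    L = rng.standard_normal((n_neurons, n_latent))        # loading matrix
    z = rng.standard_normal((n_trials, n_latent))         # shared latents
    x = z @ L.T + 0.5 * rng.standard_normal((n_trials, n_neurons))
    evals = np.linalg.eigvalsh(np.cov(x.T))
    return int(np.sum(evals > thresh * np.median(evals)))

for n in (10, 20, 40, 80):
    print(n, shared_dims(n))
```

With few neurons the weakest shared mode can be missed; with more neurons all five planted dimensions separate cleanly from the private-noise eigenvalues, echoing the abstract's finding that tens of neurons suffice for the dominant modes.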

  12. Change Analysis and Decision Tree Based Detection Model for Residential Objects across Multiple Scales

    Directory of Open Access Journals (Sweden)

    CHEN Liyan

    2018-03-01

    Change analysis and detection play an important role in updating multi-scale databases. When an updated larger-scale dataset is overlaid with a to-be-updated smaller-scale dataset, attention usually focuses on temporal changes caused by the evolution of spatial entities; little attention is paid to representation changes introduced by map generalization. Using polygonal building data as an example, this study examines such changes from different perspectives, including the reasons for their occurrence and the forms they take. Based on this knowledge, we employ a decision tree from machine learning to establish a change detection model. The aim of the proposed model is to distinguish temporal changes, which need to be applied as updates to the smaller-scale dataset, from representation changes. The proposed method is validated using real-world building data from Guangzhou. The experimental results show an overall change detection precision of more than 90%, indicating that our method is effective at identifying changed objects.
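The decision-tree idea can be illustrated with a hand-rolled single-split stump on one invented feature; the paper's model uses a full tree over several change features, and the feature name and distributions below are our assumptions.

```python
import random

random.seed(6)

def best_stump(xs, ys):
    """Find the single threshold on one feature minimising the
    misclassification rate (the root split a decision tree would learn)."""
    best = (None, 1.0)
    for t in sorted(set(xs)):
        err = sum((x > t) != y for x, y in zip(xs, ys)) / len(xs)
        err = min(err, 1 - err)               # allow either split polarity
        if err < best[1]:
            best = (t, err)
    return best

# Synthetic training data: generalization shifts a building's area only
# mildly; temporal change (demolition / new construction) shifts it strongly.
area_ratio = [abs(random.gauss(0.05, 0.03)) for _ in range(200)] + \
             [abs(random.gauss(0.60, 0.20)) for _ in range(200)]
is_temporal = [False] * 200 + [True] * 200
threshold, err = best_stump(area_ratio, is_temporal)
print(round(threshold, 2), round(err, 3))
```

A real tree would recurse on further features (shape similarity, displacement, topology) exactly where a single threshold like this one leaves residual confusion.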

  13. A general model for the scaling of offspring size and adult size.

    Science.gov (United States)

    Falster, Daniel S; Moles, Angela T; Westoby, Mark

    2008-09-01

    Understanding evolutionary coordination among different life-history traits is a key challenge for ecology and evolution. Here we develop a general quantitative model predicting how offspring size should scale with adult size by combining a simple model for life-history evolution with a frequency-dependent survivorship model. The key innovation is that larger offspring are afforded three different advantages during ontogeny: higher survivorship per unit time, a shortened juvenile phase, and an advantage during size-competitive growth. In this model, it turns out that size-asymmetric advantage during competition is the factor driving evolution toward larger offspring sizes. For simplified and limiting cases, the model is shown to produce the same predictions as the previously existing theory on which it is founded. The explicit treatment of different survival advantages has biologically important new effects, mainly through an interaction between total maternal investment in reproduction and the duration of competitive growth. This in turn explains the alternative allometries between log offspring size and log adult size observed in mammals (slope = 0.95) and plants (slope = 0.54), and suggests how these differences relate quantitatively to specific biological processes during recruitment. In these ways, the model generalizes across previous theory and provides explanations for some differences between major taxa.
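The cited allometries are straight lines on log-log axes. A short sketch, with synthetic data generated under an assumed power law (the exponent, noise level, and size ranges are invented), shows how such a slope is recovered by ordinary least squares:

```python
import math
import random

random.seed(7)

def loglog_slope(adult_sizes, offspring_sizes):
    """OLS slope of log10(offspring size) against log10(adult size)."""
    xs = [math.log10(a) for a in adult_sizes]
    ys = [math.log10(o) for o in offspring_sizes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic "mammal-like" data: offspring ~ adult^0.95 with lognormal scatter
adults = [10 ** random.uniform(0, 5) for _ in range(300)]
offspring = [0.01 * a ** 0.95 * 10 ** random.gauss(0, 0.1) for a in adults]
print(round(loglog_slope(adults, offspring), 2))   # close to the planted 0.95
```

The model in the abstract predicts which biological ingredients (e.g. the size-asymmetry of competition) push this slope toward 0.95 versus 0.54.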

  14. Modeling of micro-scale thermoacoustics

    Energy Technology Data Exchange (ETDEWEB)

    Offner, Avshalom [The Nancy and Stephen Grand Technion Energy Program, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Ramon, Guy Z., E-mail: ramong@technion.ac.il [Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel)

    2016-05-02

    Thermoacoustic phenomena, that is, the onset of self-sustained oscillations or time-averaged fluxes in a sound wave, may be harnessed in efficient and robust heat transfer devices. Specifically, miniaturization of such devices holds great promise for cooling of electronics. At the required small dimensions, it is expected that non-negligible slip effects exist at the solid surface of the "stack", a porous matrix used for maintaining the correct temporal phasing of the heat transfer between the solid and the oscillating gas. Here, we develop theoretical models for thermoacoustic engines and heat pumps that account for slip, within the standing-wave approximation. Stability curves for engines with both no-slip and slip boundary conditions were calculated; the slip boundary condition curve exhibits a lower temperature difference compared with the no-slip curve at resonance frequencies that characterize micro-scale devices. Maximum achievable temperature differences across the stack of a heat pump were also calculated. For this case, slip conditions are detrimental, and such a heat pump would maintain a lower temperature difference compared to larger devices, where slip effects are negligible.

  15. Modeling of micro-scale thermoacoustics

    International Nuclear Information System (INIS)

    Offner, Avshalom; Ramon, Guy Z.

    2016-01-01

    Thermoacoustic phenomena, that is, the onset of self-sustained oscillations or time-averaged fluxes in a sound wave, may be harnessed in efficient and robust heat transfer devices. Specifically, miniaturization of such devices holds great promise for cooling of electronics. At the required small dimensions, it is expected that non-negligible slip effects exist at the solid surface of the "stack", a porous matrix used for maintaining the correct temporal phasing of the heat transfer between the solid and the oscillating gas. Here, we develop theoretical models for thermoacoustic engines and heat pumps that account for slip, within the standing-wave approximation. Stability curves for engines with both no-slip and slip boundary conditions were calculated; the slip boundary condition curve exhibits a lower temperature difference compared with the no-slip curve at resonance frequencies that characterize micro-scale devices. Maximum achievable temperature differences across the stack of a heat pump were also calculated. For this case, slip conditions are detrimental, and such a heat pump would maintain a lower temperature difference compared to larger devices, where slip effects are negligible.

  16. Quantum-critical scaling of fidelity in 2D pairing models

    Energy Technology Data Exchange (ETDEWEB)

    Adamski, Mariusz, E-mail: mariusz.adamski@ift.uni.wroc.pl [Institute of Theoretical Physics, University of Wrocław, pl. Maksa Borna 9, 50–204, Wrocław (Poland); Jȩdrzejewski, Janusz [Institute of Theoretical Physics, University of Wrocław, pl. Maksa Borna 9, 50–204, Wrocław (Poland); Krokhmalskii, Taras [Institute for Condensed Matter Physics, 1 Svientsitski Street, 79011, Lviv (Ukraine)

    2017-01-15

    The laws of the quantum-critical scaling theory of quantum fidelity, dependent on the underlying system dimensionality D, have so far been verified in exactly solvable 1D models, belonging to or equivalent to interacting, quadratic (quasifree), spinless or spinful, lattice-fermion models. The obtained results are so appealing that, in the quest for correlation lengths and the associated universal critical indices ν, which characterize the divergence of correlation lengths on approaching critical points, one might be inclined to substitute the hard task of determining the asymptotic behavior at large distances of a two-point correlation function with an easier one: determining the quantum-critical scaling of the quantum fidelity. However, the role of the system's dimensionality has been left as an open problem. Our aim in this paper is to fill this gap, at least partially, by verifying the laws of the quantum-critical scaling theory of quantum fidelity in a 2D case. To this end, we study correlation functions and quantum fidelity of 2D exactly solvable models, which are interacting, quasifree, spinful, lattice-fermion models. The considered 2D models exhibit features new compared with 1D ones: at a given quantum-critical point there exists a multitude of correlation lengths and multiple universal critical indices ν, since these quantities depend on spatial directions; moreover, the indices ν may assume larger values. These facts follow from the analytical asymptotic formulae we obtain for two-point correlation functions. In these new circumstances we discuss the behavior of quantum fidelity from the perspective of quantum-critical scaling theory. In particular, we are interested in finding out to what extent the quantum fidelity approach may be an alternative to the correlation-function approach in studies of quantum-critical points beyond 1D.

  17. Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS)

    Directory of Open Access Journals (Sweden)

    Tennant Alan

    2009-05-01

    Background There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches, it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. Methods The DASS-21 was administered to 420 participants, with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. Results To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. Conclusion The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.

  18. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical processes in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale; offers engineers and designers a new point of view, liberating creative and inno...

  19. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    Science.gov (United States)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here, self-consistent multi-scale models are derived systematically by following multi-scale asymptotic methods and are used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and in providing an assessment of the upscale impact of small-scale fluctuations on the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about the multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful for understanding the scale interaction of organized tropical convection and for helping improve the parameterization of unresolved processes in global climate models.

  20. Finite element modeling of multilayered structures of fish scales.

    Science.gov (United States)

    Chandler, Mei Qiang; Allison, Paul G; Rodriguez, Rogie I; Moser, Robert D; Kennedy, Alan J

    2014-12-01

    The interlinked fish scales of Atractosteus spatula (alligator gar) and Polypterus senegalus (gray and albino bichir) are effective multilayered armor systems for protecting fish from threats such as aggressive conspecific interactions or predation. Both types of fish scales have multi-layered structures with a harder, stiffer outer layer and softer, more compliant inner layers; however, they differ in relative layer thickness, in the property mismatch between layers, and in the property gradations and nanostructures within each layer. The fracture paths and patterns of the two scale types under microindentation loads were also different. In this work, finite element models of the fish scales of A. spatula and P. senegalus were built to investigate the mechanics of their multi-layered structures under penetration loads. The models simulate a rigid microindenter penetrating the fish scales quasi-statically to explain the observed experimental results. The results indicate that the different fracture patterns and crack paths observed in the experiments arise from different stress fields, which are caused by differences in layer thickness, in the spatial distribution of elastic and plastic properties across the layers, and in interface properties. The parametric studies and experimental results suggest that smaller fish such as P. senegalus may have adopted a thinner outer layer for light-weighting and improved mobility, compensating with higher strength and modulus in the outer layer and stronger interfaces to prevent ring cracking and interface cracking. Larger fish such as A. spatula and Arapaima gigas have outer layers of lower strength and modulus and weaker interfaces, but have adopted thicker outer layers that provide adequate protection against ring cracking and interface cracking, possibly because weight is less of a concern than for smaller fish such as P. senegalus. Published by Elsevier Ltd.

  1. Dispersal, phenology and predicted abundance of the larger grain ...

    African Journals Online (AJOL)

    The phenology and dispersal of the larger grain borer (LGB) in Africa is described, and comparisons are made between predictions of LGB numbers from laboratory studies and predictions from multiple linear models derived from trapping data in the field. The models were developed in Mexico and Kenya, using ...

  2. Development of a Watershed-Scale Long-Term Hydrologic Impact Assessment Model with the Asymptotic Curve Number Regression Equation

    Directory of Open Access Journals (Sweden)

    Jichul Ryu

    2016-04-01

    In this study, 52 asymptotic Curve Number (CN) regression equations were developed for combinations of representative land covers and hydrologic soil groups. In addition, to overcome the limitations of the original Long-term Hydrologic Impact Assessment (L-THIA) model when it is applied to larger watersheds, a watershed-scale L-THIA Asymptotic CN (ACN) regression equation model (watershed-scale L-THIA ACN model) was developed by integrating the asymptotic CN regressions with various modules for direct runoff, baseflow, and channel routing. The watershed-scale L-THIA ACN model was applied to four watersheds in South Korea to evaluate the accuracy of its streamflow predictions. The coefficient of determination (R2) and Nash–Sutcliffe Efficiency (NSE) values for observed versus simulated streamflows over eight-day intervals were greater than 0.6 for all four watersheds. The watershed-scale L-THIA ACN model, including the asymptotic CN regression equation method, can simulate long-term streamflow sufficiently well with the ten parameters added to characterize streamflow.
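
The asymptotic CN idea can be sketched in a few lines. The functional form below (CN decaying exponentially toward a constant value as rainfall depth grows, combined with the standard SCS-CN runoff equation) is a common choice in the literature; the coefficients are illustrative, not the paper's fitted values.

```python
import math

def asymptotic_cn(p_mm, cn_inf, k):
    """Asymptotic CN: the fitted CN approaches cn_inf as rainfall P grows."""
    return cn_inf + (100.0 - cn_inf) * math.exp(-k * p_mm)

def scs_runoff(p_mm, cn):
    """SCS-CN direct runoff (mm) with the standard 0.2*S initial abstraction."""
    s = 25400.0 / cn - 254.0          # potential retention (mm)
    ia = 0.2 * s                      # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# The fitted CN converges toward cn_inf as the storm depth increases.
cns = [asymptotic_cn(p, cn_inf=70.0, k=0.05) for p in (10, 50, 200)]
runoff = scs_runoff(100.0, cns[-1])   # runoff for a 100 mm storm
```

The exponential form is what makes a single land-cover/soil-group combination usable across storms of very different depths.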

  3. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale … Hill-Mandel's energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain conditions. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic … to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.
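
The energy principle referred to above (Hill-Mandel macro-homogeneity) requires that the volume average of the microscopic stress power equal the macroscopic stress power; in the usual notation,

```latex
\frac{1}{V}\int_{V} \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}} \, \mathrm{d}V
= \boldsymbol{\Sigma} : \dot{\mathbf{E}},
```

where σ and ε are the microscopic stress and strain fields and Σ, E their macroscopic counterparts. This is the standard form of the condition; the thesis's exact operator definitions may differ.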

  4. Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling

    Science.gov (United States)

    Huber, I.; Archontoulis, S.

    2017-12-01

    In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far, the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge; regional-scale assessments are highly needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that gain the most environmentally from biochar application, as well as areas for which our model predicts a notable yield increase due to the addition of biochar. We present the modifications to both the APSIM biochar and pSIMS components that were necessary to facilitate these large-scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional-scale simulation analysis is in progress. Preliminary results showed that the model predicts that high-quality soils (particularly those common to Iowa cropping systems) do not receive much, if any, production benefit from biochar. However, soils with low soil organic matter (<0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific: they increase in some areas and decrease in others due to biochar application. In contrast, we found increases in soil organic carbon and plant-available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%), and was also dependent on biochar

  5. Scaling of the burning efficiency for multicomponent fuel pool fires

    DEFF Research Database (Denmark)

    van Gelderen, Laurens; Farahani, Hamed Farmahini; Rangwala, Ali S.

    In order to improve the validity of small-scale crude oil burning experiments, which seem to underestimate the burning efficiency obtained at larger scales, the gasification mechanism of crude oil was studied. Gasification models obtained from the literature were used to make a set of predictions for … an external heat source to simulate the larger fire size are currently in process.

  6. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    Science.gov (United States)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model simulation. The limited-area model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (the Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
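
Measuring a Lyapunov exponent from the divergence of perturbed trajectories can be illustrated with a toy system. The sketch below uses the chaotic logistic map as a stand-in for the quasi-geostrophic model (a deliberate simplification); renormalising the perturbed trajectory after every step keeps the separation infinitesimal.

```python
import math

def lyapunov_from_divergence(f, x0, eps=1e-9, n=500):
    """Estimate the largest Lyapunov exponent of a map f on [0, 1] by
    tracking how fast two trajectories started eps apart diverge,
    renormalising the perturbed trajectory back to distance eps each step."""
    x, y = x0, x0 + eps
    total = 0.0
    for _ in range(n):
        x, y = f(x), f(y)
        # guard against rounding pushing iterates outside [0, 1]
        x = min(max(x, 0.0), 1.0)
        y = min(max(y, 0.0), 1.0)
        d = abs(y - x) or eps
        total += math.log(d / eps)
        # renormalise, nudging towards the interior of [0, 1]
        y = x + eps if x < 0.5 else x - eps
    return total / n

# Chaotic logistic map with r = 4; theory gives lambda = ln 2 ~ 0.693.
lam = lyapunov_from_divergence(lambda x: 4.0 * x * (1.0 - x), 0.2)
```

A positive exponent quantifies the error growth that limits the predictability time mentioned in the abstract.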

  7. Multi-scale evaluations of submarine groundwater discharge

    Directory of Open Access Journals (Sweden)

    M. Taniguchi

    2015-03-01

    Multi-scale evaluations of submarine groundwater discharge (SGD) have been made in Saijo, Ehime Prefecture, Shikoku Island, Japan, by using seepage meters at the point scale, the 222Rn tracer at the point and coastal scales, and a numerical groundwater model (SEAWAT) at the coastal and basin scales. Daily temporal changes in SGD are evaluated by continuous seepage-meter and 222Rn mooring measurements, and depend on sea level changes. Spatial evaluations of SGD were also made with 222Rn along the coast in July 2010 and November 2011. The area with larger 222Rn concentrations during both seasons agreed well with the area with larger SGD calculated by 3D groundwater numerical simulations.

  8. Spatial scale separation in regional climate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Feser, F.

    2005-07-01

    In this thesis, the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea is that global and regional climate models perform best at different spatial scales, so the regional model should not alter the global model's results at large scales. The concept of nudging of large scales, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while leaving the regional scales unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the individual simulations compared to the standard-approach ensemble, which occasionally shows large differences between realisations. For climate hindcasts this method leads to results which are on average closer to observed states than the standard approach. The analysis of regional climate model simulations can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted at the spatial scales the regional climate model was designed for, giving clearer results than the analysis of unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a large increase in similarity to the operational analyses for medium-scale 2 meter
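
A digital scale-separation filter can be sketched with a simple periodic running mean (the thesis uses purpose-built 2-D filters; this 1-D version is only a stand-in). The field is split into a large-scale part and a small-scale residual that sum exactly to the original:

```python
import math

def low_pass(field, width):
    """Periodic running-mean filter of odd width: a crude large-scale filter."""
    n, half = len(field), width // 2
    return [sum(field[(i + j) % n] for j in range(-half, half + 1)) / (2 * half + 1)
            for i in range(n)]

n = 240
x = [2.0 * math.pi * i / n for i in range(n)]
# Synthetic field: a wavenumber-1 "large scale" plus a wavenumber-24 "small scale".
field = [math.sin(xi) + 0.3 * math.sin(24.0 * xi) for xi in x]

large = low_pass(field, width=21)          # keeps the slow wave, damps the fast one
small = [f - l for f, l in zip(field, large)]
```

Because the decomposition is exact (large + small = field), validation statistics can be computed per scale band without losing information.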

  9. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DEFF Research Database (Denmark)

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.

    2017-01-01

    Constraint-Based Reconstruction and Analysis (COBRA) is currently the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We have developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging …
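
The precision issue can be reproduced in miniature. The sketch below solves a deliberately ill-scaled 2×2 system by naive elimination: double precision loses the answer entirely, while exact rational arithmetic (Python's Fraction, standing in here for the quad-precision MINOS of the paper) recovers it.

```python
from fractions import Fraction

def solve2(a11, a12, a21, a22, r1, r2):
    """Naive Gaussian elimination without pivoting (deliberately fragile)."""
    m = a21 / a11
    y = (r2 - m * r1) / (a22 - m * a12)
    x = (r1 - a12 * y) / a11
    return x, y

# Coefficients spanning 20 orders of magnitude, loosely mimicking the value
# spread in ME models. True solution: x = 1/(1 - a) ~ 1 and y ~ 1.
x_f, y_f = solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0)                        # double
x_q, y_q = solve2(Fraction(1, 10**20), *map(Fraction, (1, 1, 1, 1, 2)))  # exact
```

In double precision, the subtraction 1 − 10^20 rounds to −10^20, so y comes out as exactly 1 and x collapses to 0; the rational solve returns x = 10^20/(10^20 − 1) ≈ 1.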

  10. Ground-water solute transport modeling using a three-dimensional scaled model

    International Nuclear Information System (INIS)

    Crider, S.S.

    1987-01-01

    Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport

  11. Multi-time-scale heat transfer modeling of turbid tissues exposed to short-pulsed irradiations.

    Science.gov (United States)

    Kim, Kyunghan; Guo, Zhixiong

    2007-05-01

    A combined hyperbolic radiation and conduction heat transfer model is developed to simulate multi-time-scale heat transfer in turbid tissues exposed to short-pulsed irradiations. The initial temperature response of a tissue to an ultrashort pulse irradiation is analyzed by the volume-average method in combination with the transient discrete ordinates method for modeling the ultrafast radiation heat transfer. This response is found to reach pseudo-steady state within 1 ns for the considered tissues. The single-pulse result is then utilized to obtain the temperature response to pulse-train irradiation at the microsecond/millisecond time scales. After that, the temperature field is predicted by the hyperbolic heat conduction model, which is solved by MacCormack's scheme with error-term correction. Finally, the hyperbolic conduction model is compared with the traditional parabolic heat diffusion model. The maximum local temperatures are found to be larger in the hyperbolic prediction than in the parabolic prediction; in the modeled dermis tissue, a 7% non-dimensional temperature increase is found. After about 10 thermal relaxation times, the thermal waves fade away and the predictions of the hyperbolic and parabolic models become consistent.
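
The hyperbolic model referred to here is typically of the Cattaneo-Vernotte form, which adds a relaxation term to Fourier diffusion so that heat propagates as a wave of finite speed (shown schematically; the paper's bio-heat version carries additional source terms):

```latex
\tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \alpha \nabla^2 T
\quad \xrightarrow{\ \tau \to 0\ } \quad
\frac{\partial T}{\partial t} = \alpha \nabla^2 T,
```

with thermal relaxation time τ. The wave term is damped over a few τ, which is why the hyperbolic and parabolic predictions agree after about 10 relaxation times.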

  12. Observations of leaf stomatal conductance at the canopy scale: an atmospheric modeling perspective

    International Nuclear Information System (INIS)

    Avissar, R.

    1993-01-01

    Plant stomata play a key role in the redistribution of energy received on vegetated land into sensible and latent heat. As a result, they have a considerable impact on the atmospheric planetary boundary layer, the hydrologic cycle, the climate, and the weather. Current parameterizations of the stomatal mechanism in state-of-the-art atmospheric models are based on empirical relations that are established at the leaf scale between stomatal conductance and environmental conditions. In order to evaluate these parameterizations, an experiment was carried out on a potato field in New Jersey during the summer of 1989. Stomatal conductances were measured within a small homogeneous area in the middle of the potato field and under a relatively broad range of atmospheric conditions. A large variability of stomatal conductances was observed. This variability, which was associated with the variability of micro-environmental and physiological conditions that is found even in a homogeneous canopy, cannot be simulated explicitly on the scale of a single agricultural field and, a fortiori, on the scale of atmospheric models. Furthermore, this variability could not be related to the environmental conditions measured at a height of 2 m above the plant canopy simultaneously with the conductances, reinforcing the concept of scale decoupling suggested by Jarvis and McNaughton (1986) and McNaughton and Jarvis (1991). Thus, for atmospheric modeling purposes, a parameterization of stomatal conductance at the canopy scale using external environmental forcing conditions seems more appropriate than a parameterization based on leaf-scale stomatal conductance, as currently adopted in state-of-the-art atmospheric models. The measured variability was characterized by a lognormal probability density function (pdf) that remained relatively stable during the entire measuring period. These observations support conclusions by McNaughton and Jarvis (1991) that, unlike current parameterizations, a
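
One practical consequence of a lognormal conductance pdf is easy to demonstrate: the canopy-scale mean conductance exceeds the conductance of the typical (median) leaf, so plugging mean conditions into a leaf-scale relation misestimates the canopy total. The distribution parameters below are hypothetical, not the New Jersey measurements.

```python
import math
import random

random.seed(42)

# Hypothetical leaf-scale conductance pdf (mol m^-2 s^-1): lognormal with
# median 0.2, mimicking the stable lognormal shape observed in the canopy.
mu, sigma = math.log(0.2), 0.6

samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
canopy_mean = sum(samples) / len(samples)        # canopy-scale average
median_leaf = math.exp(mu)                       # the "typical" (median) leaf
analytic_mean = math.exp(mu + sigma ** 2 / 2.0)  # exact lognormal mean
```

The gap between the sampled mean and the median leaf is the aggregation bias that a canopy-scale parameterization absorbs directly.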

  14. Scaling laws for modeling nuclear reactor systems

    International Nuclear Information System (INIS)

    Nahavandi, A.N.; Castellana, F.S.; Moradkhanian, E.N.

    1979-01-01

    Scale models are used to predict the behavior of nuclear reactor systems during normal and abnormal operation as well as under accident conditions. Three types of scaling procedures are considered: time-reducing, time-preserving volumetric, and time-preserving idealized model/prototype. The necessary relations between the model and the full-scale unit are developed for each scaling type. Based on these relationships, it is shown that scaling procedures can lead to distortion in certain areas that are discussed. It is advised that, depending on the specific unit to be scaled, a suitable procedure be chosen to minimize model-prototype distortion
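
A time-preserving volumetric scaling can be sketched as follows: every extensive quantity (volume, mass flow, power) is reduced by the same factor, so intensive quantities and residence times carry over unchanged. The numbers are generic PWR-like values for illustration, not from the paper.

```python
# Illustrative sketch of time-preserving volumetric scaling.
def volumetric_scale(prototype, ratio):
    """Scale the extensive entries of a plant description by the volume ratio."""
    extensive = {"volume_m3", "flow_kg_s", "power_MW"}
    return {k: (v * ratio if k in extensive else v) for k, v in prototype.items()}

prototype = {"volume_m3": 300.0, "flow_kg_s": 17000.0, "power_MW": 3400.0,
             "pressure_MPa": 15.5}                 # pressure is intensive
model = volumetric_scale(prototype, ratio=1 / 500)

# Residence time (volume / flow) is preserved by construction.
residence_p = prototype["volume_m3"] / prototype["flow_kg_s"]
residence_m = model["volume_m3"] / model["flow_kg_s"]
```

Distortion appears precisely where a quantity (e.g. surface-to-volume ratio for heat losses) does not scale with the chosen factor, which is the trade-off the abstract warns about.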

  15. Scale Modelling of Nocturnal Cooling in Urban Parks

    Science.gov (United States)

    Spronken-Smith, R. A.; Oke, T. R.

    Scale modelling is used to determine the relative contribution of heat transfer processes to the nocturnal cooling of urban parks and the characteristic temporal and spatial variation of surface temperature. Validation is achieved using a hardware model-to-numerical model-to-field observation chain of comparisons. For the calm case, modelling shows that urban-park differences of sky view factor (ψs) and thermal admittance (μ) are the relevant properties governing the park cool island (PCI) effect. Reduction of the sky view factor by buildings and trees decreases the drain of longwave radiation from the surface to the sky. Thus park areas near the perimeter, where there may be a line of buildings or trees, or even sites within a park containing tree clumps or individual trees, generally cool less than open areas. The edge effect applies within distances of about 2.2 to 3.5 times the height of the border obstruction, i.e., to have any part of the park cooling at the maximum rate, a square park must be at least twice these dimensions in width. Although the central areas of parks larger than this will experience greater cooling, they will accumulate a larger volume of cold air that may make it possible for them to initiate a thermal circulation and extend the influence of the park into the surrounding city. Given real-world values of ψs and μ, it seems likely that radiation and conduction play almost equal roles in nocturnal PCI development. Evaporation is not a significant cooling mechanism in the nocturnal calm case, but by day it is probably critical in establishing a PCI by sunset. It is likely that conditions that favour a PCI by day (tree shade, soil wetness) retard PCI growth at night. The present work, which only deals with PCI growth, cannot predict which type of park will be coolest at night. Complete specification of nocturnal PCI magnitude requires knowledge of the PCI at sunset, and this depends on daytime energetics.

  16. CFD model development and data comparison for thermal-hydraulic analysis of HTO pilot scale reactor

    International Nuclear Information System (INIS)

    Kochan, R.J.; Oh, C.H.

    1995-09-01

    The DOE Hydrothermal Oxidation (HTO) program is validating computational methods for use in scaling up small HTO systems to production scale. As part of that effort, the computational fluid dynamics code FLUENT is being used to calculate the integrated fluid dynamics and chemical reactions in an HTO vessel reactor designed by MODAR, Inc. Previous validation of the code used data from a bench-scale reactor. This report presents the validation of the code using pilot-scale data (10 times greater throughput than bench-scale). The model of the pilot-scale reactor has been improved relative to the bench-scale model by including better fluid thermal properties, a better solution algorithm, external heat transfer, and an investigation of the effects of turbulent flow, along with a technique (not built into the computer model) for using the calculated adiabatic oxidation temperatures to select initial conditions. Thermal results from this model show very good agreement with the limited test data from MODAR Run 920. In addition to the reactor temperatures, flow-field details, including the chemical reaction distribution and simulated salt particle transport, were obtained. This model will be very beneficial in designing and evaluating larger commercial-scale units. The results of these calculations indicate that, for model validation, more accurate boundary conditions need to be measured in future test runs.

  17. Burnout of pulverized biomass particles in large scale boiler - Single particle model approach

    Energy Technology Data Exchange (ETDEWEB)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)

    2010-05-15

    The burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large-scale utility boiler originally planned for coal. A simplified single-particle approach, in which the particle combustion model is coupled with the one-dimensional equation of motion of the particle, is applied to calculate burnout in the boiler. Because of its lower density and greater reactivity, biomass can reach complete burnout at much larger particle sizes than coal. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)
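
The size-reactivity trade-off can be sketched with a d²-law burnout estimate. Both the functional form (a one-parameter stand-in for the coupled combustion/motion model) and the rate constants below are hypothetical, chosen only to show that a more reactive biomass particle can be several times larger than a coal particle and still burn out in comparable time.

```python
def burnout_time(d0_m, k_m2_s):
    """d^2-law: d^2(t) = d0^2 - k*t, so the particle burns out at t = d0^2/k."""
    return d0_m ** 2 / k_m2_s

k_coal, k_biomass = 1.0e-7, 1.0e-6          # biomass char taken as 10x more reactive

t_coal = burnout_time(100e-6, k_coal)       # 100 um coal-like particle
t_biomass = burnout_time(300e-6, k_biomass) # 300 um biomass particle
# The 3x larger biomass particle still burns out slightly faster here.
```

Comparing such burnout times against the in-furnace residence time is the basic check behind optimising burner location and particle trajectories.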

  18. Preferential flow from pore to landscape scales

    Science.gov (United States)

    Koestel, J. K.; Jarvis, N.; Larsbo, M.

    2017-12-01

    In this presentation, we give a brief personal overview of some recent progress in quantifying preferential flow in the vadose zone, based on our own work and that of other researchers. One key challenge is to bridge the gap between the scales at which preferential flow occurs (i.e. pore to Darcy scales) and the scales of interest for management (i.e. fields, catchments, regions). We present results of recent studies that exemplify the potential of 3-D non-invasive imaging techniques to visualize and quantify flow processes at the pore scale. These studies should lead to a better understanding of how the topology of macropore networks controls key state variables like matric potential, and thus the strength of preferential flow, under variable initial and boundary conditions. Extrapolation of this process knowledge to larger scales will remain difficult, since measurement technologies to quantify macropore networks at these larger scales are lacking. Recent work suggests that the application of key concepts from percolation theory could be useful in this context. Investigation of the larger Darcy-scale heterogeneities that generate preferential flow patterns at the soil profile, hillslope and field scales has been facilitated by hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data. At larger regional and global scales, improved methods of data mining and the analysis of large datasets (machine learning) may help to parameterize models, as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land use, soil type).
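
The percolation-theory idea invoked above, a sharp onset of system-spanning connectivity at a critical occupied fraction, can be illustrated with site percolation on a square lattice (a toy stand-in for a macropore network). Below and above the 2-D site threshold (≈0.593), the spanning probability is near 0 and near 1, respectively:

```python
import random

def spans(grid):
    """Flood-fill from the top row; True if occupied sites connect top to bottom."""
    n = len(grid)
    stack = [(0, j) for j in range(n) if grid[0][j]]
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

def spanning_fraction(p, n=30, trials=200, seed=1):
    """Fraction of random n x n grids (site occupancy p) with a spanning cluster."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials

low, high = spanning_fraction(0.45), spanning_fraction(0.75)
```

The sharp transition is what makes percolation concepts attractive for predicting whether a macropore network can sustain preferential flow at all.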

  19. More 'altruistic' punishment in larger societies.

    Science.gov (United States)

    Marlowe, Frank W; Berbesque, J Colette

    2008-03-07

    If individuals will cooperate with cooperators, and punish non-cooperators even at a cost to themselves, then this strong reciprocity could minimize the cheating that undermines cooperation. Based upon numerous economic experiments, some have proposed that human cooperation is explained by strong reciprocity and norm enforcement. Second-party punishment is when you punish someone who defected on you; third-party punishment is when you punish someone who defected on someone else. Third-party punishment is an effective way to enforce the norms of strong reciprocity and promote cooperation. Here we present new results that expand on a previous report from a large cross-cultural project. This project has already shown that there is considerable cross-cultural variation in punishment and cooperation. Here we test the hypothesis that population size (and complexity) predicts the level of third-party punishment. Our results show that people in larger, more complex societies engage in significantly more third-party punishment than people in small-scale societies.

  20. Dynamic Arrest in Charged Colloidal Systems Exhibiting Large-Scale Structural Heterogeneities

    International Nuclear Information System (INIS)

    Haro-Perez, C.; Callejas-Fernandez, J.; Hidalgo-Alvarez, R.; Rojas-Ochoa, L. F.; Castaneda-Priego, R.; Quesada-Perez, M.; Trappe, V.

    2009-01-01

    Suspensions of charged liposomes are found to exhibit typical features of strongly repulsive fluid systems at short length scales, while exhibiting structural heterogeneities at larger length scales that are characteristic of attractive systems. We model the static structure factor of these systems using effective pair interaction potentials composed of a long-range attraction and a shorter range repulsion. Our modeling of the static structure yields conditions for dynamically arrested states at larger volume fractions, which we find to agree with the experimentally observed dynamics
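
The competing-interactions picture can be sketched with a generic two-Yukawa potential: a strong but rapidly screened repulsion plus a weaker, longer-ranged attraction produces a repulsive barrier at short range and a shallow minimum at intermediate distance. All parameters are illustrative, not fitted to the liposome data.

```python
import math

def u_eff(r, a_rep=10.0, k_rep=3.0, a_att=1.0, k_att=0.5):
    """Two-Yukawa effective pair potential (kT units; r in particle diameters):
    short-ranged screened repulsion plus longer-ranged screened attraction."""
    return a_rep * math.exp(-k_rep * r) / r - a_att * math.exp(-k_att * r) / r

rs = [0.5 + 0.01 * i for i in range(500)]      # r from 0.5 to ~5.5 diameters
us = [u_eff(r) for r in rs]
r_min = rs[us.index(min(us))]                  # location of the attractive minimum
```

A potential of this shape reproduces fluid-like short-range order alongside larger-scale density heterogeneities, which is the qualitative combination the abstract describes.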

  1. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...
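
Operator self-similarity replaces the scalar Hurst exponent with a matrix exponent E, so each coordinate (or linear combination of coordinates) can scale differently; in distribution,

```latex
\{X(ct)\}_{t \ge 0} \;\stackrel{d}{=}\; \{c^{E} X(t)\}_{t \ge 0},
\qquad
c^{E} := \exp\!\big(E \log c\big) = \sum_{k=0}^{\infty} \frac{(\log c)^k}{k!} E^k ,
```

which recovers ordinary self-similarity when E = H I. This is the standard definition; the details of the series representation used for simulation are in the paper.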

  2. Aespoe modelling task force - experiences of the site specific flow and transport modelling (in detailed and site scale)

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, Gunnar [Chalmers Univ. of Technology, Goeteborg (Sweden); Stroem, A.; Wikberg, P. [Swedish Nuclear Fuel and Waste Management Co. , Stockholm (Sweden)

    1998-09-01

    The Aespoe Task Force on modelling of groundwater flow and transport of solutes was initiated in 1992. The Task Force shall be a forum for the organisations supporting the Aespoe Hard Rock Laboratory Project to interact in the area of conceptual and numerical modelling of groundwater flow and solute transport in fractured rock. Much emphasis is put on building confidence in the approaches and methods in use for modelling of groundwater flow and nuclide migration in order to demonstrate their use for performance and safety assessment. The modelling work within the Task Force is linked to the experiments performed at the Aespoe Laboratory. As the first Modelling Task, a large scale pumping and tracer experiment called LPT2 was chosen. This was the final part of the characterisation work for the Aespoe site before the construction of the laboratory in 1990. The construction of the Aespoe HRL access tunnel caused a hydraulic disturbance on a much larger scale than the LPT2 pumping test. This was regarded as an interesting test case for the conceptual and numerical models of the Aespoe site developed during Task No 1, and was chosen as the third Modelling Task. The aim of Task 3 can be seen from two different perspectives. The Aespoe HRL project saw it as a test of their ability to define a conceptual and structural model of the site that can be utilised by independent modelling groups and be transformed to a predictive groundwater flow model. The modelling groups saw it as a means of understanding groundwater flow in a large fractured rock volume and of testing their computational tools. A general conclusion is that Task 3 has served these purposes well. Non-sorbing tracer tests, made as part of the TRUE experiments, were chosen as the next predictive modelling task. A preliminary comparison between model predictions made by the Aespoe Task Force and the experimental results shows that most modelling teams predicted breakthrough from

  3. Modelling financial markets with agents competing on different time scales and with different amount of information

    Science.gov (United States)

    Wohlmuth, Johannes; Andersen, Jørgen Vitting

    2006-05-01

    We use agent-based models to study the competition among investors who use trading strategies with different amount of information and with different time scales. We find that mixing agents that trade on the same time scale but with different amount of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in the decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on the large and extreme price movements, increasing the volatility of the market. Closeness in time scale used in the decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information the more the traders in a mixed system with different time scales are found to profit from the presence of traders acting at another time scale than themselves.

  4. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    data from very little to very detailed information, and compare the models' abilities to represent the spatial and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which give a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity of maize cropping systems.
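The Taylor diagram mentioned in this record condenses each simulation to three numbers: a pattern correlation, a standard-deviation ratio, and a centered RMS error. A minimal sketch with synthetic yield data (all values illustrative, not from the study):

```python
import numpy as np

def taylor_stats(sim, obs):
    """Statistics plotted on a Taylor diagram (Taylor, 2001)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]            # pattern correlation R
    sd_ratio = sim.std() / obs.std()           # normalized standard deviation
    crmse = np.sqrt(np.mean(((sim - sim.mean()) - (obs - obs.mean())) ** 2))
    return r, sd_ratio, crmse

# Hypothetical yields: a "simulation" that tracks observations with noise
rng = np.random.default_rng(1)
obs = rng.normal(5.0, 1.5, 500)            # observed yields (illustrative units)
sim = 0.8 * obs + rng.normal(0, 0.5, 500)
r, sd_ratio, crmse = taylor_stats(sim, obs)

# Law of cosines that makes the diagram geometry work:
# E'^2 = sd_sim^2 + sd_obs^2 - 2 * sd_sim * sd_obs * R
lhs = crmse ** 2
rhs = sim.std() ** 2 + obs.std() ** 2 - 2 * sim.std() * obs.std() * r
print(abs(lhs - rhs) < 1e-9)  # True
```

The identity in the last lines is exactly why the three statistics can share one polar plot.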

  5. One-scale supersymmetric inflationary models

    International Nuclear Information System (INIS)

    Bertolami, O.; Ross, G.G.

    1986-01-01

    The reheating phase is studied in a class of supergravity inflationary models involving a two-component hidden sector in which the scale of supersymmetry breaking and the scale generating inflation are related. It is shown that these models have an ''entropy crisis'' in which a large entropy release after nucleosynthesis leads to unacceptably low nuclear abundances. (orig.)

  6. Multi-scale damage modelling in a ceramic matrix composite using a finite-element microstructure meshfree methodology

    Science.gov (United States)

    2016-01-01

    The problem of multi-scale modelling of damage development in a SiC ceramic fibre-reinforced SiC matrix ceramic composite tube is addressed, with the objective of demonstrating the ability of the finite-element microstructure meshfree (FEMME) model to introduce important aspects of the microstructure into a larger scale model of the component. These are particularly the location, orientation and geometry of significant porosity and the load-carrying capability and quasi-brittle failure behaviour of the fibre tows. The FEMME model uses finite-element and cellular automata layers, connected by a meshfree layer, to efficiently couple the damage in the microstructure with the strain field at the component level. Comparison is made with experimental observations of damage development in an axially loaded composite tube, studied by X-ray computed tomography and digital volume correlation. Recommendations are made for further development of the model to achieve greater fidelity to the microstructure. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242308

  7. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.

  8. Representing macropore flow at the catchment scale: a comparative modeling study

    Science.gov (United States)

    Liu, D.; Li, H. Y.; Tian, F.; Leung, L. R.

    2017-12-01

    Macropore flow is an important hydrological process that generally enhances the soil infiltration capacity and the velocity of subsurface water. To date, macropore flow has mostly been simulated with high-resolution models. One possible drawback of this modeling approach is the difficulty of effectively representing the overall topology and connectivity of the macropore networks. We hypothesize that modeling macropore flow directly at the catchment scale may be complementary to the existing modeling strategy and offer some new insights. The Tsinghua Representative Elementary Watershed model (THREW) is a semi-distributed hydrology model whose fundamental building blocks are representative elementary watersheds (REWs) linked by the river channel network. In THREW, all the hydrological processes are described with constitutive relationships established directly at the REW level, i.e., the catchment scale. In this study, a constitutive relationship for macropore flow drainage is established as part of THREW. The enhanced THREW model is then applied to two catchments with deep soils but distinct climates: the humid Asu catchment in the Amazon River basin, and the arid Wei catchment in the Yellow River basin. The Asu catchment has an area of 12.43 km2 with mean annual precipitation of 2442 mm. The larger Wei catchment has an area of 24800 km2 but mean annual precipitation of only 512 mm. The rainfall-runoff processes are simulated at an hourly time step from 2002 to 2005 in the Asu catchment and from 2001 to 2012 in the Wei catchment. The role of macropore flow in catchment hydrology is analyzed comparatively over the Asu and Wei catchments against the observed streamflow, evapotranspiration and other auxiliary data.

  9. Drift-Scale THC Seepage Model

    International Nuclear Information System (INIS)

    C.R. Bryan

    2005-01-01

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. 
The DST THC submodel uses a drift-scale

  10. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Geometric Scaling in New Combined Hadron-Electron Ring Accelerator Data

    International Nuclear Information System (INIS)

    Zhou Xiao-Jiao; Qi Lian; Kang Lin; Xiang Wen-Chang; Zhou Dai-Cui

    2014-01-01

    We study geometric scaling in the new combined data of the hadron-electron ring accelerator (HERA) by using the Golec-Biernat-Wüsthoff model. It is found that the description of the data improves once the highly accurate data are used to determine the model parameters. The value of x0 extracted from the fit is larger than the one from the previous study, which indicates a larger saturation scale in the new combined data. This places more data points in the saturation region, making our approach more reliable. This study confronts the saturation model with the high-precision new combined data and tests geometric scaling against them. We demonstrate that the data lie on the same curve, which shows geometric scaling in the new combined data. This outcome supports gluon saturation as a relevant mechanism dominating the parton evolution process in deep inelastic scattering, since geometric scaling results from the gluon saturation mechanism
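The saturation scale and the scaling variable at the heart of such an analysis can be written down directly. The sketch below uses the widely quoted original GBW fit values (sigma_0 = 23.03 mb, lambda = 0.288, x_0 = 3.04e-4), which are assumptions here rather than the refit described in this record, and a toy scaling function with the qualitative GBW shape:

```python
import numpy as np

# Assumed parameters: the original GBW fit, not the refit in this record
SIGMA0 = 23.03   # mb
LAMBDA = 0.288
X0 = 3.04e-4

def qs2(x):
    """Saturation scale Q_s^2(x) = (x0 / x)^lambda, in GeV^2."""
    return (X0 / x) ** LAMBDA

def sigma_toy(tau):
    """Toy scaling function: depends on tau = Q^2 / Q_s^2 only,
    flat (saturated) at small tau and falling like 1/tau at large tau."""
    return SIGMA0 * (1.0 - np.exp(-1.0 / tau))

# Geometric scaling: two kinematic points with different x and Q^2
# but equal tau must lie on the same curve.
x1, Q2_1 = 1e-4, 2.0
tau = Q2_1 / qs2(x1)
x2 = 1e-5
Q2_2 = tau * qs2(x2)   # choose Q^2 so that tau matches
print(sigma_toy(Q2_1 / qs2(x1)) == sigma_toy(Q2_2 / qs2(x2)))  # True
```

A larger fitted x0, as reported here, raises Q_s^2 at fixed x and so pulls more data points into the small-tau (saturation) region.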

  12. Scale modelling in LMFBR safety

    International Nuclear Information System (INIS)

    Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.

    1979-01-01

    This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratio of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of material for the model is one with the same density and stress-strain relationship as the prototype at the operating temperature. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used, to assure that heat transfer between high temperature loading sources and the coolant simulant, and non-equilibrium effects in two-phase sources, are accounted for. For the geometry and flow conditions of interest, the

  13. Global scale groundwater flow model

    Science.gov (United States)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet the current generation of global scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minutes resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.

  14. Separating foliar physiology from morphology reveals the relative roles of vertically structured transpiration factors within red maple crowns and limitations of larger scale models

    Science.gov (United States)

    Bauerle, William L.; Bowden, Joseph D.

    2011-01-01

    A spatially explicit mechanistic model, MAESTRA, was used to separate key parameters affecting transpiration to provide insights into the most influential parameters for accurate predictions of within-crown and within-canopy transpiration. Once validated among Acer rubrum L. genotypes, model responses to different parameterization scenarios were scaled up to stand transpiration (expressed per unit leaf area) to assess how transpiration might be affected by the spatial distribution of foliage properties. For example, when physiological differences were accounted for, differences in leaf width among A. rubrum L. genotypes resulted in a 25% difference in transpiration. An in silico within-canopy sensitivity analysis was conducted over the range of genotype parameter variation observed and under different climate forcing conditions. The analysis revealed that seven of 16 leaf traits had a ≥5% impact on transpiration predictions. Under sparse foliage conditions, comparisons of the present findings with previous studies were in agreement that parameters such as the maximum Rubisco-limited rate of photosynthesis can explain ∼20% of the variability in predicted transpiration. However, the spatial analysis shows how such parameters can decrease or change in importance below the uppermost canopy layer. Alternatively, model sensitivity to leaf width and minimum stomatal conductance was continuous along a vertical canopy depth profile. Foremost, transpiration sensitivity to an observed range of morphological and physiological parameters is examined and the spatial sensitivity of transpiration model predictions to vertical variations in microclimate and foliage density is identified to reduce the uncertainty of current transpiration predictions. PMID:21617246

  15. Predicting habitat suitability for rare plants at local spatial scales using a species distribution model.

    Science.gov (United States)

    Gogol-Prokurat, Melanie

    2011-01-01

    If species distribution models (SDMs) can rank habitat suitability at a local scale, they may be a valuable conservation planning tool for rare, patchily distributed species. This study assessed the ability of Maxent, an SDM reported to be appropriate for modeling rare species, to rank habitat suitability at a local scale for four edaphic endemic rare plants of gabbroic soils in El Dorado County, California, and examined the effects of grain size, spatial extent, and fine-grain environmental predictors on local-scale model accuracy. Models were developed using species occurrence data mapped on public lands and were evaluated using an independent data set of presence and absence locations on surrounding lands, mimicking a typical conservation-planning scenario that prioritizes potential habitat on unsurveyed lands surrounding known occurrences. Maxent produced models that were successful at discriminating between suitable and unsuitable habitat at the local scale for all four species, and predicted habitat suitability values were proportional to likelihood of occurrence or population abundance for three of four species. Unfortunately, models with the best discrimination (i.e., AUC) were not always the most useful for ranking habitat suitability. The use of independent test data showed metrics that were valuable for evaluating which variables and model choices (e.g., grain, extent) to use in guiding habitat prioritization for conservation of these species. A goodness-of-fit test was used to determine whether habitat suitability values ranked habitat suitability on a continuous scale. If they did not, a minimum acceptable error predicted area criterion was used to determine the threshold for classifying habitat as suitable or unsuitable. 
I found a trade-off between model extent and the use of fine-grain environmental variables: goodness of fit was improved at larger extents, and fine-grain environmental variables improved local-scale accuracy, but fine-grain variables
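The final classification step this record describes, picking a threshold when suitability values do not rank habitat on a continuous scale, can be sketched generically. The rule and the test scores below are illustrative assumptions (a minimum-predicted-area rule under an omission-error cap), not the study's exact criterion:

```python
import numpy as np

def min_area_threshold(scores_presence, scores_absence, max_omission=0.05):
    """Highest suitability threshold whose omission rate on presence test
    points stays within max_omission, i.e. the smallest predicted area."""
    candidates = np.sort(np.unique(np.concatenate([scores_presence,
                                                   scores_absence])))
    best = candidates[0]
    for t in candidates:
        omission = np.mean(scores_presence < t)  # presences misclassified
        if omission <= max_omission:
            best = t          # a higher threshold means a smaller suitable area
    return best

# Hypothetical Maxent outputs for independent presence/absence test points
rng = np.random.default_rng(3)
pres = rng.beta(5, 2, 200)    # presences skew toward high suitability
abse = rng.beta(2, 5, 400)    # absences skew low
t = min_area_threshold(pres, abse)
print(0.0 < t < 1.0)  # True
```

Cells scoring at or above the returned threshold would be mapped as suitable; everything below it as unsuitable.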

  16. On scaling of human body models

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2007-10-01

    Full Text Available The human body is not unique: individuals differ in anthropometry and in mechanical characteristics, which means that dividing the human population into categories such as the 5th, 50th and 95th percentiles is not sufficient from the application point of view. On the other hand, developing a dedicated human body model for every individual is not feasible. That is why scaling and morphing algorithms have begun to be developed. The current work describes the development of a tool for scaling human models. The idea is to take one standard model (or a couple of them) as a base and to create other models from these base models. One has to choose adequate anthropometric and biomechanical parameters that describe the given group of humans to be scaled and morphed.

  17. Drift-Scale THC Seepage Model

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Bryan

    2005-02-17

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral

  18. Measuring the topology of large-scale structure in the universe

    Science.gov (United States)

    Gott, J. Richard, III

    1988-11-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  19. Measuring the topology of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Gott, J.R. III

    1988-01-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data. 45 references
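For a Gaussian (random phase) field, the genus curve that these topology studies compare against has a closed form, g(nu) proportional to (1 - nu^2) exp(-nu^2 / 2), where nu is the density threshold in standard deviations. A minimal sketch (the amplitude is an arbitrary normalization):

```python
import numpy as np

def genus_random_phase(nu, amplitude=1.0):
    """Genus per unit volume of a Gaussian random-phase density field:
    g(nu) = A * (1 - nu^2) * exp(-nu^2 / 2)."""
    nu = np.asarray(nu, float)
    return amplitude * (1.0 - nu ** 2) * np.exp(-nu ** 2 / 2.0)

nu = np.linspace(-3.0, 3.0, 601)
g = genus_random_phase(nu)

# Sponge-like topology: the genus peaks at the median-density threshold
# (nu = 0); at |nu| > 1, isolated clusters and voids make the genus negative.
print(abs(nu[np.argmax(g)]) < 1e-9, g[0] < 0, g[-1] < 0)  # True True True
```

The "shifts" discussed in these records are horizontal displacements of the observed genus curve relative to this symmetric random-phase form.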

  20. The cause of larger local magnitude (Mj) in western Japan

    Science.gov (United States)

    Kawamoto, H.; Furumura, T.

    2017-12-01

    The local magnitude on the Japan Meteorological Agency (JMA) scale (Mj) sometimes shows a significant discrepancy from Mw. Mj is calculated from the amplitude of the horizontal component of ground displacement recorded by seismometers with a natural period of T0=5 s, following Katsumata et al. (2004). A typical example of such a discrepancy was the overestimation of the 2000 Western Tottori earthquake (Mj=7.3, Mw=6.7; hereafter referred to as event T). In this study, we examined the discrepancy between Mj and Mw for recent large earthquakes occurring in Japan. We found that most earthquakes with Mj larger than Mw occur in western Japan, while earthquakes in northern Japan show reasonable Mj (=Mw). To understand the cause of the larger Mj for western Japan earthquakes, we examined strong motion records from the K-NET and KiK-net networks for event T and other earthquakes for reference. The observed ground displacement record from event T shows a distinctive Love wave packet in the tangential motion with a dominant period of about T=5 s, which propagates long distances without strong dispersion. On the other hand, the ground motions from earthquakes in northeastern Japan lack such a surface wave packet, and attenuation of ground motion is significant. Therefore, the overestimation of Mj for earthquakes in western Japan may be attributed to efficient generation and propagation of Love waves, probably related to the crustal structure of western Japan. To explain this, we then conducted a numerical simulation of seismic wave propagation using a 3D sedimentary layer model (JIVSM; Koketsu et al., 2012) and the source model of event T. The result demonstrated efficient generation of Love waves from the shallow strike-slip source, which propagate long distances in western Japan without significant dispersion. On the other hand, the generation of surface waves was not so efficient when using a

  1. Scales and scaling in turbulent ocean sciences; physics-biology coupling

    Science.gov (United States)

    Schmitt, Francois

    2015-04-01

    Geophysical fields possess huge fluctuations over many spatial and temporal scales. In the ocean, this property at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm), and scalar fields from large scales down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scale (the Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we discuss the scale problem in turbulence modeling in the ocean, the relation of Kolmogorov's and Batchelor's scales of turbulence in the ocean to the size of marine animals, scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
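The two dissipation scales named here follow from simple formulas: the Kolmogorov length eta = (nu^3 / epsilon)^(1/4) and the Batchelor length eta / sqrt(Sc), with Schmidt number Sc = nu / D. The sketch below uses assumed, order-of-magnitude open-ocean values:

```python
def kolmogorov_scale(nu, epsilon):
    """Kolmogorov length eta = (nu^3 / epsilon)^(1/4), in metres."""
    return (nu ** 3 / epsilon) ** 0.25

def batchelor_scale(nu, epsilon, D):
    """Batchelor length = eta / sqrt(Sc), with Schmidt number Sc = nu / D."""
    return kolmogorov_scale(nu, epsilon) * (D / nu) ** 0.5

# Illustrative open-ocean values (assumed, order of magnitude only):
nu = 1.0e-6        # kinematic viscosity of seawater, m^2/s
epsilon = 1.0e-8   # turbulent kinetic energy dissipation rate, m^2/s^3
D_salt = 1.5e-9    # molecular diffusivity of salt, m^2/s

eta = kolmogorov_scale(nu, epsilon)          # millimetre range
lb = batchelor_scale(nu, epsilon, D_salt)    # well below eta for salt
print(f"Kolmogorov: {eta*1e3:.1f} mm, Batchelor (salt): {lb*1e3:.2f} mm")
```

With these values the Kolmogorov scale lands near the copepod size quoted in the abstract, while the salt Batchelor scale is roughly an order of magnitude smaller.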

  2. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.

  3. Multi-scale Modeling of Arctic Clouds

    Science.gov (United States)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small- and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  4. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    Directory of Open Access Journals (Sweden)

    Alberto Muñoz

    2016-10-01

    Full Text Available Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural parks.

  5. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    Science.gov (United States)

    Santos, Xavier; Felicísimo, Ángel M.

    2016-01-01

    Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural parks. PMID

  6. Scale-dependent performances of CMIP5 earth system models in simulating terrestrial vegetation carbon

    Science.gov (United States)

    Jiang, L.; Luo, Y.; Yan, Y.; Hararuk, O.

    2013-12-01

    Mitigation of global changes will depend on reliable projections of future conditions. As the major tools for predicting future climate, the Earth System Models (ESMs) used in the Coupled Model Intercomparison Project Phase 5 (CMIP5) for the IPCC Fifth Assessment Report have incorporated carbon cycle components, which account for the important fluxes of carbon between the ocean, atmosphere, and terrestrial biosphere carbon reservoirs, and are therefore expected to provide more detailed and more certain projections. However, ESMs are never perfect, and evaluating them can help us identify uncertainties in prediction and set priorities for model development. In this study, we benchmarked the carbon in live vegetation in terrestrial ecosystems simulated by 19 ESMs from CMIP5 against an observationally estimated data set of the global vegetation carbon pool, 'Olson's Major World Ecosystem Complexes Ranked by Carbon in Live Vegetation: An Updated Database Using the GLC2000 Land Cover Product' (Gibbs, 2006). Our aim is to evaluate the ability of ESMs to reproduce the global vegetation carbon pool at different scales and to identify possible causes of bias. We found that the performance of CMIP5 ESMs is highly scale-dependent. CESM1-BGC, CESM1-CAM5, CESM1-FASTCHEM and CESM1-WACCM, together with NorESM1-M and NorESM1-ME (which share the same model structure), produce global sums very similar to the observational data but usually perform poorly at the grid-cell and biome scales. In contrast, MIROC-ESM and MIROC-ESM-CHEM perform best at the grid-cell and biome scales but show larger differences in global sums than the others. Our results will help improve CMIP5 ESMs for more reliable prediction.

  7. More ‘altruistic’ punishment in larger societies

    Science.gov (United States)

    Marlowe, Frank W; Berbesque, J. Colette; Barr, Abigail; Barrett, Clark; Bolyanatz, Alexander; Cardenas, Juan Camilo; Ensminger, Jean; Gurven, Michael; Gwako, Edwins; Henrich, Joseph; Henrich, Natalie; Lesorogol, Carolyn; McElreath, Richard; Tracer, David

    2007-01-01

    If individuals will cooperate with cooperators, and punish non-cooperators even at a cost to themselves, then this strong reciprocity could minimize the cheating that undermines cooperation. Based upon numerous economic experiments, some have proposed that human cooperation is explained by strong reciprocity and norm enforcement. Second-party punishment is when you punish someone who defected on you; third-party punishment is when you punish someone who defected on someone else. Third-party punishment is an effective way to enforce the norms of strong reciprocity and promote cooperation. Here we present new results that expand on a previous report from a large cross-cultural project. This project has already shown that there is considerable cross-cultural variation in punishment and cooperation. Here we test the hypothesis that population size (and complexity) predicts the level of third-party punishment. Our results show that people in larger, more complex societies engage in significantly more third-party punishment than people in small-scale societies. PMID:18089534

  8. An innovative expression model of human health risk based on the quantitative analysis of soil metals sources contribution in different spatial scales.

    Science.gov (United States)

    Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun

    2018-09-01

    Toxicity of heavy metals from industrialization poses critical concerns, and analysis of the sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently for the whole region and its sub regions can provide more instructive information for protecting specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger-scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source and the sensitive population at high risk. The smaller-scale grids and their spatial codes are used to identify the contribution of various pollution sources to each sub region (larger grid) and to assess the health risks posed by each source for each sub region. The results of the case study show that, for children (a sensitive population, with school and residential areas as the major regions of activity), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emissions and agricultural activity. The models and results of this research provide effective spatial information and a useful model for quantifying the hazards that source categories pose to human health in complex industrial systems in the future.
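Risk expressions of this kind are usually built from the standard hazard-quotient/hazard-index calculation. Below is a hedged sketch for child soil ingestion; every exposure parameter, reference dose and concentration is an illustrative assumption, not a value from the study:

```python
# Sketch of a US EPA-style hazard-index calculation for soil ingestion by
# children. All numbers below are illustrative assumptions.
ing_rate = 200e-6   # soil ingestion rate, kg/day (200 mg/day, child)
ef, ed = 350, 6     # exposure frequency (days/yr) and duration (yr)
bw = 15.0           # body weight, kg
at = ed * 365       # averaging time for non-carcinogens, days

rfd = {"Pb": 3.5e-3, "Cd": 1e-3, "As": 3e-4}   # reference doses, mg/(kg*day) (assumed)
conc = {"Pb": 120.0, "Cd": 0.8, "As": 9.0}     # soil concentrations, mg/kg (assumed)

def hazard_quotient(c, rfd_i):
    """HQ = average daily dose / reference dose for one metal."""
    add = c * ing_rate * ef * ed / (bw * at)   # average daily dose, mg/(kg*day)
    return add / rfd_i

# Hazard index: sum of hazard quotients over all metals; HI > 1 flags concern.
hi = sum(hazard_quotient(conc[m], rfd[m]) for m in conc)
print(f"hazard index HI = {hi:.2f}")
```

In a source-resolved ("factored") scheme such as the one described above, each metal concentration would additionally be split among source contributions before the quotients are summed, yielding one HI per source and sub region.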

  9. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases.

  10. Local Scale Radiobrightness Modeling During the Intensive Observing Period-4 of the Cold Land Processes Experiment-1

    Science.gov (United States)

    Kim, E.; Tedesco, M.; de Roo, R.; England, A. W.; Gu, H.; Pham, H.; Boprie, D.; Graf, T.; Koike, T.; Armstrong, R.; Brodzik, M.; Hardy, J.; Cline, D.

    2004-12-01

    The NASA Cold Land Processes Field Experiment (CLPX-1) was designed to provide microwave remote sensing observations and ground truth for studies of snow and frozen ground remote sensing, particularly issues related to scaling. CLPX-1 was conducted in 2002 and 2003 in Colorado, USA. One of the goals of the experiment was to test the capabilities of microwave emission models at different scales. Initial forward model validation work has concentrated on the Local-Scale Observation Site (LSOS), a 0.8 ha study site consisting of open meadows separated by trees where the most detailed measurements were made of snow depth and temperature, density, and grain size profiles. Results obtained in the case of the 3rd Intensive Observing Period (IOP3) period (February, 2003, dry snow) suggest that a model based on Dense Medium Radiative Transfer (DMRT) theory is able to model the recorded brightness temperatures using snow parameters derived from field measurements. This paper focuses on the ability of forward DMRT modelling, combined with snowpack measurements, to reproduce the radiobrightness signatures observed by the University of Michigan's Truck-Mounted Radiometer System (TMRS) at 19 and 37 GHz during the 4th IOP (IOP4) in March, 2003. Unlike in IOP3, conditions during IOP4 include both wet and dry periods, providing a valuable test of DMRT model performance. In addition, a comparison will be made for the one day of coincident observations by the University of Tokyo's Ground-Based Microwave Radiometer-7 (GBMR-7) and the TMRS. The plot-scale study in this paper establishes a baseline of DMRT performance for later studies at successively larger scales; these scaling studies will help guide the choice of future snow retrieval algorithms and the design of future Cold Lands observing systems.

  11. Seismic modeling of multidimensional heterogeneity scales of Mallik gas hydrate reservoirs, Northwest Territories of Canada

    Science.gov (United States)

    Huang, Jun-Wei; Bellefleur, Gilles; Milkereit, Bernd

    2009-07-01

    In hydrate-bearing sediments, the velocity and attenuation of compressional and shear waves depend primarily on the spatial distribution of hydrates in the pore space of the subsurface lithologies. Recent characterizations of gas hydrate accumulations based on seismic velocity and attenuation generally assume homogeneous sedimentary layers and neglect effects from large- and small-scale heterogeneities of hydrate-bearing sediments. We present an algorithm, based on stochastic medium theory, to construct heterogeneous multivariable models that mimic heterogeneities of hydrate-bearing sediments at the level of detail provided by borehole logging data. Using this algorithm, we model some key petrophysical properties of gas hydrates within heterogeneous sediments near the Mallik well site, Northwest Territories, Canada. The modeled density, and P and S wave velocities used in combination with a modified Biot-Gassmann theory provide a first-order estimate of the in situ volume of gas hydrate near the Mallik 5L-38 borehole. Our results suggest a range of 528 to 768 × 106 m3/km2 of natural gas trapped within hydrates, nearly an order of magnitude lower than earlier estimates which did not include effects of small-scale heterogeneities. Further, the petrophysical models are combined with a 3-D finite difference modeling algorithm to study seismic attenuation due to scattering and leaky mode propagation. Simulations of a near-offset vertical seismic profile and cross-borehole numerical surveys demonstrate that attenuation of seismic energy may not be directly related to the intrinsic attenuation of hydrate-bearing sediments but, instead, may be largely attributed to scattering from small-scale heterogeneities and highly attenuate leaky mode propagation of seismic waves through larger-scale heterogeneities in sediments.
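The stochastic-medium construction described above can be sketched in one dimension: a stationary Gaussian random field with an exponential covariance, generated spectrally by filtering white noise. This is a generic illustration, not the authors' multivariable algorithm, and the velocity statistics are assumed, not Mallik values:

```python
import numpy as np

def gaussian_random_field_1d(n, dx, corr_len, std, mean=0.0, seed=0):
    """1-D stationary Gaussian random field with exponential covariance
    C(r) = std^2 * exp(-|r|/corr_len), generated by the FFT (spectral) method."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=dx) * 2 * np.pi          # angular wavenumbers
    # Power spectrum of the exponential covariance (1-D Lorentzian form):
    psd = 2 * std**2 * corr_len / (1 + (k * corr_len) ** 2)
    # Filter white noise with sqrt(PSD); Hermitian symmetry keeps the field real.
    white = rng.standard_normal(n)
    field = np.fft.ifft(np.fft.fft(white) * np.sqrt(psd / dx)).real
    return mean + field

# Example: a synthetic P-wave velocity log with metre-scale heterogeneity
# (mean, fluctuation level and correlation length are assumed):
vp = gaussian_random_field_1d(n=2048, dx=0.1, corr_len=2.0, std=150.0, mean=2200.0)
print(f"mean {vp.mean():.0f} m/s, std {vp.std():.0f} m/s")
```

Multivariable versions of this construction impose cross-correlations between properties (e.g., density and velocities) so that the synthetic logs mimic the joint statistics observed in borehole data.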

  12. Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    G. Zyvoloski

    2003-01-01

    The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure AP-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'', MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County wells lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca Mountain using FEHM V 2.20 are being

  13. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.

  14. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    International Nuclear Information System (INIS)

    Y.S. Wu

    2005-01-01

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas

  15. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  16. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

    This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required, due to the nature of the materials and the phenomena involved under irradiation. We will then present the multiple facets of the multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  17. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...

  18. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    , increases weathering and erosion around the headland, and eventually changes the headland into an embayment! Improvements to our modeling approach include refining the initial conditions. To create a fractal, immature rocky coastline, self-similar river networks with random side branches were drawn on the shoreline domain. River networks and side branches were scaled according to Horton's law and Tokunaga statistics, respectively, and each river pathway was assigned a simple exponential longitudinal profile. Topography was generated around the river networks to create drainage basins and, on a larger scale, represent a mountainous, fluvially-sculpted landscape. The resultant morphology was then flooded to a given elevation, leaving a fractal rocky coastline. In addition to the simulated terrain, actual digital elevation models will also be used to derive the initial conditions. Elevation data from different mountainous geomorphic settings such as the decaying Appalachian Mountains or actively uplifting Sierra Nevada can be effectively flooded to a given sea level, resulting in a fractal and immature coastline that can be input to the numerical model. This approach will offer insight into how rocky coastlines in different geomorphic settings evolve, and provide a useful complement to results using the simulated terrain.

  19. A new surface-process model for landscape evolution at a mountain belt scale

    Science.gov (United States)

    Willett, Sean D.; Braun, Jean; Herman, Frederic

    2010-05-01

    We present a new surface process model designed for modeling surface erosion and mass transport at an orogenic scale. Modeling surface processes at a large scale is difficult because surface geomorphic processes are frequently described at the scale of a few meters, and such resolution cannot be represented in orogen-scale models operating over hundreds of square kilometers. We circumvent this problem by implementing a hybrid numerical-analytical model. Like many previous models, the model is based on a numerical fluvial network represented by a series of nodes linked by model rivers in a descending network, with fluvial incision and sediment transport defined by laws operating on this network. However, we represent only the largest rivers in the landscape by nodes in this model. Low-order rivers and water divides between large rivers are determined from analytical solutions assuming steady-state conditions with respect to the local river channel. The analytical solution includes the same fluvial incision law as the large rivers and a channel head with a specified size and mean slope. This permits a precise representation of the position of water divides between river basins. This is a key characteristic in landscape evolution as divide migration provides a positive feedback between river incision and a consequent increase in drainage area. The analytical solution also provides an explicit criterion for river capture, which occurs once a water divide migrates to its neighboring channel. This algorithm avoids the artificial network organization that often results from meshing and remeshing algorithms in numerical models. We demonstrate the use of this model with several simple examples including uniform uplift of a block, simultaneous uplift and shortening of a block, and a model involving strike slip faulting. We find a strong dependence on initial condition, but also a surprisingly strong dependence on channel head height parameters. Low channel heads, as

  20. On the random cascading model study of anomalous scaling in multiparticle production with continuously diminishing scale

    International Nuclear Information System (INIS)

    Liu Lianshou; Zhang Yang; Wu Yuanfang

    1996-01-01

    The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently used have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which also gives good scaling properties for a continuously varying scale. It turns out that the strip integral has good scaling properties provided the integral regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)
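A minimal numerical illustration of a random cascading model and the resulting growth of factorial moments with resolution (intermittency) might look as follows. The two-valued weight distribution and all sizes are assumptions for illustration, not the revised model of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def cascade_weights(levels, w_lo=0.6, w_hi=1.4):
    """One random-cascading event: split phase space 'levels' times; each cell
    inherits its parent weight times a random factor with mean 1."""
    w = np.ones(1)
    for _ in range(levels):
        factors = rng.choice([w_lo, w_hi], size=2 * w.size)
        w = np.repeat(w, 2) * factors
    return w / w.sum()          # normalized bin probabilities

def factorial_moment(counts, q):
    """Horizontally averaged factorial moment F_q over bins, averaged over events."""
    f = np.ones_like(counts, dtype=float)
    for i in range(q):
        f *= counts - i         # n(n-1)...(n-q+1) per bin
    per_event = f.mean(axis=1) / counts.mean(axis=1) ** q
    return per_event.mean()

levels, n_particles, n_events = 8, 400, 2000
probs = np.array([cascade_weights(levels) for _ in range(n_events)])
counts = np.array([rng.multinomial(n_particles, p) for p in probs])

# Intermittency: F_2 grows as the resolution (number of bins M = 2^levels) increases.
f2_fine = factorial_moment(counts, 2)
coarse = counts.reshape(n_events, -1, 4).sum(axis=2)   # merge 4 bins -> coarser scale
f2_coarse = factorial_moment(coarse, 2)
print(f"F2 at M=256 bins: {f2_fine:.3f}, at M=64 bins: {f2_coarse:.3f}")
```

For a flat (non-cascading) distribution F_2 stays near 1 at all resolutions, whereas the multiplicative cascade makes it rise with diminishing cell size, which is the anomalous scaling the abstract refers to.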

  1. Factors affecting economies of scale in combined sewer systems.

    Science.gov (United States)

    Maurer, Max; Wolfram, Martin; Herlyn, Anja

    2010-01-01

    A generic model is introduced that represents the combined sewer infrastructure of a settlement quantitatively. A catchment area module first calculates the length and size distribution of the required sewer pipes on the basis of rain patterns, housing densities and area size. These results are fed into the sewer-cost module in order to estimate the combined sewer costs of the entire catchment area. A detailed analysis of the relevant input parameters for Swiss settlements is used to identify the influence of size on costs. The simulation results confirm that an economy of scale exists for combined sewer systems. This is the result of two main opposing cost factors: (i) increased construction costs for larger sewer systems due to larger pipes and increased rain runoff in larger settlements, and (ii) lower costs due to higher population and building densities in larger towns. In Switzerland, the more or less organically grown settlement structures and limited land availability emphasise the second factor to show an apparent economy of scale. This modelling approach proved to be a powerful tool for understanding the underlying factors affecting the cost structure for water infrastructures.
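The interplay of the two opposing cost factors can be illustrated with a deliberately simple toy model: density rises with settlement size (shortening the network per capita) while pipe diameters grow with the drained area (raising the cost per metre). Every functional form and coefficient below is an assumption for illustration, not the generic model of the paper:

```python
def per_capita_sewer_cost(population):
    """Toy settlement model (all relationships assumed for illustration)."""
    density = 10 * population ** 0.2           # persons per hectare: denser towns
    area_ha = population / density             # settlement area, hectares
    length_m = 150 * area_ha                   # network length scales with area
    diameter_m = 0.25 * (area_ha / 10) ** 0.3  # larger catchment -> larger pipes
    unit_cost = 500 + 2000 * diameter_m        # cost per metre rises with diameter
    return length_m * unit_cost / population

for pop in (1_000, 10_000, 100_000):
    print(f"{pop:>7} inhabitants: {per_capita_sewer_cost(pop):7.0f} per capita")
```

With these assumed coefficients the density effect dominates, so per-capita cost falls with settlement size, reproducing qualitatively the apparent economy of scale the abstract describes; reversing the balance of the two exponents would make it disappear.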

  2. Using scale heights derived from bottomside ionograms for modelling the IRI topside profile

    Directory of Open Access Journals (Sweden)

    B. W. Reinisch

    2004-01-01

    Full Text Available Ground-based ionograms measure the Chapman scale height HT at the F2-layer peak, which is used to construct the topside profile. After a brief review of the topside model extrapolation technique, comparisons are presented between the modeled profiles and incoherent scatter radar and satellite measurements for the mid-latitude and equatorial ionosphere. The total electron content (TEC), derived from measurements of satellite beacon signals, is compared with the height-integrated profiles (ITEC) from the ionograms. Good agreement is found with the ISR profiles and with results using the low-altitude TOPEX satellite. The TEC values derived from GPS signal analysis are systematically larger than ITEC. It is suggested to use HT, routinely measured by a large number of Digisondes around the globe, for the construction of the IRI topside electron density profile.
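The topside extrapolation from a measured scale height is commonly based on the alpha-Chapman layer form. A minimal sketch with assumed (not measured) peak parameters:

```python
import math

def chapman_topside(h_km, nmf2, hmf2_km, scale_height_km):
    """Alpha-Chapman layer: Ne(h) = NmF2 * exp(0.5 * (1 - z - exp(-z))),
    with z = (h - hmF2) / H.  Peaks at NmF2 when h = hmF2."""
    z = (h_km - hmf2_km) / scale_height_km
    return nmf2 * math.exp(0.5 * (1.0 - z - math.exp(-z)))

# Illustrative mid-latitude daytime values (assumed for this sketch):
nmf2, hmf2, H = 1.0e12, 300.0, 50.0     # el/m^3, km, km

# Crude topside TEC by trapezoidal integration from the peak up to 1000 km.
hs = [hmf2 + i for i in range(0, 701)]                   # 1 km steps
ne = [chapman_topside(h, nmf2, hmf2, H) for h in hs]
tec = sum(0.5 * (ne[i] + ne[i + 1]) for i in range(len(ne) - 1)) * 1e3  # m^-2
print(f"Ne at peak: {ne[0]:.2e} el/m^3, topside TEC ~ {tec / 1e16:.1f} TECU")
```

Because the profile above the peak is fully determined by NmF2, hmF2 and H, a scale height measured from bottomside ionograms fixes the whole topside profile and hence the ionogram-derived ITEC compared with beacon TEC in the abstract.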

  3. Multi-scale modeling for sustainable chemical production

    DEFF Research Database (Denmark)

    Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus

    2013-01-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process...

  4. Qualitatively Modeling solute fate and transport across scales in an agricultural catchment with diverse lithology

    Science.gov (United States)

    Wayman, C. R.; Russo, T. A.; Li, L.; Forsythe, B.; Hoagland, B.

    2017-12-01

    As part of the Susquehanna Shale Hills Critical Zone Observatory (SSHCZO) project, we have collected geochemical and hydrological data from several subcatchments and four monitoring sites on the main stem of Shaver's Creek in Huntingdon County, Pennsylvania. One subcatchment (0.43 km²) is under agricultural land use, and the monitoring locations on the larger Shaver's Creek (up to 163 km²) drain watersheds with 0 to 25% agricultural area. These two scales of investigation, coupled with advances made across the SSHCZO on multiple lithologies, allow us to extrapolate from the subcatchment to the larger watershed. We use geochemical surface and groundwater data to estimate the solute and water transport regimes within the catchment, and to show how lithology and land use are major controls on ground and surface water quality. One area of investigation includes the transport of nutrients between interflow and regional groundwater, and how that connectivity may be reflected in local surface waters. Water and nutrient (nitrogen) isotopes will be used to better understand the relative contributions of local and regional groundwater and interflow fluxes into nearby streams. Following initial qualitative modeling, multiple hydrologic and nutrient transport models (e.g. SWAT and CYCLES/PIHM) will be evaluated from the subcatchment to large watershed scales. We will evaluate the ability to simulate the contributions of regional groundwater versus local groundwater, as well as the impacts of agricultural land management on surface water quality. Improving estimates of groundwater contributions to stream discharge will provide insight into how much agricultural development can impact stream quality and nutrient loading.

  5. Large-scale modeling on the fate and transport of polycyclic aromatic hydrocarbons (PAHs) in multimedia over China

    Science.gov (United States)

    Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.

    2017-12-01

    even larger geographical domains. Keywords: PAHs; Community multi-scale air quality model; Multimedia fate model; Land use

  6. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  7. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Full Text Available Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  8. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  9. Investigation of Larger Poly(α-Methylstyrene) Mandrels for High Gain Designs Using Microencapsulation

    International Nuclear Information System (INIS)

    Takagi, Masaru; Cook, Robert; McQuillan, Barry; Gibson, Jane; Paguio, Sally

    2004-01-01

    In recent years we have demonstrated that 2-mm-diameter poly(α-methylstyrene) mandrels meeting indirect drive NIF surface symmetry specifications can be produced using microencapsulation methods. Recently, higher gain target designs have been introduced that rely on frequency doubled (green) laser energy and require capsules up to 4 mm in diameter, nominally meeting the same surface finish and symmetry requirements as the existing 2-mm-diameter capsule designs. Direct drive on the NIF also requires larger capsules. In order to evaluate whether the current microencapsulation-based mandrel fabrication techniques will adequately scale to these larger capsules, we have explored extending the techniques to 4-mm-diameter capsules. We find that microencapsulated shells meeting NIF symmetry specifications can be produced; the processing changes necessary to accomplish this are presented here.

  10. Magnetic hysteresis at the domain scale of a multi-scale material model for magneto-elastic behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Vanoost, D., E-mail: dries.vanoost@kuleuven-kulak.be [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); Steentjes, S. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany); Peuteman, J. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Electrical Energy and Computer Architecture, Heverlee B-3001 (Belgium); Gielen, G. [KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); De Gersem, H. [KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); TU Darmstadt, Institut für Theorie Elektromagnetischer Felder, Darmstadt D-64289 (Germany); Pissoort, D. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); Hameyer, K. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany)

    2016-09-15

    This paper proposes a multi-scale energy-based material model for poly-crystalline materials. Describing the behaviour of poly-crystalline materials at three spatial scales of dominating physical mechanisms allows accounting for the heterogeneity and multi-axiality of the material behaviour. The three spatial scales are the poly-crystalline, grain and domain scale. Together with appropriate scale-transition rules and models for local magnetic behaviour at each scale, the model is able to describe the magneto-elastic behaviour (magnetostriction and hysteresis) at the macroscale, although the data input is merely based on a set of physical constants. By introducing a new energy density function that describes the demagnetisation field, the anhysteretic multi-scale energy-based material model is extended to the hysteretic case. The hysteresis behaviour is included at the domain scale according to the micro-magnetic domain theory while preserving a valid description for the magneto-elastic coupling. The model is verified using existing measurement data for different mechanical stress levels. - Highlights: • A ferromagnetic hysteretic energy-based multi-scale material model is proposed. • The hysteresis is obtained by a newly proposed hysteresis energy density function. • The model avoids tedious parameter identification.

  11. Catchment-Scale Terrain Modelling with Structure-from-Motion Photogrammetry: a replacement for airborne lidar?

    Science.gov (United States)

    Brasington, James; James, Joe; Cook, Simon; Cox, Simon; Lotsari, Eliisa; McColl, Sam; Lehane, Niall; Williams, Richard; Vericat, Damia

    2016-04-01

    In recent years, 3D terrain reconstructions based on Structure-from-Motion photogrammetry have dramatically democratized the availability of high quality topographic data. This approach involves the use of a non-linear bundle adjustment to estimate simultaneously camera position, pose, distortion and 3D model coordinates. In contrast to traditional aerial photogrammetry, the bundle adjustment is typically solved without external constraints and instead ground control is used a posteriori to transform the modelled coordinates to an established datum using a similarity transformation. The limited data requirements, coupled with the ability to self-calibrate compact cameras, has led to a burgeoning of applications using low-cost imagery acquired terrestrially or from low-altitude platforms. To date, most applications have focused on relatively small spatial scales (0.1–5 ha), where relaxed logistics permit the use of dense ground control networks and high resolution, close-range photography. It is less clear whether this low-cost approach can be successfully upscaled to tackle larger, watershed-scale projects extending over 10²–10³ km², where it could offer a competitive alternative to established landscape modelling with airborne lidar. At such scales, compromises over the density of ground control, the speed and height of sensor platform and related image properties are inevitable. In this presentation we provide a systematic assessment of the quality of large-scale SfM terrain products derived for over 80 km² of the braided Dart River and its catchment in the Southern Alps of NZ. Reference data in the form of airborne and terrestrial lidar are used to quantify the quality of 3D reconstructions derived from helicopter photography and used to establish baseline uncertainty models for geomorphic change detection. Results indicate that camera network design is a key determinant of model quality, and that standard aerial photogrammetric networks based on strips of nadir
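    The a posteriori georeferencing step described in this abstract, transforming model coordinates to an established datum with a similarity transformation fitted to ground control points, can be sketched with the closed-form Umeyama solution. This is an illustrative sketch, not code from the study; the function name and test geometry are assumptions.

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form (Umeyama) estimate of scale c, rotation R and
    translation t minimising sum_i ||c * R @ x_i + t - y_i||^2
    over paired 3D points (src -> model frame, dst -> datum frame)."""
    n = src.shape[0]
    mu_x, mu_y = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_x, dst - mu_y
    cov = B.T @ A / n                          # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # guard against reflections
    R = U @ S @ Vt
    c = np.trace(np.diag(D) @ S) / ((A ** 2).sum() / n)   # scale factor
    t = mu_y - c * R @ mu_x
    return c, R, t
```

    In practice the paired points would be ground control targets surveyed in the datum and identified in the SfM model; the recovered (c, R, t) is then applied to every model coordinate.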

  12. Nonpointlike-parton model with asymptotic scaling and with scaling violation at moderate Q² values

    International Nuclear Information System (INIS)

    Chen, C.K.

    1981-01-01

    A nonpointlike-parton model is formulated on the basis of the assumption of energy-independent total cross sections of partons and the current-algebra sum rules. No specific strong-interaction Lagrangian density is introduced in this approach. This model predicts asymptotic scaling for the inelastic structure functions of nucleons on the one hand and scaling violation at moderate Q² values on the other hand. The predicted scaling-violation patterns at moderate Q² values are consistent with the observed scaling-violation patterns. A numerical fit of the F₂ functions is performed in order to demonstrate that the predicted scaling-violation patterns of this model at moderate Q² values fit the data, and to see how the predicted asymptotic scaling behavior sets in at various x values. Explicit analytic forms of the F₂ functions are obtained from this numerical fit and are compared in detail with the analytic forms of the F₂ functions obtained from the numerical fit of the quantum-chromodynamics (QCD) parton model. This comparison shows that this nonpointlike-parton model fits the data better than the QCD parton model, especially at large and small x values. Nachtmann moments are computed from the F₂ functions of this model and are shown to agree well with the data. It is also shown that the two-dimensional plot of the logarithm of a nonsinglet moment versus the logarithm of another such moment is not a good way to distinguish this nonpointlike-parton model from the QCD parton model

  13. Upscaling of Long-Term U(VI) Desorption from Pore Scale Kinetics to Field-Scale Reactive Transport Models

    Energy Technology Data Exchange (ETDEWEB)

    Andy Miller

    2009-01-25

    Environmental systems exhibit a range of complexities which exist at a range of length and mass scales. Within the realm of radionuclide fate and transport, much work has been focused on understanding pore scale processes where complexity can be reduced to a simplified system. In describing larger scale behavior, the results from these simplified systems must be combined to create a theory of the whole. This process can be quite complex, and lead to models which lack transparency. The underlying assumption of this approach is that complex systems will exhibit complex behavior, requiring a complex system of equations to describe behavior. This assumption has never been tested. The goal of the experiments presented is to ask the question: Do increasingly complex systems show increasingly complex behavior? Three experimental tanks at the intermediate scale (Tank 1: 2.4m x 1.2m x 7.6cm, Tank 2: 2.4m x 0.61m x 7.6cm, Tank 3: 2.4m x 0.61m x 0.61m (LxHxW)) have been completed. These tanks were packed with various physical orientations of different particle sizes of a uranium contaminated sediment from a former uranium mill near Naturita, Colorado. Steady state water flow was induced across the tanks using constant head boundaries. Pore water was removed from within the flow domain through sampling ports/wells; effluent samples were also taken. Each sample was analyzed for a variety of analytes relating to the solubility and transport of uranium. Flow fields were characterized using inert tracers and direct measurements of pressure head. The results show that although there is a wide range of chemical variability within the flow domain of the tank, the effluent uranium behavior is simple enough to be described using a variety of conceptual models. Thus, although there is a wide range in variability caused by pore scale behaviors, these behaviors appear to be smoothed out as uranium is transported through the tank. This smoothing of uranium transport behavior transcends

  14. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost, in particular, limits its use. As computer models have grown in size (e.g., number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition to avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  15. Comments on intermediate-scale models

    International Nuclear Information System (INIS)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-01-01

    Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10¹⁶ GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory. (orig.)

  16. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    Science.gov (United States)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    limited in view of the integrations having been done only for 10-day forecasts. Even so, one should note that they are among very few done using forecast as opposed to reanalysis or analysis global driving data. Our results suggest that (1) when running the Eta as an RCM, no significant loss of large-scale kinetic energy with time seems to be taking place; (2) no disadvantage from using the Eta LBC scheme compared to the relaxation scheme is seen, while enjoying the advantage of the scheme being significantly less demanding than the relaxation given that it needs driver model fields at the outermost domain boundary only; and (3) the Eta RCM skill in forecasting large scales, with no large-scale nudging, seems to be just about the same as that of the driver model, or, in the terminology of Castro et al., the Eta RCM does not lose "value of the large scale" which exists in the larger global analyses used for the initial condition and for verification.

  17. Cross-scale intercomparison of climate change impacts simulated by regional and global hydrological models in eleven large river basins

    Energy Technology Data Exchange (ETDEWEB)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Flörke, M.; Huang, S.; Motovilov, Y.; Buda, S.; Yang, T.; Müller, C.; Leng, G.; Tang, Q.; Portmann, F. T.; Hagemann, S.; Gerten, D.; Wada, Y.; Masaki, Y.; Alemayehu, T.; Satoh, Y.; Samaniego, L.

    2017-01-04

    Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.

  18. Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations

    Science.gov (United States)

    Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.

    2011-12-01

    HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of
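    The reach routing that this abstract describes is a modified Muskingum-Cunge scheme; the classic Muskingum recurrence it builds on can be sketched as follows. This is a generic textbook sketch, not the HydroSCOPE implementation, and the function name and parameter values are illustrative.

```python
def muskingum_route(inflow, K, X, dt, q0=None):
    """Route an inflow hydrograph through a single reach with the
    classic Muskingum scheme: O2 = c0*I2 + c1*I1 + c2*O1.
    K is the reach travel time, X the storage weighting factor
    (0 <= X <= 0.5), dt the time step (same units as K)."""
    D = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / D
    c1 = (dt + 2.0 * K * X) / D
    c2 = (2.0 * K * (1.0 - X) - dt) / D   # c0 + c1 + c2 == 1
    out = [inflow[0] if q0 is None else q0]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out
```

    Because the three coefficients sum to one, a steady inflow passes through unchanged; the Muskingum-Cunge variant additionally derives K and X from channel geometry and adapts the internal time step, as the abstract notes.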

  19. Demonstrating the Uneven Importance of Fine-Scale Forest Structure on Snow Distributions using High Resolution Modeling

    Science.gov (United States)

    Broxton, P. D.; Harpold, A. A.; van Leeuwen, W.; Biederman, J. A.

    2016-12-01

    Quantifying the amount of snow in forested mountainous environments, as well as how it may change due to warming and forest disturbance, is critical given its importance for water supply and ecosystem health. Forest canopies affect snow accumulation and ablation in ways that are difficult to observe and model. Furthermore, fine-scale forest structure can accentuate or diminish the effects of forest-snow interactions. Despite decades of research demonstrating the importance of fine-scale forest structure (e.g. canopy edges and gaps) on snow, we still lack a comprehensive understanding of where and when forest structure has the largest impact on snowpack mass and energy budgets. Here, we use a hyper-resolution (1 m) mass and energy balance snow model, Snow Physics and Laser Mapping (SnowPALM), along with LIDAR-derived forest structure to determine where spatial variability of fine-scale forest structure has the largest influence on large-scale mass and energy budgets. SnowPALM was set up and calibrated at sites representing diverse climates in New Mexico, Arizona, and California. We then compared simulations at different model resolutions (i.e. 1, 10, and 100 m) to elucidate the effects of including versus not including information about fine-scale canopy structure. These experiments were repeated for different prescribed topographies (i.e. flat, 30% north-facing slope, and 30% south-facing slope) at each site. Higher resolution simulations had more snow at lower canopy cover, with the opposite being true at high canopy cover. Furthermore, there is considerable scatter, indicating that different canopy arrangements can lead to different amounts of snow, even when the overall canopy coverage is the same.
This modeling is contributing to the development of a high resolution machine learning algorithm called the Snow Water Artificial Network (SWANN) model to generate predictions of snow distributions over much larger domains, which has implications

  20. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) model and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to the model. Then, complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.

  1. Representing soakaways in a physically distributed urban drainage model – Upscaling individual allotments to an aggregated scale

    DEFF Research Database (Denmark)

    Roldin, Maria Kerstin; Mark, Ole; Kuczera, George

    2012-01-01

    The increased load on urban stormwater systems due to climate change and growing urbanization can be partly alleviated by using soakaways and similar infiltration techniques. However, while soakaways are usually small-scale structures, most urban drainage network models operate on a larger spatial scale. ... the infiltration rate based on water depth and soil properties for each time step, and controls the removal of water from the urban drainage model. The model is intended to be used to assess the impact of soakaways on urban drainage networks. The model is tested using field data and shown to simulate the behavior of individual soakaways well. Six upscaling methods to aggregate individual soakaway units with varying saturated hydraulic conductivity (K) in the surrounding soil have been investigated. In the upscaled model, the weighted geometric mean hydraulic conductivity of individual allotments is found to provide...
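    The aggregation rule this abstract reports, a weighted geometric mean of the saturated hydraulic conductivities of individual allotments, can be sketched as follows. The function name, weights, and example values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weighted_geometric_mean_k(k_values, weights):
    """Aggregate saturated hydraulic conductivities (K) of individual
    allotments into a single effective value via a weighted geometric
    mean, computed in log space for numerical robustness."""
    k = np.asarray(k_values, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalise the weights
    return float(np.exp(np.sum(w * np.log(k))))
```

    For three allotments with K of 1e-6, 1e-5 and 1e-4 m/s and equal weights, the effective K is 1e-5 m/s; unlike an arithmetic mean, the geometric mean is not dominated by the most permeable unit.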

  2. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    FéRal, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, JoëL.; Cornet, FréDéRic; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
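    The large-scale step this abstract describes, thresholding a correlated Gaussian field into a binary raining/non-raining mask with a prescribed rain occupation rate, can be sketched as follows. The spectral-smoothing construction, isotropic covariance, grid size and occupation rate here are all illustrative assumptions; the paper uses an anisotropic covariance and radar-derived occupation statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_field(n, corr_len):
    """Generate an n x n correlated Gaussian field by low-pass
    filtering white noise in Fourier space, then standardising it
    to zero mean and unit variance."""
    white = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    kernel = np.exp(-2.0 * (np.pi * corr_len) ** 2 * (kx ** 2 + ky ** 2))
    field = np.fft.ifft2(np.fft.fft2(white) * kernel).real
    return (field - field.mean()) / field.std()

def binarize(field, occupation_rate):
    """Threshold the field so that a fraction `occupation_rate`
    of the area is classified as raining."""
    threshold = np.quantile(field, 1.0 - occupation_rate)
    return field > threshold

g = gaussian_field(256, corr_len=8.0)
rain_mask = binarize(g, occupation_rate=0.1)
```

    Because the threshold is taken as a quantile of the simulated field, the raining fraction matches the prescribed occupation rate while the mask inherits the spatial correlation of the underlying Gaussian field.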

  3. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    Science.gov (United States)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model, the logical next step in model development, has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid-scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered as an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced

  4. Transdisciplinary application of the cross-scale resilience model

    Science.gov (United States)

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  5. Topology optimization for nano-scale heat transfer

    DEFF Research Database (Denmark)

    Evgrafov, Anton; Maute, Kurt; Yang, Ronggui

    2009-01-01

    We consider the problem of optimal design of nano-scale heat conducting systems using topology optimization techniques. At such small scales the empirical Fourier's law of heat conduction no longer captures the underlying physical phenomena because the mean-free path of the heat carriers, phonons...... in our case, becomes comparable with, or even larger than, the feature sizes of considered material distributions. A more accurate model at nano-scales is given by kinetic theory, which provides a compromise between the inaccurate Fourier's law and precise, but too computationally expensive, atomistic...

  6. Comments on intermediate-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-04-23

    Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory.

  7. Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications

    DEFF Research Database (Denmark)

    Liu, Yubao; Warner, Tom; Liu, Yuewei

    2011-01-01

    This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center f...

  8. Preparing for Exascale: Towards convection-permitting, global atmospheric simulations with the Model for Prediction Across Scales (MPAS)

    Science.gov (United States)

    Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald

    2017-04-01

    With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders of magnitude more cores than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings, with respect to parallel scalability, numerical accuracy and physical consistency, of global models on regular grids and of limited-area models nested in a forcing data set. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core Architecture, the Intel Xeon Phi Knights Landing.
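    Parallel efficiency, the figure of merit behind the half-million-core claim, has a simple definition: speedup divided by the increase in core count. The timings below are invented for illustration; only the definition is standard.

```python
def parallel_efficiency(t_ref, p_ref, t_p, p):
    """Strong-scaling parallel efficiency relative to a reference run:
    (t_ref / t_p) is the speedup, (p / p_ref) the core-count ratio."""
    speedup = t_ref / t_p
    return speedup / (p / p_ref)

# Hypothetical timings when doubling cores from 250k to 500k:
eff = parallel_efficiency(t_ref=100.0, p_ref=250_000, t_p=55.0, p=500_000)
```

    An efficiency near 1.0 means the extra cores are almost fully used; here the hypothetical run retains about 91% efficiency.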

  9. Modelling rapid subsurface flow at the hillslope scale with explicit representation of preferential flow paths

    Science.gov (United States)

    Wienhöfer, J.; Zehe, E.

    2012-04-01

    produced acceptable matches to the observed behaviour. These setups were selected for long-term simulation, the results of which were compared against water level measurements at two piezometers along the hillslope and the integral discharge response of the spring to reject some non-behavioural model setups and further reduce equifinality. The results of this study indicate that process-based modelling can provide a means to distinguish preferential flow networks on the hillslope scale when complementary measurements to constrain the range of behavioural model setups are available. These models can further be employed as a virtual reality to investigate the characteristics of flow path architectures and explore effective parameterisations for larger scale applications.

  10. Analysis of scaled-factorial-moment data

    International Nuclear Information System (INIS)

    Seibert, D.

    1990-01-01

    We discuss the two standard constructions used in the search for intermittency, the exclusive and inclusive scaled factorial moments. We propose the use of a new scaled factorial moment that reduces to the exclusive moment in the appropriate limit and is free of undesirable multiplicity correlations that are contained in the inclusive moment. We show that there are some similarities among most of the models that have been proposed to explain factorial-moment data, and that these similarities can be used to increase the efficiency of testing these models. We begin by calculating factorial moments from a simple independent-cluster model that assumes only approximate boost invariance of the cluster rapidity distribution and an approximate relation among the moments of the cluster multiplicity distribution. We find two scaling laws that are essentially model independent. The first scaling law relates the moments to each other with a simple formula, indicating that the different factorial moments are not independent. The second scaling law relates samples with different rapidity densities. We find evidence for much larger clusters in heavy-ion data than in light-ion data, indicating possible spatial intermittency in the heavy-ion events
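    The scaled factorial moments discussed here have a compact definition, F_q = <n(n-1)...(n-q+1)> / <n>^q, averaged over events for each rapidity bin. A minimal sketch, with invented sample multiplicities:

```python
from statistics import mean

def factorial_moment(mults, q):
    """Scaled factorial moment F_q = <n(n-1)...(n-q+1)> / <n>^q
    for a list of per-bin multiplicities n over many events."""
    def falling(n, q):
        prod = 1
        for k in range(q):
            prod *= (n - k)  # n, n-1, ..., n-q+1
        return prod
    return mean(falling(n, q) for n in mults) / mean(mults) ** q

# A fixed multiplicity n = 4 gives F_2 = <4*3> / <4>^2 = 12/16 = 0.75;
# Poissonian multiplicities would give F_q = 1 in expectation.
f2_fixed = factorial_moment([4, 4, 4, 4], 2)
```

    Deviations of F_q above 1 as the bin width shrinks are the intermittency signal the abstract refers to.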

  11. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

    At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high frequency weather to low frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts the weather is a high frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts, and these show some skill even at decadal scales. We also compare
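    The "enormous stochastic memory" of fGn is visible in its autocorrelation function, which for Hurst exponent H is rho(k) = 0.5(|k+1|^2H - 2|k|^2H + |k-1|^2H). This is the textbook formula, not SLIM itself; the lags and H values below are illustrative.

```python
def fgn_autocorr(k, H):
    """Autocorrelation of fractional Gaussian noise at integer lag k for
    Hurst exponent H: rho(k) = 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)."""
    h2 = 2.0 * H
    return 0.5 * (abs(k + 1) ** h2 - 2 * abs(k) ** h2 + abs(k - 1) ** h2)

# H = 0.5 is memoryless white noise; H > 0.5 retains long-range memory
# that a fractional-order model can exploit for forecasting.
rho_white = fgn_autocorr(10, 0.5)
rho_long = fgn_autocorr(10, 0.9)
```

    For H > 0.5 the correlations decay only as a power law, which is why SLIM can carry skill out to decadal scales while an integer-order model like LIM cannot.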

  12. Analysis of chromosome aberration data by hybrid-scale models

    International Nuclear Information System (INIS)

    Indrawati, Iwiq; Kumazawa, Shigeru

    2000-02-01

    This paper presents a new methodology for analyzing data of chromosome aberrations, which is useful to understand the characteristics of dose-response relationships and to construct the calibration curves for the biological dosimetry. The hybrid scale combines linear and logarithmic scales on a particular plotting paper, where the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, and these are conveniently called hybrid-scale models. One can systematically select the best-fit model among the nine models by examining the conditions for a straight line through the data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid-scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of the increased number of model parameters. We showed that the hybrid-hybrid model (both variables of dose and response using the hybrid scale) provides the best-fit straight lines to be used as the reliable and readable calibration curves of chromosome aberrations. (author)
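    The idea of selecting the model whose axis combination straightens the data can be illustrated with the four simplest cases (the full method distinguishes nine hybrid-scale models; the four-way selection and the sample dose-response data below are simplifications, not the paper's procedure):

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def best_scale_model(doses, responses):
    """Try the four simplest axis combinations (linear/log on each axis)
    and return the name of the one with the smallest residual error."""
    transforms = {
        "lin-lin": (lambda v: v, lambda v: v),
        "log-lin": (math.log, lambda v: v),
        "lin-log": (lambda v: v, math.log),
        "log-log": (math.log, math.log),
    }
    scores = {}
    for name, (fx, fy) in transforms.items():
        xs = [fx(d) for d in doses]
        ys = [fy(r) for r in responses]
        scores[name] = fit_line(xs, ys)[2]
    return min(scores, key=scores.get)

# A power-law response y = 2 * d^1.7 is a straight line on log-log axes:
doses = [0.5, 1.0, 2.0, 4.0, 8.0]
best = best_scale_model(doses, [2 * d ** 1.7 for d in doses])
```

    The hybrid scale extends this idea by joining the linear and logarithmic regimes continuously rather than forcing a single choice of axes.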

  13. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from...... small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sunk into the voids between the stones on the crest. For low overtopping scale effects...

  14. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

    Multi-scale climate modelling over Southern Africa using a variable-resolution global model. FA Engelbrecht, WA Landman, CJ Engelbrecht, S Landman, MM Bopape, B Roux, JL McGregor and M Thatcher. Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales. Keywords: multi-scale climate modelling, variable-resolution atmospheric model.

  15. The role of fragmentation mechanism in large-scale vapor explosions

    International Nuclear Information System (INIS)

    Liu, Jie

    2003-01-01

    A non-equilibrium, multi-phase, multi-component code, PROVER-I, is developed for the propagation phase of vapor explosions. Two fragmentation models are used. The hydrodynamic fragmentation model is the same as Fletcher's. A new thermal fragmentation model is proposed with three kinds of time scale, modeling instant fragmentation, spontaneous nucleation fragmentation and normal boiling fragmentation. The role of fragmentation mechanisms is investigated through simulations of pressure wave propagation and the energy conversion ratio of an ex-vessel vapor explosion. Spontaneous nucleation fragmentation results in a much higher pressure peak and a larger energy conversion ratio than hydrodynamic fragmentation. Instant fragmentation gives a slightly larger energy conversion ratio than spontaneous nucleation fragmentation, and normal boiling fragmentation results in a smaller energy conversion ratio. Detailed analysis of the structure of the pressure wave makes it clear that thermal detonation exists only when thermal fragmentation operates. A high energy conversion ratio is obtained at small vapor volume fractions; at larger vapor volume fractions, the vapor explosion is weak. In a large-scale vapor explosion the hydrodynamic fragmentation is essential once the pressure wave becomes strong, so a small energy conversion ratio is expected. (author)

  16. The application of slip length models to larger textures in turbulent flows over superhydrophobic surfaces

    Science.gov (United States)

    Fairhall, Chris; Garcia-Mayoral, Ricardo

    2017-11-01

    We present results from direct numerical simulations of turbulent flows over superhydrophobic surfaces. We assess the validity of simulations where the surface is modelled as homogeneous slip lengths, comparing them to simulations where the surface texture is resolved. Our results show that once the coherent flow induced by the texture is removed from the velocity fields, the remaining flow sees the surface as homogeneous. We then investigate how the overlying turbulence is modified by the presence of surface texture. For small textures, we show that turbulence is shifted closer to the wall due to the presence of slip, but otherwise remains essentially unmodified. For larger textures, the texture interacts with the turbulent lengthscales, thereby modifying the overlying turbulence. We also show that the saturation of the effect of the spanwise slip length (Fukagata et al. 2006, Busse & Sandham 2012, Seo & Mani 2016), which is drag increasing, is caused by the impermeability imposed at the surface. This work was supported by the Engineering and Physical Sciences Research Council.
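    The homogeneous slip-length idealization assessed above can be sketched as follows. The Navier condition is standard; the saturation cap on the spanwise slip length is a crude stand-in for the effect described in the abstract, and all numbers, including the cap, are illustrative rather than measured.

```python
def navier_slip(slip_length, wall_shear_rate):
    """Navier slip condition: wall velocity u_s = l_s * (du/dy) at the wall."""
    return slip_length * wall_shear_rate

def delta_u_plus(lx_plus, lz_plus, lz_cap=4.0):
    """Heuristic log-law shift: streamwise slip (l_x+) is drag-reducing,
    spanwise slip (l_z+) is drag-increasing but its effect saturates for
    large textures. The cap value 4.0 is illustrative only."""
    return lx_plus - min(lz_plus, lz_cap)

shift_small = delta_u_plus(3.0, 2.0)   # small texture: below saturation
shift_large = delta_u_plus(8.0, 9.0)   # large texture: spanwise effect capped
```

    In this toy picture the net benefit of a large texture stops degrading once the spanwise slip effect saturates, qualitatively matching the impermeability argument in the abstract.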

  17. Upscaling of Long-Term U(VI) Desorption from Pore Scale Kinetics to Field-Scale Reactive Transport Models. Final report

    International Nuclear Information System (INIS)

    Miller, Andy

    2009-01-01

    Environmental systems exhibit a range of complexities which exist at a range of length and mass scales. Within the realm of radionuclide fate and transport, much work has been focused on understanding pore scale processes where complexity can be reduced to a simplified system. In describing larger scale behavior, the results from these simplified systems must be combined to create a theory of the whole. This process can be quite complex, and lead to models which lack transparency. The underlying assumption of this approach is that complex systems will exhibit complex behavior, requiring a complex system of equations to describe behavior. This assumption has never been tested. The goal of the experiments presented is to ask the question: Do increasingly complex systems show increasingly complex behavior? Three experimental tanks at the intermediate scale (Tank 1: 2.4m x 1.2m x 7.6cm, Tank 2: 2.4m x 0.61m x 7.6cm, Tank 3: 2.4m x 0.61m x 0.61m (LxHxW)) have been completed. These tanks were packed with various physical orientations of different particle sizes of a uranium contaminated sediment from a former uranium mill near Naturita, Colorado. Steady state water flow was induced across the tanks using constant head boundaries. Pore water was removed from within the flow domain through sampling ports/wells; effluent samples were also taken. Each sample was analyzed for a variety of analytes relating to the solubility and transport of uranium. Flow fields were characterized using inert tracers and direct measurements of pressure head. The results show that although there is a wide range of chemical variability within the flow domain of the tank, the effluent uranium behavior is simple enough to be described using a variety of conceptual models. Thus, although there is a wide range in variability caused by pore scale behaviors, these behaviors appear to be smoothed out as uranium is transported through the tank. This smoothing of uranium transport behavior transcends

  18. Biointerface dynamics--Multi scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

    The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in resistance stress generation, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, on a range of time and space scales, that have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into partial matrix disintegration within the boundary layers is useful for understanding and minimizing polymer matrix resistance stress generation within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small-scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables. Especially variables capturing the small-scale neighbourhood conditions are hard to find. If important explanatory variables are missing from the model, and the omitted variables are spatially autocorrelated and correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki, Finland, we find the motivation for a spatial Durbin model, estimate the model and interpret the estimates for the summary measures of impacts. By the analysis we show that the model structure makes it possible to model and find small-scale neighbourhood effects when we know that they exist but lack proper variables to measure them.
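    The spatial Durbin model motivated here is y = ρWy + Xβ + WXθ + ε, where W is a spatial weight matrix. A minimal sketch of its reduced form, solved by fixed-point iteration; the three-house weight matrix and all coefficients are invented for illustration.

```python
def durbin_reduced_form(W, x, beta, theta, rho, iters=200):
    """Solve y = rho*W*y + x*beta + (W*x)*theta by fixed-point iteration,
    which converges for |rho| < 1 with a row-standardised weight matrix W."""
    n = len(x)
    wx = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    c = [beta * x[i] + theta * wx[i] for i in range(n)]  # X*beta + WX*theta
    y = c[:]
    for _ in range(iters):
        wy = [sum(W[i][j] * y[j] for j in range(n)) for i in range(n)]
        y = [rho * wy[i] + c[i] for i in range(n)]
    return y

# Three houses in a row; each neighbour's price spills over with rho = 0.5:
W = [[0, 1, 0], [0.5, 0, 0.5], [0, 1, 0]]
y = durbin_reduced_form(W, x=[1.0, 2.0, 3.0], beta=1.0, theta=0.4, rho=0.5)
```

    The WX term is what lets neighbours' observed characteristics stand in for the unobserved small-scale neighbourhood variables.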

  20. Materials and nanosystems : interdisciplinary computational modeling at multiple scales

    International Nuclear Information System (INIS)

    Huber, S.E.

    2014-01-01

    Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale-flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large measures, first, an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers shall be discussed. In this respect, very accurate and hence at the same time computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but yield less accuracy and especially reduced error control at the same time. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science and surface science, is another surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of

  1. Genome-scale biological models for industrial microbial systems.

    Science.gov (United States)

    Xu, Nan; Ye, Chao; Liu, Liming

    2018-04-01

    The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, which predict the interactions among genetic material, enzymes, and metabolites, constitute a systematic and comprehensive platform to analyze and optimize the microbial growth and production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their pertinent application in improving industrial microbial fermentation of biological products.
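    The growth-rate predictions mentioned here typically come from flux balance analysis: maximize a biomass flux subject to steady-state mass balances and uptake bounds. Real genome-scale models use a linear-programming solver over thousands of reactions; the three-reaction toy network and brute-force search below are purely illustrative.

```python
def fba_toy(uptake_max=10.0, step=0.1):
    """Toy flux balance analysis on a 3-reaction network:
       v1: A_ext -> A (uptake, bounded), v2: A -> biomass, v3: A -> waste.
    Steady state for metabolite A requires v1 = v2 + v3; we maximise the
    biomass flux v2 by brute force over the feasible region."""
    best = (0.0, 0.0, 0.0)
    n = int(uptake_max / step)
    for i in range(n + 1):
        v1 = i * step
        for j in range(i + 1):
            v2 = j * step
            v3 = v1 - v2            # closes the mass balance on A
            if v2 > best[1]:
                best = (v1, v2, v3)
    return best

v1, v2, v3 = fba_toy()
```

    As expected, the optimum routes all uptake into biomass (v2 = uptake bound, v3 = 0); genome-scale models answer the same question for the full metabolic network.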

  2. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
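    The stated scale of 1:37.6 implies fixed model-to-prototype multipliers under Froude similitude, the usual choice for mooring tests. The multipliers below follow from standard dimensional analysis; the 33 m model length is taken from the abstract.

```python
import math

def froude_scale(length_scale):
    """Froude-similitude multipliers from model to prototype for a
    geometric scale 1:length_scale: lengths scale by s, velocities and
    times by sqrt(s), forces and masses by s^3 (same fluid density)."""
    s = length_scale
    return {"length": s, "velocity": math.sqrt(s),
            "time": math.sqrt(s), "force": s ** 3}

# The 33 m model chain at scale 1:37.6 represents a ~1240 m prototype:
f = froude_scale(37.6)
prototype_length = 33.0 * f["length"]
```

    Note that elastic-wave celerity does not follow automatically from Froude scaling, which is why the specially scaled chain in this experiment is noteworthy.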

  3. Magnetic Modeling of Inflated Low-mass Stars Using Interior Fields No Larger than ˜10 kG

    Science.gov (United States)

    MacDonald, James; Mullan, D. J.

    2017-11-01

    We have previously reported on models of low-mass stars in which the presence of inflated radii is ascribed to magnetic fields that impede the onset of convection. Some of our magneto-convection models have been criticized because, when they were first reported by Mullan & MacDonald, the deep interior fields were found to be very large (50-100 MG). Such large fields are now known to be untenable. For example, Browning et al. used stability arguments to suggest that interior fields in low-mass stars cannot be larger than ˜1 MG. Moreover, 3D models of turbulent stellar dynamos suggest that fields generated in low-mass interiors may be not much stronger than 10-20 kG. In the present paper, we present magneto-convective models of inflated low-mass stars in which the interior fields are not permitted to be stronger than 10 kG. These models are used to fit empirical data for 15 low-mass stars for which precise masses and radii have been measured. We show that our 10 kG magneto-convective models can replicate the empirical radii and effective temperatures for 14 of the stars. In the case of the remaining star (in the Praesepe cluster), two different solutions have been reported in the literature. We find that one of these solutions can be fitted well with our model using the nominal age of Praesepe (800 Myr). However, the second solution cannot be fitted unless the star’s age is assumed to be much younger (˜150 Myr).

  4. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Science.gov (United States)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.
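    The contrast between a uni-variable stage-damage function and a multi-variable loss model can be sketched as follows. The damage curve and the modifier factors are invented for illustration; they do not reproduce BT-FLEMO's actual rules.

```python
def stage_damage(depth_m):
    """Uni-variable stage-damage function: relative building loss as a
    function of inundation depth only (illustrative piecewise curve)."""
    if depth_m <= 0:
        return 0.0
    return min(1.0, 0.15 * depth_m ** 0.5 + 0.05 * depth_m)

def multi_variable_loss(depth_m, contamination, precaution):
    """Multi-variable variant: the depth-only estimate is modified by
    contamination (raises loss) and private precaution (lowers it).
    The factors 1.2 and 0.8 are illustrative assumptions."""
    loss = stage_damage(depth_m)
    loss *= 1.2 if contamination else 1.0
    loss *= 0.8 if precaution else 1.0
    return min(1.0, loss)

base = stage_damage(2.0)
adjusted = multi_variable_loss(2.0, contamination=True, precaution=True)
```

    Up-scaling such a model means every extra input (contamination, precaution) must be estimated area-wide, which is the source of the additional uncertainty the abstract highlights.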

  5. Modeling Hydrodynamics on the Wave Group Scale in Topographically Complex Reef Environments

    Science.gov (United States)

    Reyns, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.

    2016-02-01

Knowledge of the characteristics of waves and of the associated wave-driven currents is important for sediment transport and morphodynamics, nutrient dynamics, and larval dispersion within coral reef ecosystems. Reef-lined coasts differ from sandy beaches in that they have a steep offshore slope, the non-sandy bottom topography is very rough, and the distance between the point of maximum short-wave dissipation and the actual coastline is usually large. At this short-wave breakpoint, long waves are released, and these infragravity (IG) scale motions account for the bulk of the water level variance on the reef flat, in the lagoon and, eventually, through run-up, on the sandy beaches fronting the coast. These IG-dominated water level motions are reinforced during extreme events such as cyclones or swells through larger incident-band wave heights and low-frequency wave resonance on the reef. Recently, a number of hydro(-morpho)dynamic models capable of modeling these IG waves have been applied successfully to morphologically differing reef environments. One of these models is the XBeach model, which is curvilinear in nature. This poses serious problems when trying to model an entire atoll, for example, as it is extremely difficult to build curvilinear grids that are optimal for the simulation of hydrodynamic processes while maintaining the topology in the grid. One solution to this problem of grid connectivity is the use of unstructured grids. We present an implementation of the wave action balance on the wave group scale with feedback to the flow momentum balance, which is the foundation of XBeach, within the framework of the unstructured Delft3D Flexible Mesh model. The model can be run in stationary as well as in instationary mode, and it can be forced by regular waves, time series, or wave spectra. We show how the code is capable of modeling the wave-generated flow at a number of topographically complex reef sites and for a number of

  6. New phenomena in the standard no-scale supergravity model

    CERN Document Server

    Kelley, S; Nanopoulos, Dimitri V; Zichichi, Antonino; Kelley, S; Lopez, J L; Nanopoulos, D V; Zichichi, A

    1994-01-01

We revisit the no-scale mechanism in the context of the simplest no-scale supergravity extension of the Standard Model. This model has the usual five-dimensional parameter space plus an additional parameter ξ_{3/2} ≡ m_{3/2}/m_{1/2}. We show how predictions of the model may be extracted over the whole parameter space. A necessary condition for the potential to be stable is Str M^4 > 0, which is satisfied if m_{3/2} ≲ 2 m_{q̃}. Order of magnitude calculations reveal a no-lose theorem guaranteeing interesting and potentially observable new phenomena in the neutral scalar sector of the theory which would constitute a "smoking gun" of the no-scale mechanism. This new phenomenology is model-independent and divides into three scenarios, depending on the ratio of the weak scale to the vev at the minimum of the no-scale direction. We also calculate the residual vacuum energy at the unification scale (C_0 m^4_{3/2}), and find that in typical models one must require C_0 > 10. Such constrai...

  7. Structural Color Model Based on Surface Morphology of MORPHO Butterfly Wing Scale

    Science.gov (United States)

    Huang, Zhongjia; Cai, Congcong; Wang, Gang; Zhang, Hui; Huttula, Marko; Cao, Wei

    2016-05-01

Color production through structural coloration is created by micrometer and sub-micrometer surface textures which interfere with visible light. The shiny blue of Morpho menelaus is a typical example of structural coloring. Modified from the morphology of the Morpho scale, a structure of regular windows with two side offsets was constructed on glass substrates. Optical properties of the bioinspired structure were studied through numerical simulations of light scattering. Results show that the structure can generate monochromatic light scattering, and that the wavelength of the scattered light is tunable by changing the spacing between window shelves. Compared to the original butterfly model, the modified one scatters over a wider range of azimuthal angles, though a narrower range of polar angles. The present bionic structure is periodically repeated and easy to fabricate. It is hoped that this computational materials design work can inspire future experimental realizations of such a structure in photonics applications.
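The tunability described above (scattered wavelength set by the shelf spacing) can be illustrated with a generic multilayer interference estimate. This is a Bragg-type relation with invented effective-index and spacing values, not figures taken from the record:

```python
import math

# Generic Bragg-type estimate for a layered (shelved) structure:
# constructive reflection near lambda = 2 * n_eff * d * cos(theta) / m.
# n_eff, d and theta below are illustrative assumptions, not paper values.
def bragg_wavelength(d_nm, n_eff=1.2, theta_deg=0.0, order=1):
    return 2.0 * n_eff * d_nm * math.cos(math.radians(theta_deg)) / order

print(bragg_wavelength(190))  # 456.0 nm, in the blue like Morpho scales
```

Increasing the spacing d shifts the reflection peak toward the red, which is the qualitative tuning behavior the abstract describes.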

  8. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  9. The ScaLIng Macroweather Model (SLIMM): using scaling to forecast global-scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-09-01

    On scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; thus, in GCM (general circulation model) macroweather forecasts, the weather is a high-frequency noise. However, neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two-parameter model can perform as well as GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the large stochastic memories that we quantify. Since macroweather temporal (but not spatial) intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the ScaLIng Macroweather Model (SLIMM). SLIMM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the linear inverse modelling - LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes that there is no low-frequency memory, SLIMM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful stochastic forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had disappointing results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent
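The long stochastic memory that SLIMM exploits can be sketched with fractional Gaussian noise. The snippet below is a minimal illustration on synthetic data, not the SLIMM code: it builds the exact fGn autocovariance for an assumed Hurst exponent H and forms the optimal linear one-step forecast from the past:

```python
import numpy as np

def fgn_cov(n, H):
    """Autocovariance matrix of unit-variance fractional Gaussian noise."""
    k = np.arange(n)
    rho = 0.5 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))
    i, j = np.indices((n, n))
    return rho[np.abs(i - j)]

rng = np.random.default_rng(0)
n, H = 200, 0.9               # H > 0.5 gives the long memory exploited here
C = fgn_cov(n + 1, H)
x = np.linalg.cholesky(C) @ rng.standard_normal(n + 1)  # one fGn sample path

# Optimal linear (conditional-Gaussian) one-step forecast of x[n] from x[:n]
w = np.linalg.solve(C[:n, :n], C[:n, n])
forecast = w @ x[:n]
print(forecast, x[n])
```

With H near 1, the forecast weights w decay slowly, which is the "huge memory" the abstract contrasts with integer-order models such as LIM.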

  10. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2016-05-01

Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models applied in standard practice describe complex damage processes with simple approaches such as stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, all the more in up-scaling procedures for meso-scale applications, where the parameters must be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities affected by the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, probabilistic loss models such as BT-FLEMO used in this study, which inherently provide uncertainty information, are the way forward.
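To make the contrast concrete: a stage-damage function maps water depth alone to a loss ratio, while a multi-variable model adds further predictors. The curves below are invented toy functions for illustration only, not FLEMO or BT-FLEMO:

```python
# Toy loss-ratio models (both curves are invented for illustration).
def stage_damage(depth_m):
    """Uni-variable model: loss ratio from water depth alone."""
    return min(1.0, 0.25 * depth_m)

def multi_variable(depth_m, quality):
    """Adds a second predictor, building quality in [0, 1] (1 = robust)."""
    return min(1.0, 0.25 * depth_m * (1.5 - 0.8 * quality))

print(stage_damage(2.0), multi_variable(2.0, 0.9))
```

The second model can distinguish buildings that the first treats identically, which is why multi-variable models tend to show smaller validation errors; the price is that `quality` must now be estimated area-wide when up-scaling.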

  11. Site-scale groundwater flow modelling of Aberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1998-12-01

The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, the Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant-head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the

  12. Site-scale groundwater flow modelling of Aberg

    International Nuclear Information System (INIS)

    Walker, D.; Gylling, B.

    1998-12-01

The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, the Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant-head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the

  13. Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-03-01

To accomplish Federal goals for renewable energy, sustainability, and energy security, large-scale renewable energy projects must be developed and constructed on Federal sites at a significant scale with significant private investment. For the purposes of this Guide, large-scale Federal renewable energy projects are defined as renewable energy facilities larger than 10 megawatts (MW) that are sited on Federal property and lands and typically financed and owned by third parties. The U.S. Department of Energy's Federal Energy Management Program (FEMP) helps Federal agencies meet these goals and assists agency personnel in navigating the complexities of developing such projects and attracting the necessary private capital to complete them. This Guide is intended to provide a general resource that will begin to develop the Federal employee's awareness and understanding of the project developer's operating environment, and the private sector's awareness and understanding of the Federal environment. Because the vast majority of the investment required to meet the goals for large-scale renewable energy projects will come from the private sector, this Guide has been organized to match Federal processes with typical phases of commercial project development. FEMP collaborated with the National Renewable Energy Laboratory (NREL) and professional project developers on this Guide to ensure that Federal projects have key elements recognizable to private sector developers and investors. The main purpose of this Guide is to provide a project development framework that allows the Federal Government, private developers, and investors to work in a coordinated fashion on large-scale renewable energy projects. The framework includes key elements that describe a successful, financially attractive large-scale renewable energy project. This framework begins the translation between the Federal and private sector operating environments. When viewing the overall

  14. Toward resolving model-measurement discrepancies of radon entry into houses

    International Nuclear Information System (INIS)

    Garbesi, K.; Lawrence Berkeley Lab., CA

    1994-10-01

Analysis of the literature indicated that radon transport models significantly and consistently underpredict the advective entry of soil-gas-borne radon into houses. Advective entry is the dominant mechanism resulting in high concentrations of radon indoors. The author investigated the source of the model-measurement discrepancy via carefully controlled field experiments conducted at an experimental basement located in natural soil in Ben Lomond, California. Early experiments at the structure confirmed the existence and magnitude of the model-measurement discrepancy, ensuring that it was not merely an artifact of inherently complex and poorly understood field sites. The measured soil-gas entry rate during structure depressurization was found to be an order of magnitude larger than predicted by a current three-dimensional numerical model of radon transport. The exact magnitude of the discrepancy depends on whether the arithmetic or geometric mean of the small-scale measurements of permeability is used to estimate the effective permeability of the soil. This factor is a critical empirical input to the model and was determined for the Ben Lomond site in the typical fashion, using single-probe static depressurization measurements at multiple locations. The remainder of the dissertation research tests a hypothesis to explain the observed discrepancy: that soil permeability assessed using relatively small-scale probe measurements does not reflect the bulk soil permeability for flows likely to occur at larger scales of several meters or more in real houses and in the test structure. The idea is that, as flows occur over larger scales, larger scales of soil heterogeneity are encountered that facilitate larger flux rates, resulting in a scale dependence of the effective soil permeability.

  15. Investigation of the large scale regional hydrogeological situation at Ceberg

    International Nuclear Information System (INIS)

    Boghammar, A.; Grundfelt, B.; Hartley, L.

    1997-11-01

The present study forms part of the large-scale groundwater flow studies within the SR 97 project. The site of interest is Ceberg. Within the present study two regional-scale groundwater models have been constructed: one large regional model with an areal extent of about 300 km² and one semi-regional model with an areal extent of about 50 km². Different types of boundary conditions have been applied to the models: topography-driven pressures, constant infiltration rates, non-linear infiltration combined with specified-pressure boundary conditions, and transfer of groundwater pressures from the larger model to the semi-regional model. The present model has shown that: - Groundwater flow paths are mainly local. Large-scale groundwater flow paths are only seen below the depth of the hypothetical repository (below 500 meters) and are very slow. - Locations of recharge and discharge, to and from the site area, are in the close vicinity of the site. - The low contrast between major structures and the rock mass means that the factor having the major effect on the flow paths is the topography. - A model sufficiently large to incorporate the recharge and discharge areas of the local site is on the order of kilometres. - A uniform infiltration-rate boundary condition does not give a good representation of the groundwater movements in the model. - A local site model may be located to cover the site area and a few kilometres of the surrounding region. In order to incorporate all recharge and discharge areas within the site model, the model will be somewhat larger than site-scale models at other sites. This is caused by the fact that the discharge areas are divided into three distinct areas to the east, south and west of the site. - Boundary conditions may be supplied to the site model by means of transferring groundwater pressures obtained with the semi-regional model.

  16. The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y

    2002-01-01

The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of the thermal hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated against a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes and (2) a fully coupled TH model of the repository that includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six "submodels" which are combined in a manner that reduces the complexity of the modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach: first, the execution of the submodels, and second, the assembly of the submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages including a mountain-scale influence.

  17. Scaled Experimental Modeling of VHTR Plenum Flows

    Energy Technology Data Exchange (ETDEWEB)

    ICONE 15

    2007-04-01

The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Power (NGNP) Project in the U.S. which has the goal of demonstrating the production of emissions free electricity and hydrogen by 2015. Various scaled heated gas and water flow facilities were investigated for modeling VHTR upper and lower plenum flows during the decay heat portion of a pressurized conduction-cooldown scenario and for modeling thermal mixing and stratification (“thermal striping”) in the lower plenum during normal operation. It was concluded, based on phenomena scaling and instrumentation and other practical considerations, that a heated water flow scale model facility is preferable to a heated gas flow facility and to unheated facilities which use fluids with ranges of density to simulate the density effect of heating. For a heated water flow lower plenum model, both the Richardson numbers and Reynolds numbers may be approximately matched for conduction-cooldown natural circulation conditions. Thermal mixing during normal operation may be simulated but at lower, but still fully turbulent, Reynolds numbers than in the prototype. Natural circulation flows in the upper plenum may also be simulated in a separate heated water flow facility that uses the same plumbing as the lower plenum model. However, Reynolds number scaling distortions will occur at matching Richardson numbers due primarily to the necessity of using a reduced number of channels connected to the plenum than in the prototype (which has approximately 11,000 core channels connected to the upper plenum) in an otherwise geometrically scaled model. Experiments conducted in either or both facilities will meet the objectives of providing benchmark data for the validation of codes proposed for NGNP designs and safety studies, as well as providing a better understanding of the complex flow phenomena in the plenums.
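The Richardson/Reynolds matching argument above can be sketched numerically. All fluid properties, temperatures, and dimensions below are rough illustrative assumptions (not NGNP design values): pick the model velocity so the Richardson number matches the prototype, then check the resulting Reynolds number.

```python
import math

g = 9.81  # m/s^2

def richardson(beta, dT, L, U):
    """Ri = g * beta * dT * L / U**2 (buoyancy vs inertia)."""
    return g * beta * dT * L / U**2

def reynolds(U, L, nu):
    """Re = U * L / nu (inertia vs viscosity)."""
    return U * L / nu

# Hypothetical gas-cooled prototype plenum (illustrative numbers only)
Ri_p = richardson(beta=1.1e-3, dT=100.0, L=2.0, U=0.5)

# 1:10 heated water model: choose U_m so that the Richardson number matches
L_m = 0.2
beta_w, dT_w, nu_w = 3.0e-4, 30.0, 8.0e-7
U_m = math.sqrt(g * beta_w * dT_w * L_m / Ri_p)

print(Ri_p, reynolds(U_m, L_m, nu_w))
```

Because water's kinematic viscosity is far smaller than a hot gas's, the model Reynolds number stays in the turbulent range even at matched Richardson number, which is the key advantage the abstract cites for a heated water facility.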

  18. A statistical forecast model using the time-scale decomposition technique to predict rainfall during flood period over the middle and lower reaches of the Yangtze River Valley

    Science.gov (United States)

    Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao

    2018-04-01

In this paper, a statistical forecast model using the time-scale decomposition method is established for the seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). The method decomposes the rainfall over the MLYRV into three time-scale components, namely, the interannual component with periods shorter than 8 years, the interdecadal component with periods from 8 to 30 years, and the multidecadal component with periods longer than 30 years. Then, predictors are selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. The hindcast of FPR over the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
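The decompose-then-regress idea can be sketched as follows. The running-mean band split below is a stand-in for whatever filtering the authors used, and the window lengths are assumptions chosen only to mimic the 8- and 30-year cutoffs:

```python
import numpy as np

def split_scales(x, w1=9, w2=31):
    """Split a yearly series into fast (<~8 yr), intermediate (~8-30 yr)
    and slow (>~30 yr) components via nested running means (illustrative)."""
    def runmean(v, w):
        return np.convolve(v, np.ones(w) / w, mode="same")
    slow = runmean(x, w2)
    mid = runmean(x, w1) - slow
    fast = x - runmean(x, w1)
    return fast, mid, slow

rng = np.random.default_rng(2)
t = np.arange(120)
x = 0.02 * t + np.sin(2 * np.pi * t / 5) + 0.1 * rng.standard_normal(120)
fast, mid, slow = split_scales(x)

# Each component would then be regressed on its own predictors; by
# construction the three components sum back to the original series.
print(np.allclose(fast + mid + slow, x))
```

Fitting a separate multiple linear regression per band, then summing the three band forecasts, reproduces the structure of the paper's scheme: slow components get slowly varying predictors, fast components get interannual ones.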

  19. Improvement of LCM model and determination of model parameters at watershed scale for flood events in Hongde Basin of China

    Directory of Open Access Journals (Sweden)

    Jun Li

    2017-01-01

Considering the fact that the original two-parameter LCM model can only be used to investigate rainfall losses during the runoff period because the initial abstraction is not included, the LCM model was redefined as a three-parameter model, including the initial abstraction coefficient λ, the initial abstraction Ia, and the rainfall loss coefficient R. The improved LCM model is superior to the original two-parameter model, which only includes r and R, where r is the initial rainfall loss index and can be calculated from λ using the Soil Conservation Service curve number (SCS-CN) method, with r = 1/(1+λ). The trial method was used to determine the parameter values of the improved LCM model at the watershed scale for 15 flood events in the Hongde Basin in China. The results show that larger r values are associated with smaller R values, and that the parameter R ranges widely, from 0.5 to 2.0. In order to improve the practicability of the LCM model, r = 0.833 (corresponding to λ = 0.2) is reasonable for simplifying the calculation. When the LCM model is applied to arid and semi-arid regions, rainfall that does not yield runoff should be deducted from the total rainfall for a more accurate estimation of rainfall-runoff.
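The r-λ relation quoted above is one line of arithmetic; a quick check of the recommended simplification:

```python
# r = 1 / (1 + lambda), the SCS-CN relation quoted in the abstract.
def loss_index(lam):
    return 1.0 / (1.0 + lam)

print(loss_index(0.2))  # 0.8333..., matching the recommended r = 0.833
```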

  20. Influence of Slope-Scale Snowmelt on Catchment Response Simulated With the Alpine3D Model

    Science.gov (United States)

    Brauchli, Tristan; Trujillo, Ernesto; Huwald, Hendrik; Lehning, Michael

    2017-12-01

Snow and hydrological modeling in alpine environments remains challenging because of the complexity of the processes affecting the mass and energy balance. This study examines the influence of snowmelt on the hydrological response of a high-alpine catchment of 43.2 km² in the Swiss Alps during the water year 2014-2015. Based on recent advances in Alpine3D, we examine how snow distributions and liquid water transport within the snowpack influence runoff dynamics. By combining these results with multiscale observations (snow lysimeter, distributed snow depths, and streamflow), we demonstrate the added value of a more realistic snow distribution at the onset of melt season. At the site scale, snowpack runoff is well simulated when the mass balance errors are corrected (R² = 0.95 versus R² = 0.61). At the subbasin scale, a more heterogeneous snowpack leads to a more rapid runoff pulse originating in the shallower areas while an extended melting period (by a month) is caused by snowmelt from deeper areas. This is a marked improvement over results obtained using a traditional precipitation interpolation method. Hydrological response is also improved by the more realistic snowpack (NSE of 0.85 versus 0.74), even though calibration processes smoothen out the differences. The added value of a more complex liquid water transport scheme is obvious at the site scale but decreases at larger scales. Our results highlight not only the importance but also the difficulty of getting a realistic snowpack distribution even in a well-instrumented area and present a model validation from multiscale experimental data sets.

  1. Larger aftershocks happen farther away: nonseparability of magnitude and spatial distributions of aftershocks

    Science.gov (United States)

    Van Der Elst, Nicholas; Shaw, Bruce E.

    2015-01-01

    Aftershocks may be driven by stress concentrations left by the main shock rupture or by elastic stress transfer to adjacent fault sections or strands. Aftershocks that occur within the initial rupture may be limited in size, because the scale of the stress concentrations should be smaller than the primary rupture itself. On the other hand, aftershocks that occur on adjacent fault segments outside the primary rupture may have no such size limitation. Here we use high-precision double-difference relocated earthquake catalogs to demonstrate that larger aftershocks occur farther away than smaller aftershocks, when measured from the centroid of early aftershock activity—a proxy for the initial rupture. Aftershocks as large as or larger than the initiating event nucleate almost exclusively in the outer regions of the aftershock zone. This observation is interpreted as a signature of elastic rebound in the earthquake catalog and can be used to improve forecasting of large aftershocks.

  2. Biology meets Physics: Reductionism and Multi-scale Modeling of Morphogenesis

    DEFF Research Database (Denmark)

    Green, Sara; Batterman, Robert

    2017-01-01

A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors, a challenge that also arises in multi-scale modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent.

  3. Toward resolving model-measurement discrepancies of radon entry into houses

    International Nuclear Information System (INIS)

    Garbesi, K.

    1993-01-01

My dissertation research investigated the source of the model-measurement discrepancy via carefully controlled field experiments conducted at an experimental basement located in natural soil in Ben Lomond, California. Early experiments at the structure (Chapter II) confirmed the existence and magnitude of the model-measurement discrepancy, ensuring that it was not merely an artifact of inherently complex and poorly understood field sites. The measured soil-gas entry rate during structure depressurization was found to be an order of magnitude larger than predicted by a current three-dimensional numerical model of radon transport. The exact magnitude of the discrepancy depends on whether the arithmetic or geometric mean of the small-scale measurements of permeability is used to estimate the effective permeability of the soil. This factor is a critical empirical input to the model and was determined for the Ben Lomond site in the typical fashion, using single-probe static depressurization measurements at multiple locations. The remainder of the dissertation research tests a hypothesis to explain the observed discrepancy: that soil permeability assessed using relatively small-scale probe measurements (0.1-0.5 m) does not reflect the bulk soil permeability for flows likely to occur at larger scales of several meters or more in real houses and in the test structure. The idea is that, as flows occur over larger scales, larger scales of soil heterogeneity are encountered that facilitate larger flux rates, resulting in a scale dependence of the effective soil permeability. In Chapter III, I describe the development of a dual-probe dynamic pressure technique to measure soil permeability to air (and the anisotropy of permeability) at various length scales. Preliminary field tests of the apparatus indicated that soil permeability was indeed scale dependent.

  4. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using non-linear scaling rather than a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  5. Scaling laws and technology development strategies for biorefineries and bioenergy plants.

    Science.gov (United States)

    Jack, Michael W

    2009-12-01

    The economies of scale of larger biorefineries or bioenergy plants compete with the diseconomies of scale of transporting geographically distributed biomass to a central location. This results in an optimum plant size that depends on the scaling parameters of the two contributions. This is a fundamental aspect of biorefineries and bioenergy plants and has important consequences for technology development as "bigger is better" is not necessarily true. In this paper we explore the consequences of these scaling effects via a simplified model of biomass transportation and plant costs. Analysis of this model suggests that there is a need for much more sophisticated technology development strategies to exploit the consequences of these scaling effects. We suggest three potential strategies in terms of the scaling parameters of the system.
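The optimum-plant-size trade-off described above can be reproduced with a toy cost model: plant cost per unit throughput falls with capacity (economies of scale, exponent below 1) while transport cost per unit rises (collection radius grows with capacity). All parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative (hypothetical) parameters: plant capital cost ~ kp*P^alpha
# with alpha < 1 (economies of scale); biomass transport cost ~ kt*P^beta
# with beta > 1, since the collection radius grows roughly as sqrt(P).
alpha, beta = 0.6, 1.5
kp, kt = 100.0, 1.0

def unit_cost(P):
    """Total cost per unit of throughput at plant capacity P."""
    return kp * P**(alpha - 1) + kt * P**(beta - 1)

# Analytic optimum from d(unit_cost)/dP = 0.
P_opt = (kp * (1 - alpha) / (kt * (beta - 1))) ** (1 / (beta - alpha))

# Grid check of the analytic result.
P = np.linspace(1, 500, 100000)
P_grid = P[np.argmin(unit_cost(P))]
print(P_opt, P_grid)  # the two estimates agree
```

The optimum shifts with the scaling exponents, which is exactly why the paper argues "bigger is better" fails: improving conversion technology changes alpha, while densifying biomass (e.g. pelletising) changes the effective beta.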

  6. Properties of Brownian Image Models in Scale-Space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup

    2003-01-01

In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f² power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian, and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix …

  7. Optimization for steady-state and hybrid operations of ITER by using scaling models of divertor heat load

    International Nuclear Information System (INIS)

    Murakami, Yoshiki; Itami, Kiyoshi; Sugihara, Masayoshi; Fujieda, Hirobumi.

    1992-09-01

Steady-state and hybrid mode operations of ITER are investigated by 0-D power balance calculations assuming no radiation and charge-exchange cooling in the divertor region. Operation points are optimized with respect to the divertor heat load, which must be reduced to the level of the ignition mode (∼5 MW/m²). Dependence of the divertor heat load on the choice of model, i.e., the constant-χ model, the Bohm-type-χ model and the JT-60U empirical scaling model, is also discussed. The divertor heat load increases linearly with the fusion power (P_FUS) in all models. The highest achievable fusion power for a given allowable divertor heat load differs considerably between models. The heat load evaluated by the constant-χ model is, for example, about 1.8 times larger than that by the Bohm-type-χ model at P_FUS = 750 MW. The effects of reducing helium accumulation and of improving the confinement capability and the current-drive efficiency are also investigated with the aim of lowering the divertor heat load. It is found that the NBI power should be larger than about 60 MW to obtain a burn time longer than 2000 s. The optimized operation point, where the minimum divertor heat load is achieved, does not depend on the model and is the point with minimum P_FUS and maximum P_NBI. When P_FUS = 690 MW and P_NBI = 110 MW, the divertor heat load can be reduced to the level of the ignition mode without impurity seeding if H = 2.2 is achieved. Controllability of the current profile is also discussed. (J.P.N.)

  8. Logarithmic corrections to scaling in the XY2-model

    International Nuclear Information System (INIS)

    Kenna, R.; Irving, A.C.

    1995-01-01

    We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form for the density of these zeroes. Assuming that finite-size scaling holds, we show that there have to exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae and we identify them numerically. The method presented here can be used to check the compatibility of scaling behaviour of odd and even thermodynamic functions in other models too. ((orig.))

  9. Multi-scale Modeling of Plasticity in Tantalum.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)

    2015-12-01

In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, and compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model against experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct …

  10. Probabilistic, meso-scale flood loss modelling

    Science.gov (United States)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments, let alone for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches such as stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging-decision-tree-based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models, and on the other hand against official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
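The bagging-decision-tree idea behind such probabilistic loss models — each bootstrap-trained tree gives one prediction, and the spread across trees is the loss distribution — can be sketched with scikit-learn on synthetic data. The data, variable names and parameters below are illustrative; this is not BT-FLEMO itself.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)

# Synthetic (hypothetical) training data: flood loss as a function of
# water depth [m] and building value [k EUR], plus multiplicative noise.
X = np.column_stack([rng.uniform(0, 3, 500), rng.uniform(50, 500, 500)])
y = 0.1 * X[:, 0] * X[:, 1] * (1 + 0.2 * rng.normal(size=500))

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                         random_state=0).fit(X, y)

# One prediction per bootstrap tree -> an empirical loss distribution
# rather than a single point estimate.
x_new = np.array([[2.0, 300.0]])
per_tree = np.array([t.predict(x_new)[0] for t in model.estimators_])
print(per_tree.mean(), np.percentile(per_tree, [5, 95]))
```

Reporting the percentile interval alongside the mean is exactly the "inherent quantification of uncertainty" the abstract highlights over deterministic stage-damage functions.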

  11. Two-dimensional divertor modeling and scaling laws

    International Nuclear Information System (INIS)

    Catto, P.J.; Connor, J.W.; Knoll, D.A.

    1996-01-01

    Two-dimensional numerical models of divertors contain large numbers of dimensionless parameters that must be varied to investigate all operating regimes of interest. To simplify the task and gain insight into divertor operation, we employ similarity techniques to investigate whether model systems of equations plus boundary conditions in the steady state admit scaling transformations that lead to useful divertor similarity scaling laws. A short mean free path neutral-plasma model of the divertor region below the x-point is adopted in which all perpendicular transport is due to the neutrals. We illustrate how the results can be used to benchmark large computer simulations by employing a modified version of UEDGE which contains a neutral fluid model. (orig.)

  12. 3-3-1 models at electroweak scale

    International Nuclear Information System (INIS)

    Dias, Alex G.; Montero, J.C.; Pleitez, V.

    2006-01-01

We show that in 3-3-1 models there exists a natural relation among the SU(3)_L coupling constant g, the electroweak mixing angle θ_W, the mass of the W, and one of the vacuum expectation values, which implies that these models can be realized at low energy scales and, in particular, even at the electroweak scale. Thus, if these symmetries are realized in Nature, new physics may really be just around the corner.

  13. Validating a continental-scale groundwater diffuse pollution model using regional datasets.

    Science.gov (United States)

    Ouedraogo, Issoufou; Defourny, Pierre; Vanclooster, Marnik

    2017-12-11

In this study, we assess the validity of an African-scale groundwater pollution model for nitrates. In a previous study, we identified a statistical continental-scale groundwater pollution model for nitrate. The model was identified using a pan-African meta-analysis of available nitrate groundwater pollution studies. The model was implemented in both Random Forest (RF) and multiple regression formats. For both approaches, we collected as predictors a comprehensive GIS database of 13 spatial attributes related to land use, soil type, hydrogeology, topography, climatology, region typology, nitrogen fertiliser application rate, and population density. In this paper, we validate the continental-scale model of groundwater contamination by using a nitrate measurement dataset from three African countries. We discuss the issues of data availability, data quality and scale as challenges in validation. Notwithstanding that the modelling procedure exhibited very good success using a continental-scale dataset (e.g. R² = 0.97 in the RF format using a cross-validation approach), the continental-scale model could not be used without recalibration to predict nitrate pollution at the country scale using regional data. In addition, when recalibrating the model using country-scale datasets, the order of model exploratory factors changes. This suggests that the structure and the parameters of a statistical spatially distributed groundwater degradation model for the African continent are strongly scale dependent.

  14. Hydrodynamic Modelling of Municipal Solid Waste Residues in a Pilot Scale Fluidized Bed Reactor

    Directory of Open Access Journals (Sweden)

    João Cardoso

    2017-11-01

The present study investigates the hydrodynamics and heat transfer behavior of municipal solid waste (MSW) gasification in a pilot scale bubbling fluidized bed reactor. A multiphase 2-D numerical model following an Eulerian-Eulerian approach within the FLUENT framework was implemented. User-defined functions (UDFs) were coupled to improve the hydrodynamics and heat transfer phenomena and to minimize deviations between the experimental and numerical results. A grid independence study was accomplished through comparison of the bed volume fraction profiles and by weighing grid accuracy against computational cost. The standard deviation concept was used to determine the mixing quality indexes. Simulated results showed that the UDF improvements increased the accuracy of the mathematical model. A smaller size ratio of the MSW-dolomite mixture produced more uniform mixing, while larger ratios enhanced segregation. Increased superficial gas velocity also promoted mixing of the solid particles. Heat transfer within the fluidized bed showed strong dependence on the MSW solid particle sizes, with smaller particles exhibiting a more effective process.

  15. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
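A one-dimensional sketch of the averaging at the heart of this homogenization: for ecological diffusion u_t = (mu(x) u)_xx, the large-scale coefficient is the harmonic mean of the local motility mu, which is dominated by the slow patches where animals accumulate. The landscape values below are hypothetical.

```python
import numpy as np

# Hypothetical patchy landscape: motility mu(x) on a fine 10 m grid,
# alternating habitat (fast movement) and matrix (slow movement).
mu = np.where(np.arange(1000) % 2 == 0, 100.0, 1.0)  # m^2/day

# For ecological diffusion u_t = (mu u)_xx, homogenization yields a
# large-scale coefficient equal to the *harmonic* mean of mu, not the
# arithmetic mean a naive spatial average would give.
mu_harmonic = len(mu) / np.sum(1.0 / mu)
mu_arithmetic = mu.mean()
print(mu_harmonic, mu_arithmetic)  # ~1.98 vs 50.5
```

The two averages differ by a factor of ~25 here, which illustrates why resolving 10-100 m habitat variability matters for 10-100 km predictions: a model using the arithmetic mean would overestimate large-scale spread dramatically.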

  16. Larger foraminifera of the Devil's Den and Blue Hole sinkholes, Florida

    Science.gov (United States)

    Cotton, Laura J.; Eder, Wolfgang; Floyd, James

    2018-03-01

Shallow-water carbonate deposits are well known from the Eocene of the US Gulf Coast and Caribbean. These deposits frequently contain abundant larger benthic foraminifera (LBF). However, whilst integrated stratigraphic studies have helped to refine the timing of LBF overturning events within the Tethys and Indo-Pacific regions with respect to global bio- and chemo-stratigraphic records, little recent work has been carried out in the Americas. The American LBF assemblages are distinctly different from those of Europe and the Indo-Pacific. It is therefore essential that the American bio-province is included in studies of LBF evolution, biodiversity and climate events to understand these processes on a global scale. Here we present the LBF ranges from two previously unpublished sections spanning 35 and 29 m of the upper Eocene Ocala limestone, as the early stages of a larger project addressing the taxonomy and biostratigraphy of the LBF of Florida. The study indicates that the lower member of the Ocala limestone may be Bartonian rather than Priabonian in age, with implications for the biostratigraphy of the region. In addition, the study highlights the need for multiple sites to assess the LBF assemblages and fully constrain ranges across Florida and the US Gulf, and suggests potential LBF events for future integrated stratigraphic study.

  17. Analysis, scale modeling, and full-scale tests of low-level nuclear-waste-drum response to accident environments

    International Nuclear Information System (INIS)

    Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.

    1983-01-01

    This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables
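The correlation between scale-model and full-scale drum results in record 17 rests on standard replica-model scaling: for geometrically similar structures of the same material, deformations scale with the length ratio, forces with its square, and absorbed energies with its cube. The conversion below is a generic sketch under those assumptions (rate and gravity effects neglected); the example numbers are hypothetical, not the report's data.

```python
# Replica-model scaling for geometrically similar structures of the same
# material: deformations ~ lambda, forces ~ lambda^2, energies ~ lambda^3,
# where lambda is the prototype-to-model length ratio.

def to_full_scale(scale_factor, force_model, energy_model):
    """Convert crush force [kN] and crush energy [kJ] measured on a
    1:N scale model (scale_factor = 1/N) to full-scale values."""
    lam = 1.0 / scale_factor                # model -> prototype length ratio
    return force_model * lam**2, energy_model * lam**3

# e.g. a hypothetical quarter-scale drum crushed at 50 kN absorbing 2 kJ
force_full, energy_full = to_full_scale(0.25, 50.0, 2.0)
print(force_full, energy_full)  # 800.0 kN, 128.0 kJ
```

The cubic growth of absorbed energy is why eighth- and quarter-scale tests with food-product cans are an economical proxy: a small crush test stands in for a full-scale event carrying hundreds of times the energy.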

  18. Scale changes in air quality modelling and assessment of associated uncertainties

    International Nuclear Information System (INIS)

    Korsakissok, Irene

    2009-01-01

    After an introduction of issues related to a scale change in the field of air quality (existing scales for emissions, transport, turbulence and loss processes, hierarchy of data and models, methods of scale change), the author first presents Gaussian models which have been implemented within the Polyphemus modelling platform. These models are assessed by comparison with experimental observations and with other commonly used Gaussian models. The second part reports the coupling of the puff-based Gaussian model with the Eulerian Polair3D model for the sub-mesh processing of point sources. This coupling is assessed at the continental scale for a passive tracer, and at the regional scale for photochemistry. Different statistical methods are assessed

  19. Scaling of musculoskeletal models from static and dynamic trials

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    2015-01-01

Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale 'generic' musculoskeletal models to match segment lengths and joint … We applied three scaling methods to an inverse dynamics-based musculoskeletal model and compared predicted knee joint contact forces to those measured with an instrumented prosthesis during gait. Additionally, a Monte Carlo study was used to investigate the sensitivity of the knee joint contact force to random …

  20. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication of the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish topological attributes that have been undefined or hidden from …

  1. Holographic models with anisotropic scaling

    Science.gov (United States)

    Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.

    2013-12-01

    We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.

  2. Multi-scale mantle structure underneath the Americas from a new tomographic model of seismic shear velocity

    Science.gov (United States)

    Porritt, R. W.; Becker, T. W.; Auer, L.; Boschi, L.

    2017-12-01

We present a whole-mantle, variable-resolution, shear-wave tomography model based on newly available and existing seismological datasets, including regional body-wave delay times and multi-mode Rayleigh and Love wave phase delays. Our body wave dataset includes 160,000 S wave delays used in the DNA13 regional tomographic model focused on the western and central US, 86,000 S and SKS delays measured on stations in western South America (Porritt et al., in prep), and 3,900,000 S+ phases measured by correlation between data observed at stations in the IRIS global networks (IU, II) and stations in the contiguous US, against synthetic data generated with IRIS Syngine. The surface wave dataset includes fundamental mode and overtone Rayleigh wave data from Schaeffer and Lebedev (2014), ambient-noise-derived Rayleigh wave and Love wave measurements from Ekstrom (2013), newly computed fundamental mode ambient noise Rayleigh wave phase delays for the contiguous US up to July 2017, and other, previously published, measurements. These datasets, along with the data-adaptive parameterization utilized for the SAVANI model (Auer et al., 2014), should allow significantly finer-scale imaging than previous global models, rivaling that of regional-scale approaches, under the USArray footprint in the contiguous US, while seamlessly integrating into a global model. We parameterize the model for both vertically (vSV) and horizontally (vSH) polarized shear velocities by accounting for the different sensitivities of the various phases and wave types. The resulting radially anisotropic model should allow for a range of new geodynamic analyses, including estimates of mantle-flow-induced topography or seismic anisotropy, without generating artifacts due to edge effects or requiring assumptions about the structure of the region outside the well-resolved model space. Our model shows a number of features, including indications of the effects of edge-driven convection in the Cordillera and along …

  3. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Science.gov (United States)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
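The upscaling of hydraulic conductivity discussed in record 3 is bracketed by classical bounds: the effective block-scale conductivity lies between the harmonic mean of the sample-scale values (flow across layers) and their arithmetic mean (flow along layers), with the geometric mean being Matheron's exact result for 2-D isotropic lognormal media. A minimal numerical sketch with hypothetical sample data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample-scale conductivities within one model block [m/day],
# lognormally distributed as is typical for aquifer sediments.
K = rng.lognormal(mean=0.0, sigma=1.5, size=10000)

K_arith = K.mean()                    # upper bound: flow parallel to layers
K_harm = len(K) / np.sum(1.0 / K)     # lower bound: flow across layers
K_geom = np.exp(np.log(K).mean())     # Matheron's result for 2-D isotropic
                                      # lognormal media

print(K_harm, K_geom, K_arith)        # harmonic <= geometric <= arithmetic
```

The spread between the bounds widens with the variance of log K, which is one reason the choice of upscaling rule matters more in strongly heterogeneous aquifers.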

  4. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Directory of Open Access Journals (Sweden)

    Nawalany Marek

    2015-09-01

An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.

  5. The scaling of maximum and basal metabolic rates of mammals and birds

    Science.gov (United States)

    Barbosa, Lauro A.; Garcia, Guilherme J. M.; da Silva, Jafferson K. L.

    2006-01-01

    Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here, we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts that maximum metabolic rate scales as M, maximum heart rate as M, and muscular capillary density as M, in agreement with data.
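Scaling exponents like the 3/4-law discussed in record 5 are conventionally estimated as the slope of a log-log regression of metabolic rate against body mass. A minimal sketch on synthetic data (the coefficient, noise level and mass range are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: basal metabolic rate B = c * M^(3/4) with lognormal
# scatter, for body masses spanning roughly shrew to elephant.
M = np.logspace(-2, 3, 200)                      # body mass [kg]
B = 4.1 * M**0.75 * rng.lognormal(0, 0.1, 200)   # metabolic rate [W]

# The scaling exponent is the slope of log B vs log M.
slope, intercept = np.polyfit(np.log(M), np.log(B), 1)
print(round(slope, 2))  # close to 0.75
```

Distinguishing basal from maximum metabolic scaling experimentally comes down to whether such fitted slopes differ significantly, which is the observation the abstract says any candidate theory must reproduce.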

  6. Spatial ecology across scales.

    Science.gov (United States)

    Hastings, Alan; Petrovskii, Sergei; Morozov, Andrew

    2011-04-23

The international conference 'Models in population dynamics and ecology 2010: animal movement, dispersal and spatial ecology' took place at the University of Leicester, UK, on 1-3 September 2010, focusing on mathematical approaches to spatial population dynamics and emphasizing cross-scale issues. Exciting new developments in scaling up from individual-level movement to descriptions of this movement at the macroscopic level highlighted the importance of mechanistic approaches, with different descriptions at the microscopic level leading to different ecological outcomes. At higher levels of organization, different macroscopic descriptions of movement also led to different properties at the ecosystem and larger scales. New developments, from Lévy flight descriptions to the incorporation of new methods from physics and elsewhere, are revitalizing research in spatial ecology, which will both increase understanding of fundamental ecological processes and lead to tools for better management.

  7. Allometric Scaling and Resource Limitations Model of Total Aboveground Biomass in Forest Stands: Site-scale Test of Model

    Science.gov (United States)

    CHOI, S.; Shi, Y.; Ni, X.; Simard, M.; Myneni, R. B.

    2013-12-01

Sparseness in in-situ observations has precluded the spatially explicit and accurate mapping of forest biomass. The need for large-scale maps has motivated various approaches linking forest biomass to geospatial predictors such as climate, forest type, soil property, and topography. Despite improved modeling techniques (e.g., machine learning and spatial statistics), a common limitation is that the biophysical mechanisms governing tree growth are neglected in these black-box type models. The absence of a priori knowledge may lead to false interpretation of modeled results or unexplainable shifts in outputs due to inconsistent training samples or study sites. Here, we present a gray-box approach combining known biophysical processes and geospatial predictors through parametric optimizations (inversion of reference measures). Total aboveground biomass in forest stands is estimated by incorporating the Forest Inventory and Analysis (FIA) and Parameter-elevation Regressions on Independent Slopes Model (PRISM). Two main premises of this research are: (a) the Allometric Scaling and Resource Limitations (ASRL) theory can provide a relationship between tree geometry and local resource availability constrained by environmental conditions; and (b) the zeroth order theory (size-frequency distribution) can expand individual tree allometry into total aboveground biomass at the forest stand level. In addition to the FIA estimates, two reference maps from the National Biomass and Carbon Dataset (NBCD) and U.S. Forest Service (USFS) were produced to evaluate the model. This research focuses on a site-scale test of the biomass model to explore the robustness of predictors, and to potentially improve models using additional geospatial predictors such as climatic variables, vegetation indices, soil properties, and lidar-/radar-derived altimetry products (or existing forest canopy height maps). In the results, the optimized ASRL estimates satisfactorily …

  8. Quantum critical scaling of fidelity in BCS-like model

    International Nuclear Information System (INIS)

    Adamski, Mariusz; Jedrzejewski, Janusz; Krokhmalskii, Taras

    2013-01-01

    We study scaling of the ground-state fidelity in neighborhoods of quantum critical points in a model of interacting spinful fermions—a BCS-like model. Due to the exact diagonalizability of the model, in one and higher dimensions, scaling of the ground-state fidelity can be analyzed numerically with great accuracy, not only for small systems but also for macroscopic ones, together with the crossover region between them. Additionally, in the one-dimensional case we have been able to derive a number of analytical formulas for fidelity and show that they accurately fit our numerical results; these results are reported in the paper. Besides regular critical points and their neighborhoods, where well-known scaling laws are obeyed, there is the multicritical point and critical points in its proximity where anomalous scaling behavior is found. We also consider scaling of fidelity in neighborhoods of critical points where fidelity oscillates strongly as the system size or the chemical potential is varied. Our results for a one-dimensional version of a BCS-like model are compared with those obtained recently by Rams and Damski in similar studies of a quantum spin chain—an anisotropic XY model in a transverse magnetic field. (paper)

  9. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are presented from tests of the materials' resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  10. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    Science.gov (United States)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the 543 km2 Wivenhoe catchment and a detailed 13 km2 area within it) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
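Waterway delineation from a DEM typically starts by assigning each cell a flow direction toward its steepest downslope neighbor (the classic D8 scheme). The record does not name its algorithm, so the sketch below is a generic, minimal D8 pass, not the study's actual toolchain.

```python
import math

def d8_flow_directions(dem):
    """Minimal D8 flow-direction sketch: for each interior cell of a DEM
    (2D list of elevations), return the (di, dj) offset of the steepest
    downslope neighbor, or None for pits and flats. Diagonal drops are
    divided by sqrt(2) so slopes, not raw elevation differences, compare."""
    rows, cols = len(dem), len(dem[0])
    dirs = {}
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            best, best_slope = None, 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    dist = math.sqrt(2.0) if di and dj else 1.0
                    slope = (dem[i][j] - dem[i + di][j + dj]) / dist
                    if slope > best_slope:
                        best, best_slope = (di, dj), slope
            dirs[(i, j)] = best
    return dirs
```

On a plane tilted along the row axis, every interior cell drains straight toward the lower edge; flat cells come back as `None`, which is where pit-filling or manual delineation (as in the abstract's flat areas) would take over.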

  11. Modelling solute dispersion in periodic heterogeneous porous media: Model benchmarking against intermediate scale experiments

    Science.gov (United States)

    Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham

    2018-06-01

    This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
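The Fickian advection-dispersion benchmark mentioned in this record has a classical closed-form breakthrough curve for continuous injection into a semi-infinite 1D column (the Ogata-Banks solution). The sketch below implements that standard solution; the parameter values in the usage note are illustrative, not the paper's fitted ones.

```python
import math

def breakthrough(x, t, v, D):
    """Relative concentration C/C0 at distance x and time t for 1D
    advection-dispersion with pore velocity v and dispersion coefficient D
    (Ogata-Banks solution, continuous injection at x = 0)."""
    s = 2.0 * math.sqrt(D * t)
    return 0.5 * (math.erfc((x - v * t) / s)
                  + math.exp(v * x / D) * math.erfc((x + v * t) / s))
```

For example, with `x=1.0`, `v=0.1`, `D=0.01`, the curve rises from near zero at early time through roughly one half around the mean arrival time `t = x/v`, toward one at late time. Note the infinite-front-speed artifact the abstract criticizes: C/C0 is strictly positive for any `t > 0`, however small.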

  12. Degree and connectivity of the Internet's scale-free topology

    International Nuclear Information System (INIS)

    Zhang Lian-Ming; Wu Xiang-Sheng; Deng Xiao-Heng; Yu Jian-Ping

    2011-01-01

    This paper theoretically and empirically studies the degree and connectivity of the Internet's scale-free topology at an autonomous system (AS) level. The basic features of scale-free networks influence the normalization constant of the degree distribution p(k). It develops a new mathematical model for describing the power-law relationships of Internet topology. From this model we theoretically obtain formulas to calculate the average degree, the ratios of the k min -degree (minimum degree) nodes and the k max -degree (maximum degree) nodes, and the fraction of the degrees (or links) in the hands of the richer (top best-connected) nodes. It finds that the average degree is larger for a smaller power-law exponent λ and a larger minimum or maximum degree. The ratio of the k min -degree nodes is larger for larger λ and smaller k min or k max . The ratio of the k max -degree ones is larger for smaller λ and k max or larger k min . The richer nodes hold most of the total degrees of Internet AS-level topology. In addition, it is revealed that the increased rate of the average degree or the ratio of the k min -degree nodes has power-law decay with the increase of k min . The ratio of the k max -degree nodes has a power-law decay with the increase of k max , and the fraction of the degrees in the hands of the richer 27% nodes is about 73% (the ‘73/27 rule’). Finally, empirical calculations of the average degree, ratios, and fraction are made, based on the empirical data extracted from the Border Gateway Protocol, using this method and other methods; the comparison shows that this method is rigorous and effective for Internet AS-level topology. (interdisciplinary physics and related areas of science and technology)
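The quantities this record derives (normalization constant, average degree, and the share of links held by the best-connected nodes) can be computed directly for a discrete truncated power law. The exponent and degree cutoffs below are illustrative, not the paper's fitted AS-level values.

```python
def powerlaw_degree_stats(lam=2.2, kmin=1, kmax=1000):
    """Discrete power-law degree distribution p(k) = C * k**(-lam) on
    kmin..kmax. Returns (normalization constant C, average degree, and the
    fraction of all link ends held by the best-connected 27% of nodes)."""
    ks = range(kmin, kmax + 1)
    C = 1.0 / sum(k**(-lam) for k in ks)
    p = {k: C * k**(-lam) for k in ks}
    avg = sum(k * p[k] for k in ks)
    # walk down from the highest degrees until 27% of nodes are covered
    share, nodes = 0.0, 0.0
    for k in sorted(ks, reverse=True):
        take = min(p[k], 0.27 - nodes)
        share += k * take / avg
        nodes += take
        if nodes >= 0.27:
            break
    return C, avg, share
```

With these illustrative parameters the top 27% of nodes end up holding on the order of 70% of all link ends, in line with the abstract's '73/27 rule', and the average degree grows as λ shrinks, matching the stated trend.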

  13. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  14. Impact of small-scale structures on estuarine circulation

    Science.gov (United States)

    Liu, Zhuo; Zhang, Yinglong J.; Wang, Harry V.; Huang, Hai; Wang, Zhengui; Ye, Fei; Sisson, Mac

    2018-05-01

    We present a novel and challenging application of a 3D estuary-shelf model to the study of the collective impact of many small-scale structures (bridge pilings of 1 m × 2 m in size) on larger-scale circulation in a tributary (James River) of Chesapeake Bay. We first demonstrate that the model is capable of effectively transitioning grid resolution from 400 m down to 1 m near the pilings without introducing undue numerical artifacts. We then show that despite their small sizes and collectively small area as compared to the total channel cross-sectional area, the pilings exert a noticeable impact on the large-scale circulation, and also create a rich structure of vortices and wakes around the pilings. As a result, the water quality and local sedimentation patterns near the bridge piling area are likely to be affected as well. However, when evaluated over the entire waterbody of the project area, the near-field effects are weighted by their areal percentage, which is small compared with that of the larger unaffected area; the impact on the lower James River as a whole is therefore relatively insignificant. The study highlights the importance of the use of high resolution in assessing the near-field impact of structures.

  15. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King' s College London, WC2R 2LS London (United Kingdom); Garcia, Marcos A.G.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V., E-mail: john.ellis@cern.ch, E-mail: garciagarcia@physics.umn.edu, E-mail: dimitri@physics.tamu.edu, E-mail: olive@physics.umn.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, 77843 Texas (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n{sub s} and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m{sub 0} = B{sub 0} = A{sub 0} = 0, of the CMSSM type with universal A{sub 0} and m{sub 0} ≠ 0 at a high scale, and of the mSUGRA type with A{sub 0} = B{sub 0} + m{sub 0} boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m{sub 1/2} ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.

  16. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics,King’s College London,WC2R 2LS London (United Kingdom); Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Garcia, Marcos A.G. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics andAstronomy, Texas A& M University,College Station, 77843 Texas (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), Mitchell Campus, Woodlands, 77381 Texas (United States); Academy of Athens, Division of Natural Sciences, 28 Panepistimiou Avenue, 10679 Athens (Greece); Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n{sub s} and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m{sub 0}=B{sub 0}=A{sub 0}=0, of the CMSSM type with universal A{sub 0} and m{sub 0}≠0 at a high scale, and of the mSUGRA type with A{sub 0}=B{sub 0}+m{sub 0} boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m{sub 1/2}≠0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.

  17. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    Science.gov (United States)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2018-01-01

    Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, model numerical instabilities, and computation time requirements. The main findings of this paper enable the replacement of traditional methods of model calibration with innovative methods of model resolution alteration based on spatial data variability and the scaling of flows in urban hydrology.
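The fractal analysis step described in this record amounts to counting occupied cells at a series of observation scales and fitting a power law; a scale-dependent slope signals the scale dependence the authors report. Below is a minimal box-counting sketch on a binary grid, not the paper's actual implementation.

```python
import math

def box_count_dimension(grid, box_sizes):
    """Estimate the fractal (box-counting) dimension of a square binary
    grid: count boxes of each size containing at least one occupied cell,
    then fit log N(s) = -D * log s + c by least squares and return D."""
    n = len(grid)
    logs, logN = [], []
    for s in box_sizes:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if any(grid[a][b] for a in range(i, min(i + s, n))
                                  for b in range(j, min(j + s, n))):
                    count += 1
        logs.append(math.log(s))
        logN.append(math.log(count))
    m = len(logs)
    sx, sy = sum(logs), sum(logN)
    sxx = sum(x * x for x in logs)
    sxy = sum(x * y for x, y in zip(logs, logN))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    return -slope  # the box-counting dimension D
```

A fully occupied grid recovers D = 2 (a space-filling pattern) and a single occupied cell gives D = 0; real urban land-cover rasters fall in between, and the abstract's point is that D itself can differ across scale ranges.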

  18. Scaling considerations for modeling the in situ vitrification process

    International Nuclear Information System (INIS)

    Langerman, M.A.; MacKinnon, R.J.

    1990-09-01

    Scaling relationships for modeling the in situ vitrification waste remediation process are documented based upon similarity considerations derived from fundamental principles. Requirements for maintaining temperature and electric potential field similarity between the model and the prototype are determined as well as requirements for maintaining similarity in off-gas generation rates. A scaling rationale for designing reduced-scale experiments is presented and the results are assessed numerically. 9 refs., 6 figs

  19. Nucleon electric dipole moments in high-scale supersymmetric models

    International Nuclear Information System (INIS)

    Hisano, Junji; Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi

    2015-01-01

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of anomaly and gauge mediation, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in the high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in the generic high-scale SUSY models, the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron EDM in order to discriminate among the high-scale SUSY models.

  20. Nucleon electric dipole moments in high-scale supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),Nagoya University,Nagoya 464-8602 (Japan); Department of Physics, Nagoya University,Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University,Nagoya 464-8602 (Japan)

    2015-11-12

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of anomaly and gauge mediation, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in the high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in the generic high-scale SUSY models, the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron EDM in order to discriminate among the high-scale SUSY models.

  1. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale … of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.

  2. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    Science.gov (United States)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time scales spanning from the geological to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations spanning millions of years, making it possible to follow the deformation process in detail through the entire seismic cycle and across multiple cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by viscoelastic relaxation of the mantle wedge, whose viscosity varies strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range.
We will also present results of the modeling of deformation of the
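The adaptive time-stepping scheme described in the abstract (collapse the step the moment an instability is detected, then grow it back through postseismic relaxation) can be sketched as below. The 40 s floor and 5 yr cap come from the abstract; the geometric growth factor and the velocity threshold are illustrative assumptions.

```python
def next_time_step(dt, unstable, dt_min=40.0,
                   dt_max=5 * 365.25 * 86400.0, growth=1.5):
    """Adaptive step control: drop to dt_min (40 s) the moment an
    instability (earthquake) is detected, otherwise grow the step
    geometrically as postseismic slip velocities decay, capped at
    dt_max (~5 yr, in seconds)."""
    if unstable:
        return dt_min
    return min(dt * growth, dt_max)

def detect_instability(slip_velocity, v_threshold=1e-3):
    """Flag an earthquake when fault slip velocity exceeds a threshold
    (m/s; the value is illustrative, rate-and-state models use their own
    criterion)."""
    return slip_velocity > v_threshold
```

In a driver loop, `detect_instability` feeds `next_time_step`, so a coseismic minute is resolved with ~40 s steps while the interseismic period advances in multi-year strides.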

  3. Scaling, soil moisture and evapotranspiration in runoff models

    Science.gov (United States)

    Wood, Eric F.

    1993-01-01

    The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many of the climatology research experiments. The acquisition of high resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions under which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a 2nd order linearization scheme. The performance of the algorithm is evaluated.

  4. A Multi-Scale Perspective of the Effects of Forest Fragmentation on Birds in Eastern Forests

    Science.gov (United States)

    Frank R. Thompson; Therese M. Donovan; Richard M. DeGraff; John Faaborg; Scott K. Robinson

    2002-01-01

    We propose a model that considers forest fragmentation within a spatial hierarchy that includes regional or biogeographic effects, landscape-level fragmentation effects, and local habitat effects. We hypothesize that effects operate "top down" in that larger scale effects provide constraints or context for smaller scale effects. Bird species' abundance...

  5. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  6. Groundwater development stress: Global-scale indices compared to regional modeling

    Science.gov (United States)

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.

  7. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  8. Estimating landscape-scale impacts of agricultural management on soil carbon using measurements and models

    Science.gov (United States)

    Schipanski, M.; Rosenzweig, S. T.; Robertson, A. D.; Sherrod, L. A.; Ghimire, R.; McMaster, G. S.

    2017-12-01

    Agriculture covers 40% of Earth's ice-free land area and has broad impacts on global biogeochemical cycles. While some agricultural management changes are small in scale or impact, others have the potential to shift biogeochemical cycles at landscape and larger scales if widely adopted. Understanding which management practices have the potential to contribute to climate change adaptation and mitigation while maintaining productivity requires scaling up estimates spatially and temporally. We used on-farm, long-term, and landscape scale datasets to estimate how crop rotations impact soil organic carbon (SOC) accumulation rates under current and future climate scenarios across the semi-arid Central and Southern Great Plains. We used a stratified, landscape-scale soil sampling approach across 96 farm fields to evaluate crop rotation intensity effects on SOC pools and pesticide inputs. Replacing traditional wheat-fallow rotations with more diverse, continuously cropped rotations increased SOC by 17% and 12% in the 0-10 cm and 0-20 cm depths, respectively, and reduced herbicide use by 50%. Using the USDA Cropland Data Layer, we estimated the soil C accumulation and pesticide reduction potentials of shifting to more intensive rotations. We also used a 30-year cropping systems experiment to calibrate and validate the Daycent model to evaluate rotation intensity effects under future climate change scenarios. The model estimated greater SOC accumulation rates under continuously cropped rotations, but SOC stocks peaked and then declined for all cropping systems beyond 2050 under future climate scenarios. Perennial grasslands were the only system estimated to maintain SOC levels in the future. In the Southern High Plains, soil C declined despite increasing input intensity under current weather, while modest gains were simulated under future climate for sorghum-based cropping systems. Our findings highlight the potential vulnerability of semi-arid regions to climate change, which will be

  9. [Unfolding item response model using best-worst scaling].

    Science.gov (United States)

    Ikehara, Kazuya

    2015-02-01

    In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated by the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimulus among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square errors of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimuli estimates generated with the proposed models and those of Usami (2011).
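
    The unfolding best-worst choice rule can be sketched with a maxdiff-style probability model. This is an illustrative formulation, not the authors' exact BWU likelihood: the distance-based utility, the stimulus locations, and the exp(u_best - u_worst) choice rule are all assumptions.

```python
import math

def best_worst_probs(theta, stimuli):
    """Unfolding utilities fall off with person-stimulus distance;
    the (maxdiff-style) probability of picking stimulus j as best and
    k as worst is taken proportional to exp(u_j - u_k)."""
    u = [-abs(theta - b) for b in stimuli]
    pairs = [(j, k) for j in range(len(stimuli))
             for k in range(len(stimuli)) if j != k]
    w = {(j, k): math.exp(u[j] - u[k]) for (j, k) in pairs}
    z = sum(w.values())
    return {p: w[p] / z for p in pairs}

# person located at 0.2 judging three stimuli on the same latent scale
probs = best_worst_probs(theta=0.2, stimuli=[-1.0, 0.0, 1.5])
best_worst = max(probs, key=probs.get)   # most probable (best, worst) pair
```

    As expected under unfolding, the stimulus nearest the person is the most likely "best" choice and the farthest one the most likely "worst".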

  10. Accounting for small scale heterogeneity in ecohydrologic watershed models

    Science.gov (United States)

    Burke, W.; Tague, C.

    2017-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogenous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach.

  11. Cosmic microwave background anisotropies in cold dark matter models with cosmological constant: The intermediate versus large angular scales

    Science.gov (United States)

    Stompor, Radoslaw; Gorski, Krzysztof M.

    1994-01-01

    We obtain predictions for cosmic microwave background anisotropies at angular scales near 1 deg in the context of cold dark matter models with a nonzero cosmological constant, normalized to the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) detection. The results are compared to those computed in the matter-dominated models. We show that the coherence length of the Cosmic Microwave Background (CMB) anisotropy is almost insensitive to cosmological parameters, and the rms amplitude of the anisotropy increases moderately with decreasing total matter density, while being most sensitive to the baryon abundance. We apply these results in the statistical analysis of the published data from the UCSB South Pole (SP) experiment (Gaier et al. 1992; Schuster et al. 1993). We reject most of the Cold Dark Matter (CDM)-Lambda models at the 95% confidence level when both SP scans are simulated together (although the combined data set renders less stringent limits than the Gaier et al. data alone). However, the Schuster et al. data considered alone as well as the results of some other recent experiments (MAX, MSAM, Saskatoon), suggest that typical temperature fluctuations on degree scales may be larger than is indicated by the Gaier et al. scan. If so, CDM-Lambda models may indeed provide, from a point of view of CMB anisotropies, an acceptable alternative to flat CDM models.

  12. Site-scale groundwater flow modelling of Beberg

    International Nuclear Information System (INIS)

    Gylling, B.; Walker, D.; Hartley, L.

    1999-08-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant head boundary conditions from a modified version of the deterministic regional scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10^-4, and a flow-wetted surface of a_r = 1.0 m^2/(m^3 rock) suggest the following statistics for the Base Case: the median travel time is 56 years; the median canister flux is 1.2 x 10^-3 m/year; the median F-ratio is 5.6 x 10^5 year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. Variability within realisations indicates that the change in hydraulic gradient
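
    The Monte Carlo propagation step can be sketched as follows. Everything here is illustrative except the flow porosity of 10^-4 quoted in the abstract: the path length, gradient, and lognormal conductivity parameters are invented, and the real HYDRASTAR computation traces pathlines through a full 3-D stochastic-continuum conductivity field rather than using one scalar conductivity per realisation.

```python
import random
import statistics

random.seed(7)

# All numbers illustrative, except the flow porosity from the abstract.
PATH_LENGTH = 500.0   # m, canister to geosphere discharge (assumed)
GRADIENT = 1e-3       # hydraulic gradient (assumed)
POROSITY = 1e-4       # flow porosity, as quoted in the abstract

def advective_travel_time(conductivity):
    """Travel time = path length / pore velocity, where the pore
    velocity is the Darcy flux K*i divided by the flow porosity."""
    darcy_flux = conductivity * GRADIENT        # m/yr
    return PATH_LENGTH * POROSITY / darcy_flux  # yr

# one lognormal hydraulic conductivity (m/yr) drawn per realisation
times = [advective_travel_time(random.lognormvariate(-3.0, 1.0))
         for _ in range(100)]
median_time = statistics.median(times)
```

    Reporting the median rather than the mean, as the abstract does, keeps the statistic robust to the heavy upper tail that lognormal conductivities produce.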

  13. Internal variability of fine-scale components of meteorological fields in extended-range limited-area model simulations with atmospheric and surface nudging

    Science.gov (United States)

    Separovic, Leo; Husain, Syed Zahid; Yu, Wei

    2015-09-01

    Internal variability (IV) in dynamical downscaling with limited-area models (LAMs) represents a source of error inherent to the downscaled fields, which originates from the sensitive dependence of the models to arbitrarily small modifications. If IV is large it may impose the need for probabilistic verification of the downscaled information. Atmospheric spectral nudging (ASN) can reduce IV in LAMs as it constrains the large-scale components of LAM fields in the interior of the computational domain and thus prevents any considerable penetration of sensitively dependent deviations into the range of large scales. Using initial condition ensembles, the present study quantifies the impact of ASN on IV in LAM simulations in the range of fine scales that are not controlled by spectral nudging. Four simulation configurations that all include strong ASN but differ in the nudging settings are considered. In the fifth configuration, grid nudging of land surface variables toward high-resolution surface analyses is applied. The results show that the IV at scales larger than 300 km can be suppressed by selecting an appropriate ASN setup. At scales between 300 and 30 km, however, in all configurations, the hourly near-surface temperature, humidity, and winds are only partly reproducible. Nudging the land surface variables is found to have the potential to significantly reduce IV, particularly for fine-scale temperature and humidity. On the other hand, hourly precipitation accumulations at these scales are generally irreproducible in all configurations, and probabilistic approach to downscaling is therefore recommended.

  14. Impact of SCALE-UP on science teaching self-efficacy of students in general education science courses

    Science.gov (United States)

    Cassani, Mary Kay Kuhr

    The objective of this study was to evaluate the effect of two pedagogical models used in general education science on non-majors' science teaching self-efficacy. Science teaching self-efficacy can be influenced by inquiry and cooperative learning, through cognitive mechanisms described by Bandura (1997). The Student Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) model of inquiry and cooperative learning incorporates cooperative learning and inquiry-guided learning in large enrollment combined lecture-laboratory classes (Oliver-Hoyo & Beichner, 2004). SCALE-UP was adopted by a small but rapidly growing public university in the southeastern United States in three undergraduate, general education science courses for non-science majors in the Fall 2006 and Spring 2007 semesters. Students in these courses were compared with students in three other general education science courses for non-science majors taught with the standard teaching model at the host university. The standard model combines lecture and laboratory in the same course, with smaller enrollments, and utilizes cooperative learning. Science teaching self-efficacy was measured using the Science Teaching Efficacy Belief Instrument - B (STEBI-B; Bleicher, 2004). A science teaching self-efficacy score was computed from the Personal Science Teaching Efficacy (PTSE) factor of the instrument. Using non-parametric statistics, no significant difference was found between teaching models, between genders, within models, among instructors, or among courses. The number of previous science courses was significantly correlated with PTSE score. Student responses to open-ended questions indicated that students felt the larger enrollment in the SCALE-UP room reduced individual teacher attention but that the large round SCALE-UP tables promoted group interaction. Students responded positively to cooperative and hands-on activities, and would encourage inclusion of more such activities in all of the

  15. Multi-scale modeling of dispersed gas-liquid two-phase flow

    NARCIS (Netherlands)

    Deen, N.G.; Sint Annaland, van M.; Kuipers, J.A.M.

    2004-01-01

    In this work the concept of multi-scale modeling is demonstrated. The idea of this approach is to use different levels of modeling, each developed to study phenomena at a certain length scale. Information obtained at the level of small length scales can be used to provide closure information at the

  16. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization

  17. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP)-model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 to 50 t ha-1 yr-1 dependent on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be either due to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha-1 yr-1 by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha-1 yr-1 are significantly lower than results obtained at hillslope scale confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The

  18. Small-scale modelling of the physiochemical impacts of CO2 leaked from sub-seabed reservoirs or pipelines within the North Sea and surrounding waters.

    Science.gov (United States)

    Dewar, Marius; Wei, Wei; McNeil, David; Chen, Baixin

    2013-08-30

    A two-fluid, small scale numerical ocean model was developed to simulate plume dynamics and increases in water acidity due to leakages of CO2 from potential sub-seabed reservoirs erupting, or pipeline breaching into the North Sea. The location of a leak of such magnitude is unpredictable; therefore, multiple scenarios are modelled with the physiochemical impact measured in terms of the movement and dissolution of the leaked CO2. A correlation for the drag coefficient of bubbles/droplets free rising in seawater is presented and a sub-model to predict the initial bubble/droplet size forming on the seafloor is proposed. With the case studies investigated, the leaked bubbles/droplets fully dissolve before reaching the water surface, where the solution will be dispersed into the larger scale ocean waters. The tools developed can be extended to various locations to model the sudden eruption, which is vital in determining the fate of the CO2 within the local waters. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model

    DEFF Research Database (Denmark)

    Olivares Hernandez, Roberto

    Based on the stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify the metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae the genome-scale model has been constructed...

  20. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals over an entire series of scales; meanwhile, they face serious parameter-correction issues because imaging parameters vary among sensors (geometrical correction, spectral correction, etc.). Utilizing a single-sensor image, fractal methodology was applied to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; and (2) the fractal method is suitable for validation of NDVI. All of these proved that fractal is an effective methodology for studying the scaling of quantitative remote sensing.
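
    The continuous-scaling idea can be illustrated with a toy up-scaling experiment: aggregate a field to a series of coarser resolutions and fit a power law (linear in log-log space) to a statistic of the retrieval. The synthetic field, the choice of standard deviation as the statistic, and the block-averaging scheme are assumptions for illustration, not the authors' ETM+ procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for an NDVI retrieval at the finest resolution
ndvi = 0.2 + 0.6 * rng.random((256, 256))

def upscale(field, factor):
    """Aggregate to a coarser grid by block-averaging."""
    h, w = field.shape
    return (field[:h - h % factor, :w - w % factor]
            .reshape(h // factor, factor, w // factor, factor)
            .mean(axis=(1, 3)))

scales = [1, 2, 4, 8, 16, 32]
stds = [upscale(ndvi, s).std() for s in scales]
# continuous-scaling model: a power law, i.e. linear in log-log space
slope, intercept = np.polyfit(np.log(scales), np.log(stds), 1)
```

    For this uncorrelated synthetic field the fitted slope is close to -1 (averaging s x s pixels shrinks the standard deviation by a factor of s); a real NDVI field with spatial structure would yield a different, diagnostic exponent.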

  1. Multi-Scale Modelling of Deformation and Fracture in a Biomimetic Apatite-Protein Composite: Molecular-Scale Processes Lead to Resilience at the μm-Scale.

    Directory of Open Access Journals (Sweden)

    Dirk Zahn

    Fracture mechanisms of an enamel-like hydroxyapatite-collagen composite model are elaborated by means of molecular and coarse-grained dynamics simulation. Using fully atomistic models, we uncover molecular-scale plastic deformation and fracture processes initiated at the organic-inorganic interface. Furthermore, coarse-grained models are developed to investigate fracture patterns at the μm-scale. At the meso-scale, micro-fractures are shown to reduce local stress and thus prevent material failure after loading beyond the elastic limit. On the basis of our multi-scale simulation approach, we provide a molecular scale rationalization of this phenomenon, which seems key to the resilience of hierarchical biominerals, including teeth and bone.

  2. Multi-scale modeling in morphogenesis: a critical analysis of the cellular Potts model.

    Directory of Open Access Journals (Sweden)

    Anja Voss-Böhme

    Cellular Potts models (CPMs) are used as a modeling framework to elucidate mechanisms of biological development. They allow a spatial resolution below the cellular scale and are applied particularly when problems are studied in which multiple spatial and temporal scales are involved. Despite the increasing usage of CPMs in theoretical biology, this model class has received little attention from mathematical theory. To narrow this gap, the CPMs are subjected to a theoretical study here. It is asked to what extent the updating rules establish an appropriate dynamical model of intercellular interactions and what characterizes the principal behavior at different time scales. It is shown that the long-time behavior of a CPM is degenerate in the sense that the cells consecutively die out, independent of the specific interdependence structure that characterizes the model. While CPMs are naturally defined on finite, spatially bounded lattices, possible extensions to spatially unbounded systems are explored to assess to what extent spatio-temporal limit procedures can be applied to describe the emergent behavior at the tissue scale. To elucidate the mechanistic structure of CPMs, the model class is integrated into a general multiscale framework. It is shown that the central role of the surface fluctuations, which subsume several cellular and intercellular factors, entails substantial limitations for a CPM's exploitation both as a mechanistic and as a phenomenological model.
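
    The energy function at the heart of a CPM can be sketched minimally. This is an illustrative Hamiltonian (an interface term plus an area constraint) with assumed parameters J, LAM and TARGET; real CPMs add type-dependent adhesion energies and Metropolis dynamics driving the surface fluctuations the abstract discusses.

```python
import numpy as np

J, LAM, TARGET = 2.0, 1.0, 4.0   # assumed coupling, stiffness, target area

def cpm_energy(lattice):
    """Minimal cellular Potts Hamiltonian: an interface penalty J for
    every pair of unlike nearest neighbours, plus a quadratic area
    constraint per cell (cell id 0 is the medium, unconstrained)."""
    interfaces = (np.count_nonzero(lattice[:, 1:] != lattice[:, :-1])
                  + np.count_nonzero(lattice[1:, :] != lattice[:-1, :]))
    energy = J * interfaces
    for cell in np.unique(lattice):
        if cell == 0:
            continue
        area = np.count_nonzero(lattice == cell)
        energy += LAM * (area - TARGET) ** 2
    return energy

grid = np.zeros((4, 4), dtype=int)
grid[1:3, 1:3] = 1   # a single 2x2 cell sitting at its target area
```

    With the cell at its target area only the eight unlike-neighbour bonds contribute, so the energy is purely interfacial; shrinking or growing the cell raises the quadratic constraint term.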

  3. Multiphysics pore-scale model for the rehydration of porous foods

    NARCIS (Netherlands)

    Sman, van der R.G.M.; Vergeldt, F.J.; As, van H.; Dalen, van G.; Voda, A.; Duynhoven, van J.P.M.

    2014-01-01

    In this paper we present a pore-scale model describing the multiphysics occurring during the rehydration of freeze-dried vegetables. This pore-scale model is part of a multiscale simulation model, which should explain the effect of microstructure and pre-treatments on the rehydration rate.

  4. solveME: fast and reliable solution of nonlinear ME models

    DEFF Research Database (Denmark)

    Yang, Laurence; Ma, Ding; Ebrahim, Ali

    2016-01-01

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models...

  5. Calibration of the Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    Zyvoloski, G. A.

    2001-01-01

    The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M and O 1999a)

  6. A Pareto scale-inflated outlier model and its Bayesian analysis

    OpenAIRE

    Scollnik, David P. M.

    2016-01-01

    This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three wor...
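
    The contamination mechanism is straightforward to simulate. This sketch uses assumed parameter values (shape 2.5, 5% contamination, tenfold scale inflation) rather than the paper's worked examples, and a plain inverse-CDF draw for the Pareto component.

```python
import random

random.seed(1)

def pareto_sample(shape, scale):
    """Inverse-CDF draw from a Pareto(shape, scale) distribution."""
    u = 1.0 - random.random()            # uniform on (0, 1]
    return scale / u ** (1.0 / shape)

def scale_inflated_sample(n, shape=2.5, scale=1.0,
                          inflation=10.0, eps=0.05):
    """Mostly Pareto(shape, scale); with probability eps a point comes
    from the same shape but an inflated scale (the outlier component)."""
    return [pareto_sample(shape,
                          scale * inflation if random.random() < eps
                          else scale)
            for _ in range(n)]

data = scale_inflated_sample(1000)
```

    Because both mixture components share the shape parameter, the outliers are indistinguishable by tail index alone, which is what motivates the Bayesian mixture treatment via the Gibbs sampler.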

  7. A Lagrangian dynamic subgrid-scale model of turbulence

    Science.gov (United States)

    Meneveau, C.; Lund, T. S.; Cabot, W.

    1994-01-01

    A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.

  8. Two-scale modelling for hydro-mechanical damage

    International Nuclear Information System (INIS)

    Frey, J.; Chambon, R.; Dascalu, C.

    2010-01-01

    Document available in extended abstract form only. Excavation works for underground storage create a damage zone in the nearby rock and affect its hydraulic properties. This degradation, already observed in laboratory tests, can create a leading path for fluids. The micro-fracture phenomena, which occur at a smaller scale and affect the rock permeability, must be fully understood to minimize the transfer process. Many methods can be used to take into account the microstructure of heterogeneous materials. Among them, a method has been developed recently. Instead of using a constitutive equation obtained by phenomenological considerations or by some homogenization technique, the representative elementary volume (R.E.V.) is modelled as a structure and the links between a prescribed kinematics and the corresponding dual forces are deduced numerically. This yields the so-called Finite Element squared method (FE2). From a numerical point of view, a finite element model is used at the macroscopic level and, for each Gauss point, computations on the microstructure give the usual results of a constitutive law. This numerical approach is now classical for properly modelling materials such as composites; the efficiency of such a numerical homogenization process has been shown, and it allows numerical modelling of deformation processes associated with various micro-structural changes. The aim of this work is to describe, through such a method, damage of the rock with a two-scale hydro-mechanical model. The rock damage at the macroscopic scale is directly linked with an analysis on the microstructure. At the macroscopic scale, a two-phase problem is studied: a solid skeleton is filled up by a filtrating fluid. It is necessary to enforce two balance equations and two mass conservation equations. A classical way to deal with such a problem is to work with the balance equation of the whole mixture, and the fluid mass conservation written in a weak form, the mass

  9. BLEVE overpressure: multi-scale comparison of blast wave modeling

    International Nuclear Information System (INIS)

    Laboureur, D.; Buchlin, J.M.; Rambaud, P.; Heymes, F.; Lapebie, E.

    2014-01-01

    BLEVE overpressure modeling has already been widely studied, but only a few validations including the scale effect have been made. After a short overview of the main models available in the literature, a comparison is made with measurements at different scales, taken from previous studies or from experiments performed within the framework of this research project. A discussion of the best model to use in different cases is finally proposed. (authors)

  10. New time scale based k-epsilon model for near-wall turbulence

    Science.gov (United States)

    Yang, Z.; Shih, T. H.

    1993-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = k^(1/2)y/ν instead of y+. Hence, the model can be used for flows with separation. The model constants used are the same as in the high Reynolds number standard k-epsilon model. Thus, the proposed model will also be suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
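
    The key idea, a turbulent time scale bounded from below by the Kolmogorov time scale, can be sketched directly. The constants and the exact functional form below are assumptions in the spirit of the abstract, not the paper's precise formulation.

```python
import math

C_MU, C_T = 0.09, 6.0   # assumed model constants

def eddy_viscosity(k, eps, nu):
    """Eddy viscosity built from the velocity scale sqrt(k) and a
    turbulent time scale k/eps that is bounded from below by a
    Kolmogorov time scale ~ sqrt(nu/eps), so the reformulated
    dissipation equation has no singularity as k -> 0 at the wall."""
    t_turb = k / eps
    t_kolmogorov = C_T * math.sqrt(nu / eps)
    return C_MU * k * max(t_turb, t_kolmogorov)

# far from the wall the usual k/eps time scale is active ...
nu_t_core = eddy_viscosity(1.0, 0.1, 1.5e-5)
# ... while near the wall (k -> 0) the bound keeps the time scale finite
nu_t_wall = eddy_viscosity(1e-12, 1e-3, 1.5e-5)
```

    Away from the wall `t_turb` dominates and the formula reduces to the standard k-epsilon eddy viscosity; the Kolmogorov bound only activates where k becomes very small.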

  11. Modeling Flight: The Role of Dynamically Scaled Free-Flight Models in Support of NASA's Aerospace Programs

    Science.gov (United States)

    Chambers, Joseph

    2010-01-01

    The state of the art in aeronautical engineering has been continually accelerated by the development of advanced analysis and design tools. Used in the early design stages for aircraft and spacecraft, these methods have provided a fundamental understanding of physical phenomena and enabled designers to predict and analyze critical characteristics of new vehicles, including the capability to control or modify unsatisfactory behavior. For example, the relatively recent emergence and routine use of extremely powerful digital computer hardware and software has had a major impact on design capabilities and procedures. Sophisticated new airflow measurement and visualization systems permit the analyst to conduct micro- and macro-studies of properties within flow fields on and off the surfaces of models in advanced wind tunnels. Trade studies of the most efficient geometrical shapes for aircraft can be conducted with blazing speed within a broad scope of integrated technical disciplines, and the use of sophisticated piloted simulators in the vehicle development process permits the most important segment of operations, the human pilot, to make early assessments of the acceptability of the vehicle for its intended mission. Knowledgeable applications of these tools of the trade dramatically reduce risk and redesign, and increase the marketability and safety of new aerospace vehicles. Arguably, one of the more viable and valuable design tools since the advent of flight has been the testing of subscale models. As used herein, the term "model" refers to a physical article used in experimental analyses of a larger full-scale vehicle. The reader is probably aware that many other forms of mathematical and computer-based models are also used in aerospace design; however, such topics are beyond the intended scope of this document. Model aircraft have always been a source of fascination, inspiration, and recreation for humans since the earliest days of flight. Within the scientific

  12. Possible Evolution of the Pulsar Braking Index from Larger than Three to About One

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H. [School of Physics and Electronic Engineering, Guangzhou University, 510006 Guangzhou (China); Kou, F. F., E-mail: htong_2005@163.com [Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi, Xinjiang 830011 (China)

    2017-03-10

    The coupled evolution of pulsar rotation and inclination angle in the wind braking model is calculated. The oblique pulsar tends to align, and this alignment affects its spin-down behavior. As a pulsar evolves from the magneto-dipole-radiation-dominated case to the particle-wind-dominated case, the braking index first increases and then decreases. At early times the braking index may be larger than three; over the following long period it is always smaller than three, with a minimum of about one. This can explain the simultaneous existence of braking indices larger than three and braking indices as low as about one. The pulsar braking index is thus expected to evolve from larger than three to about one, following a general trend from the Crab-like case to the Vela-like case.
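The spin-down behavior summarized above can be illustrated with a minimal two-component law, Ω̇ = −aΩ³ − bΩ (dipole term plus wind term), for which the braking index n = ΩΩ̈/Ω̇² has a closed form. This sketch omits the inclination-angle evolution responsible for n > 3, and the coefficients a, b and test frequencies are illustrative values, not the paper's:

```python
def braking_index(omega, a=1.0, b=1.0):
    """Braking index n = Omega*Omega_ddot / Omega_dot**2 for the
    two-component spin-down law Omega_dot = -a*Omega**3 - b*Omega.
    Differentiating once gives the closed form
    n = (3*a*omega**2 + b) / (a*omega**2 + b)."""
    return (3 * a * omega**2 + b) / (a * omega**2 + b)

# Dipole-dominated fast rotator: n approaches 3 (Crab-like).
n_young = braking_index(omega=100.0)
# Wind-dominated slow rotator: n approaches 1 (Vela-like).
n_old = braking_index(omega=0.01)
```

As the star spins down, the wind term takes over and n slides from 3 toward 1, matching the trend described in the abstract.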

  13. Possible Evolution of the Pulsar Braking Index from Larger than Three to About One

    International Nuclear Information System (INIS)

    Tong, H.; Kou, F. F.

    2017-01-01

    The coupled evolution of pulsar rotation and inclination angle in the wind braking model is calculated. The oblique pulsar tends to align, and this alignment affects its spin-down behavior. As a pulsar evolves from the magneto-dipole-radiation-dominated case to the particle-wind-dominated case, the braking index first increases and then decreases. At early times the braking index may be larger than three; over the following long period it is always smaller than three, with a minimum of about one. This can explain the simultaneous existence of braking indices larger than three and braking indices as low as about one. The pulsar braking index is thus expected to evolve from larger than three to about one, following a general trend from the Crab-like case to the Vela-like case.

  14. Development and testing of watershed-scale models for poorly drained soils

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2005-01-01

    Watershed-scale hydrology and water quality models were used to evaluate the cumulative impacts of land use and management practices on downstream hydrology and nitrogen loading of poorly drained watersheds. Field-scale hydrology and nutrient dynamics are predicted by DRAINMOD in both models. In the first model (DRAINMOD-DUFLOW), field-scale predictions are coupled...

  15. Coulomb-gas scaling, superfluid films, and the XY model

    International Nuclear Information System (INIS)

    Minnhagen, P.; Nylen, M.

    1985-01-01

    Coulomb-gas-scaling ideas are invoked as a link between the superfluid density of two-dimensional ⁴He films and the XY model; the Coulomb-gas-scaling function epsilon(X) is extracted from experiments and is compared with Monte Carlo simulations of the XY model. The agreement is found to be excellent.

  16. COMPARISON OF MULTI-SCALE DIGITAL ELEVATION MODELS FOR DEFINING WATERWAYS AND CATCHMENTS OVER LARGE AREAS

    Directory of Open Access Journals (Sweden)

    B. Harris

    2012-07-01

    Full Text Available Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single-catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also provide a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km² to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km², and a detailed 13 km² area within it), including various data types, scales, qualities, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution Lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
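Waterway delineation from a DEM typically starts from a flow-direction grid. The sketch below implements the standard D8 steepest-descent rule on a toy DEM; it is illustrative only, not the GIS workflow used in the paper, and distance weighting of diagonal neighbours is omitted for brevity:

```python
def d8_flow_directions(dem):
    """D8 flow routing: each cell drains to the steepest-descent
    neighbour among its eight adjacent cells, a standard first step
    in deriving stream networks from a DEM."""
    rows, cols = len(dem), len(dem[0])
    directions = {}
    for r in range(rows):
        for c in range(cols):
            best, drop_best = None, 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        drop = dem[r][c] - dem[rr][cc]
                        if drop > drop_best:
                            best, drop_best = (rr, cc), drop
            directions[(r, c)] = best  # None marks a pit or flat cell
    return directions

# Tiny synthetic DEM sloping toward the lower-right corner.
dem = [[9.0, 8.0, 7.0],
       [8.0, 6.0, 5.0],
       [7.0, 5.0, 3.0]]
flow = d8_flow_directions(dem)
```

Accumulating these directions downstream yields contributing areas, from which stream networks are thresholded; the sensitivity of that step to DEM cell size is what the study above compares.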

  17. Model of cosmology and particle physics at an intermediate scale

    International Nuclear Information System (INIS)

    Bastero-Gil, M.; Di Clemente, V.; King, S. F.

    2005-01-01

    We propose a model of cosmology and particle physics in which all relevant scales arise in a natural way from an intermediate string scale. We are led to assign the string scale to the intermediate scale M* ∼ 10¹³ GeV by four independent pieces of physics: electroweak symmetry breaking; the μ parameter; the axion scale; and the neutrino mass scale. The model involves hybrid inflation with the waterfall field N being responsible for generating the μ term, the right-handed neutrino mass scale, and the Peccei-Quinn symmetry breaking scale. The large scale structure of the Universe is generated by the lightest right-handed sneutrino playing the role of a coupled curvaton. We show that the correct curvature perturbations may be successfully generated providing the lightest right-handed neutrino is weakly coupled in the seesaw mechanism, consistent with sequential dominance

  18. Geometrical scaling vs factorizable eikonal models

    CERN Document Server

    Kiang, D

    1975-01-01

    Among various theoretical explanations or interpretations for the experimental data on the differential cross-sections of elastic proton-proton scattering at CERN ISR, the following two seem to be most remarkable: A) the excellent agreement of the Chou-Yang model prediction of d sigma /dt with data at square root s=53 GeV, B) the general manifestation of geometrical scaling (GS). The paper confronts GS with eikonal models with factorizable opaqueness, with special emphasis on the Chou-Yang model. (12 refs).

  19. Multi-scale modeling of inter-granular fracture in UO2

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tonks, Michael R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, S. Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    A hierarchical multi-scale approach is pursued in this work to investigate the influence of porosity and of pore and grain size on intergranular brittle fracture in UO2. In this approach, molecular dynamics simulations are performed to obtain the fracture properties of different grain boundary types. A phase-field model is then utilized to perform intergranular fracture simulations of representative microstructures with different porosities, pore sizes, and grain sizes; in these simulations the grain boundary fracture properties obtained from molecular dynamics are used. The responses from the phase-field fracture simulations are then fitted with a stress-based brittle fracture model usable at the engineering scale. This approach spans three different length and time scales and allows the development of a microstructurally informed engineering-scale model from properties evaluated at the atomistic scale.

  20. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation for approaching European hydrology with respect to observed patterns on large scales and the ability of models to capture them. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of a model's strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also enable the detection of shortcomings in model assumptions and thus a refinement of the current perception of hydrological systems.
The ability of a multi-model ensemble of nine large-scale

  1. Classically scale-invariant B–L model and conformal gravity

    International Nuclear Information System (INIS)

    Oda, Ichiro

    2013-01-01

    We consider a coupling of conformal gravity to the classically scale-invariant B–L extended standard model which has been recently proposed as a phenomenologically viable model realizing the Coleman–Weinberg mechanism of breakdown of the electroweak symmetry. As in a globally scale-invariant dilaton gravity, it is also shown in a locally scale-invariant conformal gravity that without recourse to the Coleman–Weinberg mechanism, the B–L gauge symmetry is broken in the process of spontaneous symmetry breakdown of the local scale invariance (Weyl invariance) at the tree level and as a result the B–L gauge field becomes massive via the Higgs mechanism. As a bonus of conformal gravity, the massless dilaton field does not appear and the parameters in front of the non-minimal coupling of gravity are completely fixed in the present model. This observation clearly shows that the conformal gravity has a practical application even if the scalar field does not possess any dynamical degree of freedom owing to the local scale symmetry

  2. Scaling limit for the Dereziński-Gérard model

    OpenAIRE

    OHKUBO, Atsushi

    2010-01-01

    We consider a scaling limit for the Dereziński-Gérard model. We derive an effective potential by taking a scaling limit of the total Hamiltonian of the Dereziński-Gérard model. Our method for deriving an effective potential is independent of whether or not the quantum field has a nonnegative mass. As an application of the theory developed in the present paper, we derive an effective potential for the Nelson model.

  3. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…

  4. Scaling a Convection-Resolving RCM to Near-Global Scales

    Science.gov (United States)

    Leutwyler, D.; Fuhrer, O.; Chadha, T.; Kwasniewski, G.; Hoefler, T.; Lapillonne, X.; Lüthi, D.; Osuna, C.; Schar, C.; Schulthess, T. C.; Vogt, H.

    2017-12-01

    In recent years, the first decade-long kilometer-scale-resolution RCM simulations have been performed on continental-scale computational domains. However, the size of the planet Earth is still an order of magnitude larger, and thus the computational implications of performing global climate simulations at this resolution are challenging. We explore the gap between the currently established RCM simulations and global simulations by scaling the GPU-accelerated version of the COSMO model to a near-global computational domain. To this end, the evolution of an idealized moist baroclinic wave has been simulated over the course of 10 days with a grid spacing of up to 930 m. The computational mesh employs 36'000 x 16'001 x 60 grid points and covers 98.4% of the planet's surface. The code shows perfect weak scaling up to 4'888 nodes of the Piz Daint supercomputer and yields 0.043 simulated years per day (SYPD), approximately one seventh of the 0.2-0.3 SYPD required to conduct AMIP-type simulations. However, at half the resolution (1.9 km) we observed 0.23 SYPD. Besides the formation of frontal precipitating systems containing embedded, explicitly resolved convective motions, the simulations reveal a secondary instability that leads to cut-off warm-core cyclonic vortices in the cyclone's core once the grid spacing is refined to the kilometer scale. The explicit representation of embedded moist convection and of the previously unresolved instabilities exhibits physically different behavior in comparison to coarser-resolution simulations. The study demonstrates that global climate simulations using kilometer-scale resolution are imminent and serves as a baseline benchmark for global climate model applications and future exascale supercomputing systems.
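The throughput figures quoted above translate directly into wall-clock cost. A minimal sketch of the arithmetic, using the study's reported SYPD values but an assumed 30-year run length as the example:

```python
def wall_clock_days(simulated_years, sypd):
    """Wall-clock days needed to simulate `simulated_years`
    at a throughput of `sypd` simulated years per day."""
    return simulated_years / sypd

# A hypothetical 30-year AMIP-type run at the reported 930 m throughput:
days_930m = wall_clock_days(30, 0.043)
# The same run at the reported 1.9 km throughput:
days_1_9km = wall_clock_days(30, 0.23)
```

At 0.043 SYPD the run takes nearly two years of wall-clock time, while halving the resolution brings it near the feasible range, which is why 0.2-0.3 SYPD is cited as the threshold for AMIP-type experiments.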

  5. Site-scale groundwater flow modelling of Beberg

    Energy Technology Data Exchange (ETDEWEB)

    Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden); Walker, D. [Duke Engineering and Services (United States); Hartley, L. [AEA Technology, Harwell (United Kingdom)

    1999-08-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, the Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant-head boundary conditions from a modified version of the deterministic regional-scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10⁻⁴, and a flow-wetted surface of a_r = 1.0 m²/(m³ rock) suggest the following statistics for the Base Case: the median travel time is 56 years; the median canister flux is 1.2 × 10⁻³ m/year; the median F-ratio is 5.6 × 10⁵ year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies.
Variability within realisations indicates

  6. Optogenetic stimulation of a meso-scale human cortical model

    Science.gov (United States)

    Selvaraj, Prashanth; Szeri, Andrew; Sleigh, Jamie; Kirsch, Heidi

    2015-03-01

    Neurological phenomena like sleep and seizures depend not only on the activity of individual neurons, but on the dynamics of neuron populations as well. Meso-scale models of cortical activity provide a means to study neural dynamics at the level of neuron populations. Additionally, they offer a safe and economical way to test the effects and efficacy of stimulation techniques on the dynamics of the cortex. Here, we use a physiologically relevant meso-scale model of the cortex to study the hypersynchronous activity of neuron populations during epileptic seizures. The model consists of a set of stochastic, highly non-linear partial differential equations. Next, we use optogenetic stimulation to control seizures in a hyperexcited cortex, and to induce seizures in a normally functioning cortex. The high spatial and temporal resolution this method offers makes a strong case for the use of optogenetics in treating meso-scale cortical disorders such as epileptic seizures. We use bifurcation analysis to investigate the effect of optogenetic stimulation in the meso-scale model, and its efficacy in suppressing the non-linear dynamics of seizures.

  7. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the "Technical Work Plan for: Performance Assessment Unsaturated Zone" (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, "Coupled Effects on Flow and Seepage". The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, "Models". This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC Model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC

  8. A Model Study of Small-Scale World Map Generalization

    Science.gov (United States)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics. There is a surging demand all over the world for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale production of large-format global maps, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved model (with map and data separated) for map generalization, which separates geographic data from mapping data and mainly comprises a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols of the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1,086 subtypes, 21,845 basic algorithms, and over 2,500 relevant functional modules. In order to evaluate the accuracy and visual effect of our model for topographic and thematic maps, we take small-scale world map generalization as an example. After the generalization process, combining and simplifying scattered islands makes the map clearer at 1:2.1 billion scale, and map features become more complete and accurate. The model not only significantly enhances map generalization at various scales but also achieves integration among map production at various scales, suggesting that it provides a reference for cartographic generalization across scales.

  9. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    Science.gov (United States)

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated an approximately 96.6% decrease in computing time. With a single multicore compute node (bottom result), computing time decreased by 81.8% relative to serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
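The embarrassingly parallel Monte Carlo pattern described above can be sketched with Python's standard multiprocessing module; `run_replicate` is a hypothetical stand-in for one SyncroSim realization, not the study's actual workflow:

```python
import random
from multiprocessing import Pool

def run_replicate(seed):
    """Stand-in for one Monte Carlo replicate of a state-and-transition
    simulation: deterministically seeded, it just draws one pseudo-random
    outcome, so replicates are reproducible and fully independent."""
    rng = random.Random(seed)
    return rng.random()

def run_ensemble(n_replicates, n_workers=4):
    """Distribute independent replicates across worker processes.
    Because no replicate communicates with another, the ensemble is
    embarrassingly parallel and scales with the number of workers."""
    with Pool(processes=n_workers) as pool:
        return pool.map(run_replicate, range(n_replicates))

if __name__ == "__main__":
    results = run_ensemble(100)
    print(len(results))
```

On a distributed system the same pattern applies, with each compute node taking a disjoint range of replicate seeds instead of a local worker process.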

  10. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William; Dietiker, Jean-François; Li, Tingwen; Sarkar, Avik; Sun, Xin

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
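The final step, finding the smallest flow rate whose capture efficiency meets 90% with 95% confidence, can be sketched as a percentile search over an ensemble. The saturating efficiency curve and the parameter distribution below are assumptions for illustration, not the calibrated CFD model:

```python
import math
import random

def capture_efficiency(flow, k):
    """Hypothetical saturating response of capture efficiency to flow
    rate: a stand-in for the multiphase reactive flow model."""
    return 1.0 - math.exp(-k * flow)

def min_flow_for_target(target=0.90, confidence=0.95, n_samples=2000, seed=0):
    """Smallest flow rate on a coarse grid whose lower (1 - confidence)
    efficiency quantile meets the target.  The rate coefficient k is
    sampled from an assumed uniform 'calibrated' distribution, playing
    the role of the posterior from laboratory-scale calibration."""
    rng = random.Random(seed)
    ks = [rng.uniform(0.8, 1.2) for _ in range(n_samples)]
    flow = 0.1
    while flow < 20.0:
        effs = sorted(capture_efficiency(flow, k) for k in ks)
        lower = effs[int((1.0 - confidence) * n_samples)]  # ~5th percentile
        if lower >= target:
            return flow
        flow += 0.1
    return None

flow_needed = min_flow_for_target()
```

The design criterion is met when the 5th percentile of the ensemble, rather than its mean, clears the 90% target, which is what "with 95% confidence" encodes.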

  11. Physical modelling of granular flows at multiple-scales and stress levels

    Science.gov (United States)

    Take, Andy; Bowman, Elisabeth; Bryant, Sarah

    2015-04-01

    The rheology of dry granular flows is an area of significant focus within the granular physics, geoscience, and geotechnical engineering research communities. Studies performed to better understand granular flows in manufacturing, materials processing or bulk handling applications have typically focused on the behavior of steady, continuous flows. As a result, much of the research on relating the fundamental interaction of particles to the rheological or constitutive behaviour of granular flows has been performed under (usually) steady-state conditions and low stress levels. However, landslides, which are the primary focus of the geoscience and geotechnical engineering communities, are by nature unsteady flows defined by a finite source volume and at flow depths much larger than typically possible in laboratory experiments. The objective of this paper is to report initial findings of experimental studies currently being conducted using a new large-scale landslide flume (8 m long, 2 m wide slope inclined at 30° with a 35 m long horizontal base section) and at elevated particle self-weight in a 10 m diameter geotechnical centrifuge to investigate the granular flow behavior at multiple-scales and stress levels. The transparent sidewalls of the two flumes used in the experimental investigation permit the combination of observations of particle-scale interaction (using high-speed imaging through transparent vertical sidewalls at over 1000 frames per second) with observations of the distal reach of the landslide debris. These observations are used to investigate the applicability of rheological models developed for steady state flows (e.g. the dimensionless inertial number) in landslide applications and the robustness of depth-averaged approaches to modelling dry granular flow at multiple scales. These observations indicate that the dimensionless inertial number calculated for the flow may be of limited utility except perhaps to define a general state (e.g. liquid

  12. Pelamis wave energy converter. Verification of full-scale control using a 7th scale model

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The Pelamis Wave Energy Converter is a new concept for converting wave energy for several applications including generation of electric power. The machine is flexibly moored and swings to meet the water waves head-on. The system is semi-submerged and consists of cylindrical sections linked by hinges. The mechanical operation is described in outline. A one-seventh scale model was built and tested and the outcome was sufficiently successful to warrant the building of a full-scale prototype. In addition, a one-twentieth scale model was built and has contributed much to the research programme. The work is supported financially by the DTI.

  13. New Empirical Earthquake Source‐Scaling Laws

    KAUST Repository

    Thingbaijam, Kiran Kumar S.

    2017-12-13

    We develop new empirical scaling laws for rupture width W, rupture length L, rupture area A, and average slip D, based on a large database of rupture models. The database incorporates recent earthquake source models in a wide magnitude range (M 5.4-9.2) and events of various faulting styles. We apply general orthogonal regression, instead of ordinary least-squares regression, to account for measurement errors of all variables and to obtain mutually self-consistent relationships. We observe that L grows more rapidly with M than W does. The fault aspect ratio (L/W) tends to increase with fault dip, which generally increases from reverse-faulting, to normal-faulting, to strike-slip events. At the same time, subduction-interface earthquakes have significantly higher W (hence a larger rupture area A) compared to other faulting regimes. For strike-slip events, the growth of W with M is strongly inhibited, whereas the scaling of L agrees with the L-model behavior (D correlated with L). However, at a regional scale for which seismogenic depth is essentially fixed, the scaling behavior corresponds to the W model (D not correlated with L). Self-similar M-log A scaling behavior is observed to be consistent for all cases except normal-faulting events. Interestingly, the ratio D/W (a proxy for average stress drop) tends to increase with M, except for shallow crustal reverse-faulting events, suggesting the possibility of scale-dependent stress drop. The observed variations in source-scaling properties for different faulting regimes can be interpreted in terms of geological and seismological factors. We find substantial differences between our new scaling relationships and those of previous studies. Therefore, our study provides critical updates on the source-scaling relations needed in seismic and tsunami hazard analysis and engineering applications.
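General orthogonal regression, as used above, treats both variables as error-prone by minimising perpendicular rather than vertical distances. A minimal total-least-squares sketch, assuming equal error variances; the relation log10 A = M - 4 used for the synthetic data is an illustrative self-similar law, not the paper's fitted coefficients:

```python
import math

def orthogonal_fit(x, y):
    """Total-least-squares (orthogonal) line fit y = a + b*x, minimising
    perpendicular distances so both variables carry measurement error.
    The slope is that of the first principal axis of the centred scatter."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = my - b * mx
    return a, b

# Synthetic self-similar data, log10(A) = M - 4, recovered by the fit.
mags = [5.5, 6.0, 6.5, 7.0, 7.5, 8.0]
log_areas = [m - 4.0 for m in mags]
a, b = orthogonal_fit(mags, log_areas)
```

Unlike ordinary least squares, this fit is symmetric in x and y, which is what makes the resulting W(M), L(M), and A(M) relations mutually self-consistent.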

  14. Hydrological modelling over different scales on the edge of the permafrost zone: approaching model realism based on experimentalists' knowledge

    Science.gov (United States)

    Nesterova, Natalia; Makarieva, Olga; Lebedeva, Lyudmila

    2017-04-01

    Quantitative and qualitative experimental data help to advance both the understanding of runoff generation and modelling strategies. There is a significant lack of such information for the dynamic and vulnerable cold regions. The aim of the study is to make use of historically collected experimental hydrological data for modelling poorly gauged river basins on larger scales near the southern margin of the permafrost zone in Eastern Siberia. The experimental study site "Mogot" includes the Nelka river (30.8 km²) and its three tributaries with watershed areas from 2 to 5.8 km². It is located in the upper, elevated (500-1500 m a.s.l.) part of the Amur River basin. Mean annual temperature and precipitation are -7.5°C and 555 mm, respectively. The mountain tops, with sparse vegetation, have well-drained soil that prevents water accumulation. Larch forest on the northern slopes has a thick organic layer, which causes a shallow active layer and relatively small subsurface water storage. Soil on the southern slopes has a thinner organic layer and thaws to a depth of 1.6 m. Flood plains are the wettest landscape, with the highest water storage capacity. Measured monthly evaporation varies from 9 to 100 mm through the year. The experimental data show the importance of changes in air temperature and precipitation with elevation; their gradients were taken into account in the hydrological simulations. Model parameterization was developed according to the quantitative and qualitative data available at the Mogot station. The process-based hydrological Hydrograph model was used in the study. It explicitly describes hydrological processes in different permafrost environments. The flexibility of the Hydrograph model makes it possible to take advantage of the experimental data for model set-up. The model uses basic meteorological data as input. The level of model complexity is suitable for a remote, sparsely gauged region such as Southern Siberia as it allows for a priori assessment of the model parameters. Model simulation

  15. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing-power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming through the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal/external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.

  16. Scaling and percolation in the small-world network model

    Energy Technology Data Exchange (ETDEWEB)

    Newman, M. E. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States); Watts, D. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States)

    1999-12-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society.
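
    The large- to small-world crossover described above can be seen in a minimal sketch of the Watts-Strogatz construction; the parameters (n = 200 nodes, k = 4 neighbours, rewiring probability p) are illustrative choices, not values from the paper.

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbours
    (k even); each clockwise edge is rewired with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old, new = (i + j) % n, rng.randrange(n)
                if new != i and new not in adj[i] and old in adj[i]:
                    adj[i].remove(old); adj[old].remove(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def mean_distance(adj):
    """Average shortest-path length over reachable ordered pairs (BFS)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

L_ring = mean_distance(watts_strogatz(200, 4, 0.0))   # regular lattice
L_small = mean_distance(watts_strogatz(200, 4, 0.1))  # a few shortcuts
print(L_ring, L_small)
```

    Even a small rewiring probability collapses the mean vertex-vertex distance, the crossover being governed by the length-scale discussed in the abstract.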

  17. Scaling and percolation in the small-world network model

    International Nuclear Information System (INIS)

    Newman, M. E. J.; Watts, D. J.

    1999-01-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society

  18. Stochastic layer scaling in the two-wire model for divertor tokamaks

    Science.gov (United States)

    Ali, Halima; Punjabi, Alkesh; Boozer, Allen

    2009-06-01

    The question of magnetic field structure in the vicinity of the separatrix in divertor tokamaks is studied. The authors have investigated this problem earlier in a series of papers, using various mathematical techniques. In the present paper, the two-wire model (TWM) [Reiman, A. 1996 Phys. Plasmas 3, 906] is considered. It is noted that, in the TWM, it is useful to consider an extra equation expressing magnetic flux conservation. This equation does not add any more information to the TWM, since the equation is derived from the TWM. This equation is useful for controlling the step size in the numerical integration of the TWM equations. The TWM with the extra equation is called the flux-preserving TWM. Nevertheless, the technique is apparently still plagued by numerical inaccuracies when the perturbation level is low, resulting in an incorrect scaling of the stochastic layer width. The stochastic broadening of the separatrix in the flux-preserving TWM is compared with that in the low mn (poloidal mode number m and toroidal mode number n) map (LMN) [Ali, H., Punjabi, A., Boozer, A. and Evans, T. 2004 Phys. Plasmas 11, 1908]. The flux-preserving TWM and LMN both give Boozer-Rechester 0.5 power scaling of the stochastic layer width with the amplitude of magnetic perturbation when the perturbation is sufficiently large [Boozer, A. and Rechester, A. 1978, Phys. Fluids 21, 682]. The flux-preserving TWM gives a larger stochastic layer width when the perturbation is low, while the LMN gives correct scaling in the low perturbation region. Area-preserving maps such as the LMN respect the Hamiltonian structure of field line trajectories, and have the added advantage of computational efficiency. Also, for a 1½ degree-of-freedom Hamiltonian system such as field lines, maps do not give Arnold diffusion.

  19. NFAP calculation of pressure response of 1/6th scale model containment structure

    International Nuclear Information System (INIS)

    Costantino, C.J.; Pepper, S.; Reich, M.

    1988-01-01

    The details associated with the NFAP calculation of the pressure response of the 1/6th scale model containment structure are discussed in this paper. Comparisons are presented of some of the primary items of interest with those determined from the experiment. It was found from this comparison that the hoop response of the containment wall was adequately predicted by the NFAP finite element calculation, including the response in the high pressure, high strain range at which cracking of the concrete and yielding of the hoop reinforcement occurred. In the vertical or meridional direction, it was found that the model was significantly softer than predicted by the finite element calculation; that is, the vertical strains in the test were three to four times larger than computed in the NFAP calculation. These differences were noted even at low strain levels at which the concrete would not be expected to be cracked under tensile loadings. Simplified calculations for the containment indicate that the vertical stiffness of the wall is similar to that which would be determined by assuming the concrete fully cracked. Thus, the experiment indicates an anomalous behavior in the vertical direction

  20. State-of-the-Art Report on Multi-scale Modelling of Nuclear Fuels

    International Nuclear Information System (INIS)

    Bartel, T.J.; Dingreville, R.; Littlewood, D.; Tikare, V.; Bertolus, M.; Blanc, V.; Bouineau, V.; Carlot, G.; Desgranges, C.; Dorado, B.; Dumas, J.C.; Freyss, M.; Garcia, P.; Gatt, J.M.; Gueneau, C.; Julien, J.; Maillard, S.; Martin, G.; Masson, R.; Michel, B.; Piron, J.P.; Sabathier, C.; Skorek, R.; Toffolon, C.; Valot, C.; Van Brutzel, L.; Besmann, Theodore M.; Chernatynskiy, A.; Clarno, K.; Gorti, S.B.; Radhakrishnan, B.; Devanathan, R.; Dumont, M.; Maugis, P.; El-Azab, A.; Iglesias, F.C.; Lewis, B.J.; Krack, M.; Yun, Y.; Kurata, M.; Kurosaki, K.; Largenton, R.; Lebensohn, R.A.; Malerba, L.; Oh, J.Y.; Phillpot, S.R.; Tulenko, J. S.; Rachid, J.; Stan, M.; Sundman, B.; Tonks, M.R.; Williamson, R.; Van Uffelen, P.; Welland, M.J.; Valot, Carole; Stan, Marius; Massara, Simone; Tarsi, Reka

    2015-10-01

    The Nuclear Science Committee (NSC) of the Nuclear Energy Agency (NEA) has undertaken an ambitious programme to document state-of-the-art of modelling for nuclear fuels and structural materials. The project is being performed under the Working Party on Multi-Scale Modelling of Fuels and Structural Material for Nuclear Systems (WPMM), which has been established to assess the scientific and engineering aspects of fuels and structural materials, describing multi-scale models and simulations as validated predictive tools for the design of nuclear systems, fuel fabrication and performance. The WPMM's objective is to promote the exchange of information on models and simulations of nuclear materials, theoretical and computational methods, experimental validation and related topics. It also provides member countries with up-to-date information, shared data, models, and expertise. The goal is also to assess needs for improvement and address them by initiating joint efforts. The WPMM reviews and evaluates multi-scale modelling and simulation techniques currently employed in the selection of materials used in nuclear systems. It serves to provide advice to the nuclear community on the developments needed to meet the requirements of modelling for the design of different nuclear systems. The original WPMM mandate had three components (Figure 1), with the first component currently completed, delivering a report on the state-of-the-art of modelling of structural materials. The work on modelling was performed by three expert groups, one each on Multi-Scale Modelling Methods (M3), Multi-Scale Modelling of Fuels (M2F) and Structural Materials Modelling (SMM). WPMM is now composed of three expert groups and two task forces providing contributions on multi-scale methods, modelling of fuels and modelling of structural materials. This structure will be retained, with the addition of task forces as new topics are developed. The mandate of the Expert Group on Multi-Scale Modelling of

  1. Reference Priors for the General Location-Scale Model

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately

  2. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    Science.gov (United States)

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representative scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law has motivated different understandings of the dependence between these two scalings, which has yet to be clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before reaching a stable state in which Heaps' law still holds while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help to understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease.
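
    As an illustrative sketch (not the paper's metapopulation model), sampling tokens from a fixed power-law (Zipf-like) frequency distribution already produces sublinear Heaps growth in the number of distinct tokens; the vocabulary size and exponent below are assumptions for the demonstration.

```python
import bisect
import random

def make_zipf_sampler(vocab=10000, exponent=1.5, seed=1):
    """Draw ranks 1..vocab with probability proportional to 1/rank**exponent."""
    rng = random.Random(seed)
    weights = [1.0 / r ** exponent for r in range(1, vocab + 1)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return lambda: min(bisect.bisect_left(cdf, rng.random()), vocab - 1) + 1

draw = make_zipf_sampler()
seen, heaps = set(), {}
for t in range(1, 20001):
    seen.add(draw())
    if t in (5000, 10000, 20000):
        heaps[t] = len(seen)   # Heaps' law: distinct count grows sublinearly
print(heaps)
```

    Doubling the sequence length less than doubles the number of distinct tokens, which is the Heaps-law signature coexisting with the Zipf-distributed frequencies.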

  3. Model Scaling of Hydrokinetic Ocean Renewable Energy Systems

    Science.gov (United States)

    von Ellenrieder, Karl; Valentine, William

    2013-11-01

    Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).

  4. Scale-free, axisymmetric galaxy models with little angular momentum

    International Nuclear Information System (INIS)

    Richstone, D.O.

    1980-01-01

    Two scale-free models of elliptical galaxies are constructed using a self-consistent field approach developed by Schwarzschild. Both models have concentric, oblate-spheroidal equipotential surfaces, with a logarithmic dependence of the potential on central distance. The axial ratio of the equipotential surfaces is 4:3, and the axial ratio of the density level surfaces is 2.5:1 (corresponding to an E6 galaxy). Each model satisfies the Poisson and steady-state Boltzmann equations on time scales of order 100 galactic years

  5. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim, eliminating the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous model design that remains sufficiently accurate. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  6. Macro-scale turbulence modelling for flows in porous media

    International Nuclear Information System (INIS)

    Pinson, F.

    2006-03-01

    This work deals with the macroscopic modeling of turbulence in porous media. It concerns heat exchangers and nuclear reactors as well as urban flows, etc. The objective of this study is to describe, in a homogenized way by means of a spatial averaging operator, turbulent flows in a solid matrix. In addition to this first operator, a statistical averaging operator makes it possible to handle the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations for the kind of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and turbulence modeling at the macroscopic scale (Reynolds tensor and turbulent dispersion). To this end, we build on the local modeling of turbulence, and more precisely on k-ε RANS models. The methodology of the dispersion study, derived from volume averaging theory, is extended to turbulent flows. Its application includes the simulation, at a microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even in the turbulent regime, dispersion remains one of the dominant phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominant role of the drag force in the kinetic-energy transfers between scales. Transfers between the mean and turbulent parts of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic modeling of turbulence and leads us to define the sub-filter production and the wake dissipation. A three-equation macro-scale model is derived, based on balance equations for the turbulent kinetic energy, the viscous dissipation and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed.
This model is then successfully applied to the study of

  7. Editorial Commentary: The Larger Holes or Larger Number of Holes We Drill in the Coracoid, the Weaker the Coracoid Becomes.

    Science.gov (United States)

    Brady, Paul

    2016-06-01

    The larger holes or larger number of holes we drill in the coracoid, the weaker the coracoid becomes. Thus, minimizing bone holes (both size and number) is required to lower risk of coracoid process fracture, in patients in whom transosseous shoulder acromioclavicular joint reconstruction is indicated. A single 2.4-mm-diameter tunnel drilled through both the clavicle and the coracoid lowers the risk of fracture, but the risk cannot be entirely eliminated. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  8. Application and comparison of the SCS-CN-based rainfall-runoff model in meso-scale watershed and field scale

    Science.gov (United States)

    Luo, L.; Wang, Z.

    2010-12-01

    The Soil Conservation Service Curve Number (SCS-CN) based hydrologic model has been widely used for agricultural watersheds in recent years. However, applying it can introduce errors, because geographical and climatological conditions differ between regions. This paper introduces a more adaptable and transferable model based on the modified SCS-CN method, specialized for two study regions of different scale. Taking into account the characteristic hydrometeorological and surface conditions of the Zhanghe irrigation district in southern China, SCS-CN-based models were established. The Xinbu-Qiao River basin (area = 1207 km2) and the Tuanlin runoff test area (area = 2.87 km2) were taken as the basin-scale and field-scale study areas in the Zhanghe irrigation district, extending the application from an ordinary meso-scale watershed to the field scale in Zhanghe's paddy-field-dominated irrigated area. Based on measured data on land use, soil classification, hydrology and meteorology, quantitative evaluations and modifications of two components, i.e. the initial (preceding) loss and the runoff curve number, were proposed, along with a table of CN values for different land uses and an AMC (antecedent moisture condition) grading standard fitted to the study cases. The simulation precision was increased by deriving a 12-h unit hydrograph for the field area, which was then simplified. Comparison between the scales shows that the SCS-CN model is used more effectively at the field scale after its parameters have been calibrated at the basin scale. These results can help reveal the rainfall-runoff behaviour of the district. Differences between the two study regions in the established SCS-CN model's parameters are also considered; varied forms of land use and the impacts of human activities were the important factors affecting rainfall-runoff relations in the Zhanghe irrigation district.
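
    The standard SCS-CN runoff equation that such models build on can be sketched as follows; the curve number and storm depth in the example are illustrative values, not figures from the study.

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from storm rainfall P (mm) for a curve
    number CN, with initial abstraction Ia = ia_ratio * S."""
    s = 25400.0 / cn - 254.0      # potential maximum retention S (mm)
    ia = ia_ratio * s             # initial abstraction Ia (mm)
    if p_mm <= ia:
        return 0.0                # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(100.0, 80)      # 100 mm storm on a CN = 80 surface
print(round(q, 1))                # -> 50.5 mm of direct runoff
```

    Calibrating CN (and the initial-abstraction ratio) against observed events, as the paper does at basin scale before transferring to field scale, is what adapts this generic relation to local conditions.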

  9. Multi-scale habitat selection modeling: A review and outlook

    Science.gov (United States)

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  10. Multi-scale model of epidemic fade-out: Will local extirpation events inhibit the spread of white-nose syndrome?

    Science.gov (United States)

    O'Reagan, Suzanne M; Magori, Krisztian; Pulliam, J Tomlin; Zokan, Marcus A; Kaul, RajReni B; Barton, Heather D; Drake, John M

    2015-04-01

    White-nose syndrome (WNS) is an emerging infectious disease that has resulted in severe declines of its hibernating bat hosts in North America. The ongoing epidemic of white-nose syndrome is a multi-scale phenomenon because it causes hibernaculum-level extirpations, while simultaneously spreading over larger spatial scales. We investigate a neglected topic in ecological epidemiology: how local pathogen-driven extirpations impact large-scale pathogen spread. Previous studies have identified risk factors for propagation of WNS over hibernaculum and landscape scales but none of these have tested the hypothesis that separation of spatial scales and disease-induced mortality at the hibernaculum level might slow or halt its spread. To test this hypothesis, we developed a mechanistic multi-scale model parameterized using white-nose syndrome county and site incidence data that connects hibernaculum-level susceptible-infectious-removed (SIR) epidemiology to the county-scale contagion process. Our key result is that hibernaculum-level extirpations will not inhibit county-scale spread of WNS. We show that over 80% of counties of the contiguous USA are likely to become infected before the current epidemic is over and that the geometry of habitat connectivity is such that host refuges are exceedingly rare. The macroscale spatiotemporal infection pattern that emerges from local SIR epidemiological processes falls within a narrow spectrum of possible outcomes, suggesting that recolonization, rescue effects, and multi-host complexities at local scales are not important to forward propagation of WNS at large spatial scales. If effective control measures are not implemented, precipitous declines in bat populations are likely, particularly in cave-dense regions that constitute the main geographic corridors of the USA, a serious concern for bat conservation.
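
    The hibernaculum-level SIR component can be sketched with a simple forward-Euler integration; the transmission and removal rates below are illustrative placeholders, not the paper's fitted parameters.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the SIR equations with frequency-dependent
    transmission rate beta and removal (mortality) rate gamma."""
    n = s + i + r
    new_inf = beta * s * i / n * dt
    new_rem = gamma * i * dt
    return s - new_inf, i + new_inf - new_rem, r + new_rem

s, i, r = 9999.0, 1.0, 0.0           # one infectious bat in a colony of 10,000
for _ in range(int(365 / 0.1)):      # one hibernation-to-hibernation year, dt = 0.1 day
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, dt=0.1)
print(round(s), round(r))            # with R0 = 3, most of the colony is removed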

  11. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (a NASA-unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. The system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be shown, and the use of the multi-satellite simulator to improve precipitation processes will be discussed.

  12. 1/3-scale model testing program

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.

    1989-01-01

    This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley demonstration project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration where the axis of the cask was oriented at a 10 degree angle with the horizontal. Slap-down occurs for shallow-angle drops where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured deceleration, post-test deformation measurements, and the general structural response of the system
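
    For geometrically similar replica models built of the same materials and dropped from the same regulatory height (so impact velocities match), the usual similitude relations are that strains and stresses carry over directly, model accelerations are 1/λ times larger than full scale, and model event durations are λ times shorter, where λ is the length scale (here 1/3). A hedged sketch of that conversion, with invented instrument readings rather than the paper's data:

```python
def to_full_scale(scale, accel_g=None, time_s=None, strain=None):
    """Convert replica-model drop-test readings to full scale.
    scale = model length / full-scale length (here 1/3).  Strains are
    dimensionless and unchanged; accelerations scale by `scale`; event
    times scale by 1/scale.  Assumes same materials and impact velocity."""
    out = {}
    if accel_g is not None:
        out["accel_g"] = accel_g * scale     # model decelerations are 1/scale larger
    if time_s is not None:
        out["time_s"] = time_s / scale       # model events are shorter
    if strain is not None:
        out["strain"] = strain               # carries over directly
    return out

# Hypothetical one-third-scale readings: 180 g peak, 10 ms pulse, 0.2% strain.
full = to_full_scale(1 / 3, accel_g=180.0, time_s=0.010, strain=0.002)
print(full)
```

    Under these assumptions, the hypothetical 180 g model deceleration corresponds to 60 g at full scale, with a 30 ms pulse.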

  13. Factors influencing the parameterization of anvil clouds within general circulation models

    International Nuclear Information System (INIS)

    Leone, J.M. Jr.; Chin, H.N.

    1994-01-01

    The overall goal of this project is to improve the representation of clouds and their effects within global climate models (GCMs). We have concentrated on a small portion of the overall goal: the evolution of convectively generated cirrus clouds and their effects on the large-scale environment. Because of the large range of time and length scales involved, we have been using a multi-scale approach. For the early-time generation and development of the cirrus anvil, we are using a cloud-scale model with a horizontal resolution of 1 to 2 kilometers; for transport by the larger-scale flow, we are using a mesoscale model with a horizontal resolution of 20 to 60 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations, to derive improved cloud parameterizations for use in GCMs. This paper presents a new tool, a cirrus generator, that we have developed to aid in our mesoscale studies

  14. The use of scale models in impact testing

    International Nuclear Information System (INIS)

    Donelan, P.J.; Dowling, A.R.

    1985-01-01

    Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)

  15. A model-based framework for incremental scale-up of wastewater treatment processes

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Sin, Gürkan

    Scale-up is traditionally done following specific ratios or rules of thumb, which do not lead to optimal results. We present a generic framework to assist in the scale-up of wastewater treatment processes based on multiscale modelling, multiobjective optimisation and a validation of the model at the new, larger scale. The framework is illustrated by the scale-up of a complete autotrophic nitrogen removal process. The model-based multiobjective scale-up offers a promising improvement over empirical rule-of-thumb scale-up rules...

  16. Global warming may disproportionately affect larger adults in a predatory coral reef fish

    KAUST Repository

    Messmer, Vanessa

    2016-11-03

    Global warming is expected to reduce body sizes of ectothermic animals. Although the underlying mechanisms of size reductions remain poorly understood, effects appear stronger at latitudinal extremes (poles and tropics) and in aquatic rather than terrestrial systems. To shed light on this phenomenon, we examined the size dependence of critical thermal maxima (CTmax) and aerobic metabolism in a commercially important tropical reef fish, the leopard coral grouper (Plectropomus leopardus) following acclimation to current-day (28.5 °C) vs. projected end-of-century (33 °C) summer temperatures for the northern Great Barrier Reef (GBR). CTmax declined from 38.3 to 37.5 °C with increasing body mass in adult fish (0.45-2.82 kg), indicating that larger individuals are more thermally sensitive than smaller conspecifics. This may be explained by a restricted capacity for large fish to increase mass-specific maximum metabolic rate (MMR) at 33 °C compared with 28.5 °C. Indeed, temperature influenced the relationship between metabolism and body mass (0.02-2.38 kg), whereby the scaling exponent for MMR increased from 0.74 ± 0.02 at 28.5 °C to 0.79 ± 0.01 at 33 °C, and the corresponding exponents for standard metabolic rate (SMR) were 0.75 ± 0.04 and 0.80 ± 0.03. The increase in metabolic scaling exponents at higher temperatures suggests that energy budgets may be disproportionately impacted in larger fish and contribute to reduced maximum adult size. Such climate-induced reductions in body size would have important ramifications for fisheries productivity, but are also likely to have knock-on effects for trophodynamics and functioning of ecosystems.
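
    The reported scaling exponents can be used to illustrate how mass-specific metabolic rate shrinks with body size; the normalisation constant a is arbitrary here, so only ratios between body masses are meaningful.

```python
def mass_specific_mmr(mass_kg, b, a=1.0):
    """Whole-animal MMR = a * M**b, so the per-kg rate scales as M**(b - 1)."""
    return a * mass_kg ** b / mass_kg

small_kg, large_kg = 0.45, 2.82        # adult size range from the study (kg)
ratios = {}
for b in (0.74, 0.79):                 # reported exponents at 28.5 and 33 deg C
    ratios[b] = mass_specific_mmr(large_kg, b) / mass_specific_mmr(small_kg, b)
print(ratios)                          # per-kg MMR of largest vs smallest adult
```

    Because b < 1, the largest adults have a markedly lower per-kg metabolic rate than the smallest; the higher exponent at 33 °C narrows that gap for MMR, which is consistent with the abstract's point that energy budgets are disproportionately impacted in larger fish as temperatures rise.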

  17. Global warming may disproportionately affect larger adults in a predatory coral reef fish

    KAUST Repository

    Messmer, Vanessa; Pratchett, Morgan S.; Hoey, Andrew S.; Tobin, Andrew J.; Coker, Darren James; Cooke, Steven J.; Clark, Timothy D.

    2016-01-01

    Global warming is expected to reduce body sizes of ectothermic animals. Although the underlying mechanisms of size reductions remain poorly understood, effects appear stronger at latitudinal extremes (poles and tropics) and in aquatic rather than terrestrial systems. To shed light on this phenomenon, we examined the size dependence of critical thermal maxima (CTmax) and aerobic metabolism in a commercially important tropical reef fish, the leopard coral grouper (Plectropomus leopardus) following acclimation to current-day (28.5 °C) vs. projected end-of-century (33 °C) summer temperatures for the northern Great Barrier Reef (GBR). CTmax declined from 38.3 to 37.5 °C with increasing body mass in adult fish (0.45-2.82 kg), indicating that larger individuals are more thermally sensitive than smaller conspecifics. This may be explained by a restricted capacity for large fish to increase mass-specific maximum metabolic rate (MMR) at 33 °C compared with 28.5 °C. Indeed, temperature influenced the relationship between metabolism and body mass (0.02-2.38 kg), whereby the scaling exponent for MMR increased from 0.74 ± 0.02 at 28.5 °C to 0.79 ± 0.01 at 33 °C, and the corresponding exponents for standard metabolic rate (SMR) were 0.75 ± 0.04 and 0.80 ± 0.03. The increase in metabolic scaling exponents at higher temperatures suggests that energy budgets may be disproportionately impacted in larger fish and contribute to reduced maximum adult size. Such climate-induced reductions in body size would have important ramifications for fisheries productivity, but are also likely to have knock-on effects for trophodynamics and functioning of ecosystems.

  18. Global warming may disproportionately affect larger adults in a predatory coral reef fish.

    Science.gov (United States)

    Messmer, Vanessa; Pratchett, Morgan S; Hoey, Andrew S; Tobin, Andrew J; Coker, Darren J; Cooke, Steven J; Clark, Timothy D

    2017-06-01

    Global warming is expected to reduce body sizes of ectothermic animals. Although the underlying mechanisms of size reductions remain poorly understood, effects appear stronger at latitudinal extremes (poles and tropics) and in aquatic rather than terrestrial systems. To shed light on this phenomenon, we examined the size dependence of critical thermal maxima (CTmax) and aerobic metabolism in a commercially important tropical reef fish, the leopard coral grouper (Plectropomus leopardus) following acclimation to current-day (28.5 °C) vs. projected end-of-century (33 °C) summer temperatures for the northern Great Barrier Reef (GBR). CTmax declined from 38.3 to 37.5 °C with increasing body mass in adult fish (0.45-2.82 kg), indicating that larger individuals are more thermally sensitive than smaller conspecifics. This may be explained by a restricted capacity for large fish to increase mass-specific maximum metabolic rate (MMR) at 33 °C compared with 28.5 °C. Indeed, temperature influenced the relationship between metabolism and body mass (0.02-2.38 kg), whereby the scaling exponent for MMR increased from 0.74 ± 0.02 at 28.5 °C to 0.79 ± 0.01 at 33 °C, and the corresponding exponents for standard metabolic rate (SMR) were 0.75 ± 0.04 and 0.80 ± 0.03. The increase in metabolic scaling exponents at higher temperatures suggests that energy budgets may be disproportionately impacted in larger fish and contribute to reduced maximum adult size. Such climate-induced reductions in body size would have important ramifications for fisheries productivity, but are also likely to have knock-on effects for trophodynamics and functioning of ecosystems. © 2016 John Wiley & Sons Ltd.
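
    The allometric relation underlying these abstracts, MR = a * M^b, is usually fitted as a straight line in log-log space. A minimal sketch (not the study's data; the masses, normalisation and noise level are invented, with the exponent set to the reported 0.79 for MMR at 33 °C):

```python
import numpy as np

rng = np.random.default_rng(0)
mass = np.linspace(0.02, 2.38, 50)          # body-mass range (kg) from the abstract
a_true, b_true = 10.0, 0.79                 # assumed normalisation; reported exponent
mmr = a_true * mass**b_true * rng.lognormal(0.0, 0.05, mass.size)  # noisy rates

# log MR = log a + b * log M  ->  ordinary least squares on the logs
b_fit, log_a_fit = np.polyfit(np.log(mass), np.log(mmr), 1)
print(f"fitted scaling exponent b = {b_fit:.2f}")
```

With multiplicative (lognormal) measurement noise, the log-log fit recovers the exponent directly as the slope.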

  19. The Importance of Precise Digital Elevation Models (DEM) in Modelling Floods

    Science.gov (United States)

    Demir, Gokben; Akyurek, Zuhal

    2016-04-01

    Digital elevation models (DEMs) are important topographic inputs for the accurate modelling of floodplain hydrodynamics. Floodplains have a key role as natural retarding pools which attenuate flood waves and suppress flood peaks. GPS, LIDAR and bathymetric surveys are well-known methods to acquire topographic data. Obtaining topographic data through surveying is not only time consuming and expensive but also sometimes impossible for remote areas. This study aims to present the importance of accurate modelling of topography for flood modelling. Flood modelling was carried out for Samsun-Terme in the Black Sea region of Turkey. One DEM was obtained from point observations retrieved from 1/5000 scaled orthophotos and 1/1000 scaled point elevation data from field surveys at cross-sections; the river banks were corrected using the orthophotos and elevation values. This DEM is named the scaled DEM. The other DEM was obtained from bathymetric surveys: 296,538 points and the left/right bank slopes were used to construct a DEM with 1 m spatial resolution, named the base DEM. The two DEMs were compared at 27 cross-sections. The maximum difference at the thalweg of the river bed is 2 m and the minimum difference is 20 cm between the two DEMs. The channel conveyance capacity in the base DEM is larger than that in the scaled DEM, and the floodplain is modelled in detail in the base DEM. MIKE21 with a flexible grid was used for two-dimensional shallow water flow modelling. The models using the two DEMs were calibrated for a flood event (July 9, 2012), with roughness as the calibration parameter. From a comparison of the input hydrograph at the upstream end of the river and the output hydrograph at the downstream end, the attenuation is obtained as 91% and 84% for the base DEM and the scaled DEM, respectively. The time lag in the hydrographs does not differ between the two DEMs and is obtained as 3 hours. Maximum flood extents differ for the two DEMs
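
    The abstract's peak-attenuation and lag figures are simple functions of the two hydrographs. A sketch with invented Gaussian hydrographs (not the Terme data), chosen so the computed values match the reported 91% attenuation and 3 h lag:

```python
import numpy as np

t = np.arange(0, 48)                                   # hourly time stamps
inflow = 100 * np.exp(-0.5 * ((t - 12) / 4.0) ** 2)    # upstream hydrograph (m^3/s)
outflow = 9 * np.exp(-0.5 * ((t - 15) / 6.0) ** 2)     # downstream: flattened, delayed

attenuation = 1 - outflow.max() / inflow.max()         # fraction of peak removed
lag_hours = t[outflow.argmax()] - t[inflow.argmax()]   # peak-to-peak lag
print(f"attenuation = {attenuation:.0%}, lag = {lag_hours} h")
```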

  20. A reduced-order modeling approach to represent subgrid-scale hydrological dynamics for land-surface simulations: application in a polygonal tundra landscape

    Science.gov (United States)

    Pau, G. S. H.; Bisht, G.; Riley, W. J.

    2014-09-01

    Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as the "proper orthogonal decomposition mapping method", which reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface-subsurface isothermal simulations were performed for summer months (June-September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998-2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10^3) with very small relative approximation error compared with the fine-resolution solutions used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with
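
    The core of any POD-based ROM is an SVD of a snapshot matrix followed by projection onto the leading modes. A minimal sketch with synthetic 1-D "snapshots" standing in for the soil-moisture fields (the grid, modes and snapshot count are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
modes = [np.sin(2 * np.pi * k * x) for k in (1, 2, 3)]      # hidden structure
snaps = np.stack(
    [sum(c * m for c, m in zip(rng.standard_normal(3), modes)) for _ in range(30)],
    axis=1,
)                                                           # 200 points x 30 snapshots

U, s, Vt = np.linalg.svd(snaps, full_matrices=False)
basis = U[:, :3]                         # leading 3 POD modes span the snapshots
field = snaps[:, 0]
recon = basis @ (basis.T @ field)        # project onto, then lift from, the ROM basis
rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
print(f"relative reconstruction error = {rel_err:.1e}")
```

Because the synthetic data is exactly rank 3, three modes reconstruct any snapshot to machine precision; real fields need enough modes to capture the singular-value spectrum.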

  1. Complex scaling in the cluster model

    International Nuclear Information System (INIS)

    Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.

    1987-01-01

    To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation, the complex scaling requires minor changes in the formulae and code. Finding the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in 8Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs

  2. Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas

    2011-01-01

    Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: 5% model measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest (acoustics: 200 - 2,000 Hz full scale equals 4,000 - 40,000 Hz model scale; ignition transient: 0 - 100 Hz full scale equals 0 - 2,000 Hz model scale). Environment exposure included weather (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, with different instrument mounts according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.
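
    The frequency conversion in the abstract follows directly from the geometric scale: at 5% scale, frequencies scale up by a factor of 1/0.05 = 20. A trivial sketch of that conversion (the function name is ours, not ASMAT's):

```python
SCALE = 0.05  # 5% geometric scale

def to_model_scale(full_scale_hz: float) -> float:
    """Convert a full-scale frequency (Hz) to its model-scale equivalent."""
    return full_scale_hz / SCALE

# Acoustics band 200-2,000 Hz and ignition-transient limit 100 Hz, as in the record
print(to_model_scale(200.0), to_model_scale(2000.0), to_model_scale(100.0))
```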

  3. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    E. Sonnenthal; N. Spycher

    2001-02-05

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: (1) Performance Assessment (PA); (2) Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); (3) UZ Flow and Transport Process Model Report (PMR); and (4) Near-Field Environment (NFE) PMR. The work scope for this activity is presented in the TWPs cited above, and summarized as follows: continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data

  4. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    International Nuclear Information System (INIS)

    Sonnenthal, E.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [1534471]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: Performance Assessment (PA); Near-Field Environment (NFE) PMR; Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); and UZ Flow and Transport Process Model Report (PMR). The work scope for this activity is presented in the TWPs cited above, and summarized as follows: Continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are

  5. Validity of thermally-driven small-scale ventilated filling box models

    Science.gov (United States)

    Partridge, Jamie L.; Linden, P. F.

    2013-11-01

    The majority of previous work studying building ventilation flows at laboratory scale have used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing natural ventilation of a building in a small-scale model is examined here, with comparison between previous theoretical and new, heat-based, experiments.

  6. Scaling behaviour of the global tropopause

    Directory of Open Access Journals (Sweden)

    C. Varotsos

    2009-01-01

    Detrended fluctuation analysis is applied to the time series of the global tropopause height derived from the 1980–2004 daily radiosonde data, in order to detect long-range correlations in its time evolution.

    Global tropopause height fluctuations in small time intervals are found to be positively correlated with those in larger time intervals in a power-law fashion. The exponent of this dependence is larger in the tropics than in the middle and high latitudes of both hemispheres. Greater persistence is observed in the tropopause of the Northern Hemisphere than in the Southern Hemisphere. A plausible physical explanation for the decrease of long-range correlations in tropopause variability with increasing latitude is that column ozone fluctuations (which are closely related to tropopause fluctuations) themselves exhibit long-range correlations, and these are stronger in the tropics than in the middle and high latitudes at long time scales.

    This finding for tropopause height variability should reduce the existing uncertainties in assessing climatic characteristics: reliably modelled values of a climatic variable (i.e. past and future simulations) must exhibit the same scaling behaviour as that found in real observations of the variable under consideration. An effort has been made to this end by applying detrended fluctuation analysis to the global mean monthly land and sea surface temperature anomalies during the period January 1850–August 2008. The result supports the findings presented above: the correlations between fluctuations in the global mean monthly land and sea surface temperature display scaling behaviour that must characterize any projection.
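
    Detrended fluctuation analysis itself is short to sketch: integrate the series, detrend it in boxes of size n, and read the scaling exponent off a log-log fit of the fluctuation function F(n). A minimal version on white noise (which should give α ≈ 0.5; uncorrelated), not the radiosonde data:

```python
import numpy as np

def dfa_exponent(series, box_sizes):
    """DFA(1): return the scaling exponent alpha of F(n) ~ n^alpha."""
    profile = np.cumsum(series - np.mean(series))       # integrated (profile) series
    fluct = []
    for n in box_sizes:
        n_boxes = profile.size // n
        segments = profile[:n_boxes * n].reshape(n_boxes, n)
        t = np.arange(n)
        sq_resid = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)                # local linear detrending
            sq_resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(sq_resid)))        # RMS fluctuation F(n)
    alpha, _ = np.polyfit(np.log(box_sizes), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(2)
white = rng.standard_normal(20000)
alpha_white = dfa_exponent(white, [16, 32, 64, 128, 256])
print(f"alpha (white noise) = {alpha_white:.2f}")
```

Persistent (positively long-range correlated) series, as reported for the tropical tropopause, would give α > 0.5.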

  7. Multi-scale modeling of carbon capture systems

    Energy Technology Data Exchange (ETDEWEB)

    Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

    The development and scale-up of cost-effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset, consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric analysis (TGA) data.

  8. Pore-scale studies of multiphase flow and reaction involving CO2 sequestration in geologic formations

    Science.gov (United States)

    Kang, Q.; Wang, M.; Lichtner, P. C.

    2008-12-01

    In geologic CO2 sequestration, pore-scale interfacial phenomena ultimately govern the key processes of fluid mobility, chemical transport, adsorption, and reaction. However, spatial heterogeneity at the pore scale cannot be resolved at the continuum scale, where averaging occurs over length scales much larger than typical pore sizes. Natural porous media, such as sedimentary rocks and other geological media encountered in subsurface formations, are inherently heterogeneous. This pore-scale heterogeneity can produce variabilities in flow, transport, and reaction processes that take place within a porous medium, and can result in spatial variations in fluid velocity, aqueous concentrations, and reaction rates. Consequently, the unresolved spatial heterogeneity at the pore scale may be important for reactive transport modeling at the larger scale. In addition, current continuum models of surface complexation reactions ignore a fundamental property of physical systems, namely conservation of charge. Therefore, to better understand multiphase flow and reaction involving CO2 sequestration in geologic formations, it is necessary to quantitatively investigate the influence of the pore-scale heterogeneity on the emergent behavior at the field scale. We have applied the lattice Boltzmann method to simulating the injection of CO2 saturated brine or supercritical CO2 into geological formations at the pore scale. Multiple pore-scale processes, including advection, diffusion, homogeneous reactions among multiple aqueous species, heterogeneous reactions between the aqueous solution and minerals, ion exchange and surface complexation, as well as changes in solid and pore geometry are all taken into account. The rich pore scale information will provide a basis for upscaling to the continuum scale.
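
    The lattice Boltzmann method referenced above can be illustrated in its most stripped-down form: a D1Q2 scheme for pure diffusion of a passive scalar (the full multicomponent, reactive, multiphase models in the abstract are far richer; the grid size, relaxation time and initial condition here are invented):

```python
import numpy as np

nx, tau, steps = 200, 1.0, 500
f = np.zeros((2, nx))                  # two populations: right- and left-movers
f[:, nx // 2] = 0.5                    # unit pulse of concentration at the centre

for _ in range(steps):
    rho = f.sum(axis=0)                # local concentration (zeroth moment)
    feq = 0.5 * rho                    # D1Q2 equilibrium: equal weights 1/2
    f += (feq - f) / tau               # BGK collision: relax toward equilibrium
    f[0] = np.roll(f[0], 1)            # stream right-movers one site right
    f[1] = np.roll(f[1], -1)           # stream left-movers one site left

rho = f.sum(axis=0)
print(f"total mass = {rho.sum():.6f}, peak concentration = {rho.max():.4f}")
```

Collision conserves mass locally and streaming merely moves it, so total mass stays exactly 1 while the pulse spreads diffusively with D = (tau - 1/2) in lattice units.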

  9. Analysis of the Professional Choice Self-Efficacy Scale Using the Rasch-Andrich Rating Scale Model

    Science.gov (United States)

    Ambiel, Rodolfo A. M.; Noronha, Ana Paula Porto; de Francisco Carvalho, Lucas

    2015-01-01

    The aim of this research was to analyze the psychometrics properties of the professional choice self-efficacy scale (PCSES), using the Rasch-Andrich rating scale model. The PCSES assesses four factors: self-appraisal, gathering occupational information, practical professional information search and future planning. Participants were 883 Brazilian…

  10. Regional CO2 and latent heat surface fluxes in the Southern Great Plains: Measurements, modeling, and scaling

    Energy Technology Data Exchange (ETDEWEB)

    Riley, W. J.; Biraud, S.C.; Torn, M.S.; Fischer, M.L.; Billesbach, D.P.; Berry, J.A.

    2009-08-15

    Characterizing net ecosystem exchanges (NEE) of CO2 and sensible and latent heat fluxes in heterogeneous landscapes is difficult, yet critical given expected changes in climate and land use. We report here a measurement and modeling study designed to improve our understanding of surface-to-atmosphere gas exchanges under very heterogeneous land cover in the mostly agricultural U.S. Southern Great Plains (SGP). We combined three years of site-level, eddy covariance measurements in several of the dominant land cover types with regional-scale climate data from the distributed Mesonet stations and Next Generation Weather Radar precipitation measurements to calibrate a land surface model of trace gas and energy exchanges (the isotope-enabled land surface model, ISOLSM). Yearly variations in vegetation cover distributions were estimated from the Moderate Resolution Imaging Spectroradiometer normalized difference vegetation index and compared to regional and subregional vegetation cover type estimates from the U.S. Department of Agriculture census. We first applied ISOLSM at a 250 m spatial scale to account for vegetation cover type and leaf area variations that occur on hundred-meter scales. Because of computational constraints, we developed a subsampling scheme within 10 km 'macrocells' to perform these high-resolution simulations. We estimate that the Atmospheric Radiation Measurement Climate Research Facility SGP region net CO2 exchange with the local atmosphere was -240, -340, and -270 g C m^-2 yr^-1 (positive toward the atmosphere) in 2003, 2004, and 2005, respectively, with large seasonal variations. We also performed simulations using two scaling approaches at resolutions of 10, 30, 60, and 90 km. The scaling approach applied in current land surface models led to regional NEE biases of up to 50 and 20% in weekly and annual estimates, respectively. An important factor in causing these biases was the complex leaf area index (LAI) distribution
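
    The scaling bias described here is a Jensen's-inequality effect: when flux responds nonlinearly to LAI, evaluating the model at the coarse-grid mean LAI differs from averaging the fine-scale fluxes. A toy sketch (the saturating flux curve and pixel LAI values are invented, not ISOLSM's):

```python
import numpy as np

def flux(lai):
    """Invented saturating flux response to leaf area index."""
    return 10.0 * (1.0 - np.exp(-0.5 * lai))

lai = np.array([0.2, 0.5, 1.0, 4.0, 6.0])   # heterogeneous fine-scale pixels
fine = flux(lai).mean()                     # average of fine-resolution fluxes
coarse = flux(lai.mean())                   # flux of the coarse-grid mean LAI
bias_pct = 100 * (coarse - fine) / fine
print(f"fine = {fine:.2f}, coarse = {coarse:.2f}, bias = {bias_pct:.1f}%")
```

Because the response is concave, the coarse estimate systematically overshoots the true (fine-scale-averaged) flux, which is the kind of bias the subsampling scheme is designed to avoid.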

  11. Diffusion of two-dimensional epitaxial clusters on metal (100) surfaces: Facile versus nucleation-mediated behavior and their merging for larger sizes

    International Nuclear Information System (INIS)

    Lai, King C.; Liu, Da-Jiang; Evans, James W.

    2017-01-01

    For diffusion of two-dimensional homoepitaxial clusters of N atoms on metal(100) surfaces mediated by edge atom hopping, macroscale continuum theory suggests that the diffusion coefficient scales like D_N ~ N^(-β) with β = 3/2. However, we find quite different and diverse behavior in multiple size regimes. These include: (i) facile diffusion for small sizes N < 9; (ii) slow nucleation-mediated diffusion with small β < 1 for "perfect" sizes N = N_p = L^2 or L(L+1), for L = 3, 4, ... having unique ground-state shapes, for moderate sizes 9 ≤ N ≤ O(10^2); the same also applies for N = N_p + 3, N_p + 4, ...; (iii) facile diffusion but with large β > 2 for N = N_p + 1 and N_p + 2, also for moderate sizes 9 ≤ N ≤ O(10^2); (iv) merging of the above distinct branches and subsequent anomalous scaling with 1 ≲ β < 3/2, reflecting the quasi-facetted structure of clusters, for larger N = O(10^2) to N = O(10^3); and (v) classic scaling with β = 3/2 for very large N = O(10^3) and above. The specified size ranges apply for typical model parameters. We focus on the moderate size regime, where we show that diffusivity cycles quasi-periodically from the slowest branch for N_p + 3 (not N_p) to the fastest branch for N_p + 1. Behavior is quantified by kinetic Monte Carlo simulation of an appropriate stochastic lattice-gas model. However, precise analysis must account for a strong enhancement of diffusivity for short time increments due to back-correlation in the cluster motion. Further understanding of this enhancement, of anomalous size scaling behavior, and of the merging of the various branches is facilitated by combinatorial analysis of the number of ground-state and low-lying excited-state cluster configurations, and also of kink populations.
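
    Extracting a diffusion coefficient from simulated trajectories, as such KMC studies do, rests on the Einstein relation MSD(t) = 4 D t in 2-D. A sketch with a plain lattice random walk standing in for the cluster centre-of-mass motion (walker count and step count are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])   # nearest-neighbour hops
idx = rng.integers(0, 4, size=(400, 1000))             # 400 walkers, 1000 steps
paths = moves[idx].cumsum(axis=1)                      # trajectories, shape (400, 1000, 2)

msd = (paths[:, -1, :] ** 2).sum(axis=1).mean()        # mean-squared displacement
D = msd / (4 * 1000)                                   # 2-D Einstein relation
print(f"estimated D = {D:.3f} (exact value 0.25 for a unit-step walk)")
```

Repeating this for clusters of different N and fitting log D against log N would yield the size-scaling exponent β discussed in the abstract.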

  12. Fixing the EW scale in supersymmetric models after the Higgs discovery

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    TeV-scale supersymmetry was originally introduced to solve the hierarchy problem and therefore fix the electroweak (EW) scale in the presence of quantum corrections. Numerical methods testing SUSY models often report a good likelihood L (or chi^2 = -2 ln L) to fit the data, including the EW scale itself (m_Z^0), with a simultaneously large fine-tuning, i.e. a large variation of this scale under a small variation of the SUSY parameters. We argue that this is inconsistent and we identify the origin of this problem. Our claim is that the likelihood (or chi^2) to fit the data that is usually reported in such models does not account for the chi^2 cost of fixing the EW scale. When this constraint is implemented, the likelihood (or chi^2) receives a significant correction (delta_chi^2) that worsens the current data fits of SUSY models. We estimate this correction for the models: the constrained MSSM (CMSSM), models with non-universal gaugino masses (NUGM) or Higgs soft masses (NUHM1, NUHM2), the NMSSM and the ...

  13. Anomalous scaling in an age-dependent branching model

    OpenAIRE

    Keller-Schmidt, Stephanie; Tugrul, Murat; Eguiluz, Victor M.; Hernandez-Garcia, Emilio; Klemm, Konstantin

    2010-01-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(-α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition (α = 1) tree depth grows as (log n)^2. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
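
    A toy version of such an age-dependent attachment model is easy to simulate: each new node picks a parent with weight proportional to (age)^(-α). The implementation details (attachment rule, tree size, seed) are our own sketch, not the paper's; α = 0 recovers the uniform random recursive tree, whose mean depth grows like log n:

```python
import math
import random

def grow_tree(n, alpha, rng):
    """Grow a tree of n nodes; parents are chosen with weight (age+1)**(-alpha)."""
    depth = [0]                                     # node 0 is the root
    birth = [0]
    for t in range(1, n):
        weights = [(t - b + 1) ** -alpha for b in birth]
        r, acc, parent = rng.random() * sum(weights), 0.0, 0
        for i, w in enumerate(weights):             # weighted parent choice
            acc += w
            if r <= acc:
                parent = i
                break
        depth.append(depth[parent] + 1)
        birth.append(t)
    return depth

rng = random.Random(4)
mean_depth = sum(grow_tree(1000, 0.0, rng)) / 1000
print(f"mean depth (alpha=0, n=1000) ~ {mean_depth:.1f}; log(1000) ~ {math.log(1000):.1f}")
```

Raising α toward 1 biases attachment to young branches, which is the mechanism behind the deeper, anomalously scaling trees described in the abstract.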

  14. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  15. Global MHD Modelling of the ISM - From large towards small scale turbulence

    Science.gov (United States)

    de Avillez, M.; Breitschwerdt, D.

    2005-06-01

    Dealing numerically with the turbulent nature and non-linearity of the physical processes involved in the ISM requires the use of sophisticated numerical schemes coupled to HD and MHD mathematical models. SNe are the main drivers of interstellar turbulence, transferring kinetic energy into the system; this energy is dissipated by shocks (which is more efficient) and by molecular viscosity. We carried out adaptive mesh refinement simulations (with a finest resolution of 0.625 pc) of the turbulent ISM embedded in a magnetic field with mean field components of 2 and 3 μG. The time scale of our run was 400 Myr, sufficiently long to avoid memory effects of the initial setup and to allow a global dynamical equilibrium to be reached in the case of a constant energy input rate. It is found that the longitudinal and transverse turbulent length scales have a time-averaged (over a period of 50 Myr) ratio of 0.52-0.6, close to that expected for isotropic homogeneous turbulence. The mean characteristic size of the larger eddies is found to be ˜75 pc in both runs. In order to check the simulations against observations, we monitored the OVI and HI column densities within a superbubble created by the explosions of 19 SNe with the masses and velocities of the stars that exploded in the vicinity of the Sun, generating the Local Bubble. The model reproduces the FUSE absorption measurements of the OVI column density towards 25 white dwarfs as a function of distance and of N(HI). In particular, for lines of sight shorter than 120 pc, no correlation is found between N(OVI) and N(HI).

  16. Scaling for deuteron structure functions in a relativistic light-front model

    International Nuclear Information System (INIS)

    Polyzou, W.N.; Gloeckle, W.

    1996-01-01

    Scaling limits of the structure functions W_1 and W_2 [B.D. Keister, Phys. Rev. C 37, 1765 (1988)] are studied in a relativistic model of the two-nucleon system. The relativistic model is defined by a unitary representation, U(Λ,a), of the Poincaré group which acts on the Hilbert space of two spinless nucleons. The representation is in Dirac's [P.A.M. Dirac, Rev. Mod. Phys. 21, 392 (1949)] light-front formulation of relativistic quantum mechanics and is designed to give the experimental deuteron mass and n-p scattering length. A model hadronic current operator that is conserved and covariant with respect to this representation is used to define the structure tensor. This work is the first step in a relativistic extension of the results of Hueber, Gloeckle, and Boemelburg [D. Hueber et al., Phys. Rev. C 42, 2342 (1990)]. The nonrelativistic limit of the model is shown to be consistent with the nonrelativistic model of Hueber, Gloeckle, and Boemelburg. The relativistic and nonrelativistic scaling limits, for both Bjorken and y scaling, are compared. The interpretation of y scaling in the relativistic model is studied critically. The standard interpretation of y scaling requires a soft wave function, which is not realized in this model. The scaling limits in both the relativistic and nonrelativistic cases are related to probability distributions associated with the target deuteron. © 1996 The American Physical Society.

  17. Larger error signals in major depression are associated with better avoidance learning

    Directory of Open Access Journals (Sweden)

    James F. Cavanagh

    2011-11-01

    Full Text Available The medial prefrontal cortex (mPFC is particularly reactive to signals of error, punishment, and conflict in the service of behavioral adaptation and it is consistently implicated in the etiology of Major Depressive Disorder (MDD. This association makes conceptual sense, given that MDD has been associated with hyper-reactivity in neural systems associated with punishment processing. Yet in practice, depression-related variance in measures of mPFC functioning often fails to relate to performance. For example, neuroelectric reflections of mediofrontal error signals are often found to be larger in MDD, but a deficit in post-error performance suggests that these error signals are not being used to rapidly adapt behavior. Thus, it remains unknown if depression-related variance in error signals reflects a meaningful alteration in the use of error or punishment information. However, larger mediofrontal error signals have also been related to another behavioral tendency: increased accuracy in avoidance learning. The integrity of this error-avoidance system remains untested in MDD. In this study, EEG was recorded as 21 symptomatic, drug-free participants with current or past MDD and 24 control participants performed a probabilistic reinforcement learning task. Depressed participants had larger mPFC EEG responses to error feedback than controls. The direct relationship between error signal amplitudes and avoidance learning accuracy was replicated. Crucially, this relationship was stronger in depressed participants for high conflict lose-lose situations, demonstrating a selective alteration of avoidance learning. This investigation provided evidence that larger error signal amplitudes in depression are associated with increased avoidance learning, identifying a candidate mechanistic model for hypersensitivity to negative outcomes in depression.

  18. Why have microsaccades become larger?

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Nyström, Marcus; Andersson, Richard

    2014-01-01

    … experts. The main reason was that the overshoots were not systematically detected by the algorithm and therefore not accurately accounted for. We conclude that one reason why the reported size of microsaccades has increased is the larger overshoots produced by the modern pupil-based eye-trackers compared to the systems used in the classical studies, in combination with the lack of a systematic algorithmic treatment of the overshoot. We hope that awareness of these discrepancies in microsaccade dynamics across eye structures will lead to more generally accepted definitions of microsaccades.

  19. Toward micro-scale spatial modeling of gentrification

    Science.gov (United States)

    O'Sullivan, David

    A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.

  20. Spatial Scaling of Environmental Variables Improves Species-Habitat Models of Fishes in a Small, Sand-Bed Lowland River.

    Directory of Open Access Journals (Sweden)

    Johannes Radinger

    Full Text Available Habitat suitability and the distinct mobility of species depict fundamental keys for explaining and understanding the distribution of river fishes. In recent years, comprehensive data on river hydromorphology has been mapped at spatial scales down to 100 m, potentially serving high resolution species-habitat models, e.g., for fish. However, the relative importance of specific hydromorphological and in-stream habitat variables and their spatial scales of influence is poorly understood. Applying boosted regression trees, we developed species-habitat models for 13 fish species in a sand-bed lowland river based on river morphological and in-stream habitat data. First, we calculated mean values for the predictor variables in five distance classes (from the sampling site up to 4000 m up- and downstream) to identify the spatial scale that best predicts the presence of fish species. Second, we compared the suitability of measured variables and assessment scores related to natural reference conditions. Third, we identified variables which best explained the presence of fish species. The mean model quality (AUC = 0.78, area under the receiver operating characteristic curve) significantly increased when information on the habitat conditions up- and downstream of a sampling site (maximum AUC at the 2500 m distance class, +0.049) and topological variables (e.g., stream order) were included (AUC = +0.014). Both measured and assessed variables were similarly well suited to predict species' presence. Stream order variables and measured cross section features (e.g., width, depth, velocity) were best-suited predictors. In addition, measured channel-bed characteristics (e.g., substrate types) and assessed longitudinal channel features (e.g., naturalness of river planform) were also good predictors.
These findings demonstrate (i) the applicability of high resolution river morphological and instream-habitat data (measured and assessed variables) to predict fish presence, (ii) the
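The distance-class comparison described in this record can be sketched in a few lines: build one set of averaged predictors per distance class, fit a boosted-trees presence model for each, and compare AUC scores. Everything below (the synthetic data, the class list, and the effect sizes) is an illustrative assumption, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_sites = 400
distance_classes = [0, 500, 1000, 2500, 4000]  # metres (illustrative)

# Three habitat predictors (e.g. width, depth, velocity) averaged per
# distance class; here random placeholders.
X = {d: rng.normal(size=(n_sites, 3)) for d in distance_classes}

# Synthetic "presence" driven mainly by the 2500 m class, mimicking the
# finding that mid-range river context predicts best.
logit = 1.5 * X[2500][:, 0] + 0.3 * X[0][:, 1]
y = (logit + rng.normal(scale=1.0, size=n_sites)) > 0

# Fit one boosted-trees model per distance class and compare AUCs.
aucs = {}
for d, feats in X.items():
    Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
    aucs[d] = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])

print({d: round(a, 2) for d, a in sorted(aucs.items())})
```

In this toy setup the 2500 m class is constructed to carry most of the signal, so its model should score the highest AUC, loosely mirroring the study's +0.049 gain at that distance class.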

  1. A constitutive model of soft tissue: From nanoscale collagen to tissue continuum

    KAUST Repository

    Tang, Huang

    2009-04-08

    Soft collagenous tissue features many hierarchies of structure, starting from tropocollagen molecules that form fibrils, and proceeding to a bundle of fibrils that form fibers. Here we report the development of an atomistically informed continuum model of collagenous tissue. Results from full atomistic and molecular modeling are linked with a continuum theory of a fiber-reinforced composite, handshaking the fibril scale to the fiber and continuum scale in a hierarchical multi-scale simulation approach. Our model enables us to study the continuum-level response of the tissue as a function of cross-link density, making a link between nanoscale collagen features and material properties at larger tissue scales. The results illustrate a strong dependence of the continuum response as a function of nanoscopic structural features, providing evidence for the notion that the molecular basis for protein materials is important in defining their larger-scale mechanical properties. © 2009 Biomedical Engineering Society.

  2. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be composed of the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.

  3. Modelling of large-scale structures arising under developed turbulent convection in a horizontal fluid layer (with application to the problem of tropical cyclone origination)

    Directory of Open Access Journals (Sweden)

    G. V. Levina

    2000-01-01

    Full Text Available The work is concerned with the results of theoretical and laboratory modelling of the processes of large-scale structure generation under turbulent convection in a rotating horizontal layer of an incompressible fluid with unstable stratification. The theoretical model describes three alternative ways of creating unstable stratification: heating the layer from below, volumetric heating of a fluid with internal heat sources, and a combination of both factors. The analysis of the model equations shows that under conditions of high intensity of the small-scale convection and a low level of heat loss through the horizontal layer boundaries a long-wave instability may arise. The condition for the existence of the instability and the criterion identifying the threshold of its initiation have been determined. The principle of action of the discovered instability mechanism has been described. Theoretical predictions have been verified by a series of experiments on a laboratory model. The horizontal dimensions of the experimentally obtained long-lived vortices are 4-6 times larger than the thickness of the fluid layer. This work presents a description of the laboratory setup and experimental procedure. From the geophysical viewpoint, the examined mechanism of the long-wave instability is supposed to be adequate to describe the initial step in the evolution of such large-scale vortices as tropical cyclones - a transition from the small-scale cumulus clouds to a state of the atmosphere involving cloud clusters (the stage of the initial tropical perturbation).

  4. Atomic scale modelling of materials of the nuclear fuel cycle

    International Nuclear Information System (INIS)

    Bertolus, M.

    2011-10-01

    This document, written to obtain the French accreditation to supervise research, presents the research I have conducted at CEA Cadarache since 1999 on the atomic scale modelling of non-metallic materials involved in the nuclear fuel cycle: host materials for radionuclides from nuclear waste (apatites), fuel (in particular uranium dioxide) and ceramic cladding materials (silicon carbide). These are complex materials at the frontier of modelling capabilities since they contain heavy elements (rare earths or actinides), exhibit complex structures or chemical compositions and/or are subjected to irradiation effects: creation of point defects and fission products, amorphization. The objective of my studies is to bring further insight into the physics and chemistry of the elementary processes involved using atomic scale modelling and its coupling with higher-scale models and experimental studies. This work is organised in two parts: on the one hand, the development, adaptation and implementation of atomic scale modelling methods and validation of the approximations used; on the other hand, the application of these methods to the investigation of nuclear materials under irradiation. This document contains a synthesis of the studies performed, orientations for future research, a detailed résumé and a list of publications and communications. (author)

  5. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying observed L-mode current scaling, 1-1/2-d BALDUR has been used to simulate density and temperature profiles for high and low current, neutral beam heated discharges on TFTR with several semi-empirical, theoretically-based models previously compared for TFTR, including several versions of trapped electron drift wave driven transport. Experiments at TFTR, JET and DIII-D show that I_p scaling of τ_E does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the I_p-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile consistent drift wave model and with a new model for toroidal collisionless trapped electron mode core transport in a multimode formalism lead to strong current scaling of τ_E for the L-mode cases on TFTR. None of the theoretically-based models succeeded in simulating the measured temperature and density profiles for both high and low current experiments.

  6. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5⟨u'_i u'_i⟩, where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes.
This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes

  7. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    Science.gov (United States)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
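The space-time interpolation step described here, delivering the parent head solution onto the child boundary nodes, can be sketched in one dimension as follows. The grids, the head field, and the function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def parent_to_child_boundary(parent_x, parent_t, parent_head, child_x, child_t):
    """Interpolate a coarse (parent) head solution onto child boundary nodes.

    parent_head has shape (len(parent_t), len(parent_x)); interpolation is
    linear, first in space at each parent time level, then in time.
    """
    # Spatial interpolation at each parent time step.
    at_child_x = np.array(
        [np.interp(child_x, parent_x, h) for h in parent_head]
    )  # shape (n_parent_t, n_child_x)
    # Temporal interpolation onto the (finer) child time levels.
    return np.array(
        [np.interp(child_t, parent_t, at_child_x[:, j])
         for j in range(len(child_x))]
    ).T  # shape (n_child_t, n_child_x)

# Coarse parent grid and time levels; a head field linear in x and t.
px = np.linspace(0.0, 100.0, 6)
pt = np.array([0.0, 10.0, 20.0])
ph = pt[:, None] * 0.1 + px[None, :] * 0.02

cx = np.array([12.5, 55.0])      # child boundary node locations
ct = np.linspace(0.0, 20.0, 9)   # finer child time stepping
bc = parent_to_child_boundary(px, pt, ph, cx, ct)
print(bc.shape)  # (9, 2): one boundary value per child time level and node
```

Because the synthetic head field is linear in both x and t, linear interpolation reproduces it exactly; truncation errors appear only for curved fields, which is why the paper reduces them by refining grid and time-step sizes.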

  8. NFAP calculation of the response of a 1/6 scale reinforced concrete containment model

    International Nuclear Information System (INIS)

    Costantino, C.J.; Pepper, S.; Reich, M.

    1989-01-01

    The details associated with the NFAP calculation of the pressure response of the 1/6th scale model containment structure are discussed in this paper. Comparisons are presented of some of the primary items of interest with those determined from the experiment. It was found from this comparison that the hoop response of the containment wall was adequately predicted by the NFAP finite element calculation, including the response in the high pressure, high strain range at which cracking of the concrete and yielding of the hoop reinforcement occurred. In the vertical or meridional direction, it was found that the model was significantly softer than predicted by the finite element calculation; that is, the vertical strains in the test were three to four times larger than computed in the NFAP calculation. These differences were noted even at low strain levels at which the concrete would not be expected to be cracked under tensile loadings. Simplified calculations for the containment indicate that the vertical stiffness of the wall is similar to that which would be determined by assuming the concrete fully cracked. Thus, the experiment indicates an anomalous behavior in the vertical direction

  9. Extension of landscape-based population viability models to ecoregional scales for conservation planning

    Science.gov (United States)

    Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh

    2011-01-01

    Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...

  10. A Survey of Precipitation-Induced Atmospheric Cold Pools over Oceans and Their Interactions with the Larger-Scale Environment

    Science.gov (United States)

    Zuidema, Paquita; Torri, Giuseppe; Muller, Caroline; Chandra, Arunchandra

    2017-11-01

    Pools of air cooled by partial rain evaporation span up to several hundreds of kilometers in nature and typically last less than 1 day, ultimately losing their identity to the large-scale flow. These fundamentally differ in character from the radiatively-driven dry pools defining convective aggregation. Advancement in remote sensing and in computer capabilities has promoted exploration of how precipitation-induced cold pool processes modify the convective spectrum and life cycle. This contribution surveys current understanding of such cold pools over the tropical and subtropical oceans. In shallow convection with low rain rates, the cold pools moisten, preserving the near-surface equivalent potential temperature or increasing it if the surface moisture fluxes cannot ventilate beyond the new surface layer; both conditions indicate downdraft origin air from within the boundary layer. When rain rates exceed ˜ 2 mm h^{-1}, convective-scale downdrafts can bring down drier air of lower equivalent potential temperature from above the boundary layer. The resulting density currents facilitate the lifting of locally thermodynamically favorable air and can impose an arc-shaped mesoscale cloud organization. This organization allows clouds capable of reaching 4-5 km within otherwise dry environments. These are more commonly observed in the northern hemisphere trade wind regime, where the flow to the intertropical convergence zone is unimpeded by the equator. Their near-surface air properties share much with those shown from cold pools sampled in the equatorial Indian Ocean. Cold pools are most effective at influencing the mesoscale organization when the atmosphere is moist in the lower free troposphere and dry above, suggesting an optimal range of water vapor paths. Outstanding questions on the relationship between cold pools, their accompanying moisture distribution and cloud cover are detailed further. Near-surface water vapor rings are documented in one model inside but

  11. RESOLVING NEIGHBORHOOD-SCALE AIR TOXICS MODELING: A CASE STUDY IN WILMINGTON, CALIFORNIA

    Science.gov (United States)

    Air quality modeling is useful for characterizing exposures to air pollutants. While models typically provide results on regional scales, there is a need for refined modeling approaches capable of resolving concentrations on the scale of tens of meters, across modeling domains 1...

  12. Multi Scale Models for Flexure Deformation in Sheet Metal Forming

    Directory of Open Access Journals (Sweden)

    Di Pasquale Edmondo

    2016-01-01

    Full Text Available This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modelling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.

  13. On what scale should inflationary observables be constrained?

    International Nuclear Information System (INIS)

    Cortes, Marina; Liddle, Andrew R.; Mukherjee, Pia

    2007-01-01

    We examine the choice of scale at which constraints on inflationary observables are presented. We describe an implementation of the hierarchy of inflationary consistency equations which ensures that they remain enforced on different scales, and then seek to optimize the scale for presentation of constraints on marginalized inflationary parameters from WMAP3 data. For models with spectral index running, we find a strong variation of the constraints through the range of observational scales available, and optimize by finding the scale which decorrelates constraints on the spectral index n_S and the running. This scale is k = 0.017 Mpc^{-1}, and gives a reduction by a factor of more than four in the allowed parameter area in the n_S-r plane (r being the tensor-to-scalar ratio) relative to k = 0.002 Mpc^{-1}. These optimized constraints are similar to those obtained in the no-running case. We also extend the analysis to a larger compilation of data, finding essentially the same conclusions.
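For a spectrum with constant running α = dn_s/d ln k, translating the spectral index between pivot scales is a one-line computation, which is the basic mechanics behind presenting constraints at different scales. The numbers below are illustrative assumptions, not the paper's posterior values:

```python
import math

def ns_at_scale(ns_pivot, alpha, k_pivot, k_new):
    """Spectral index at a new pivot scale for a power-law spectrum with
    constant running alpha = dn_s/d ln k (standard definition; a sketch,
    not the paper's full consistency-equation machinery)."""
    return ns_pivot + alpha * math.log(k_new / k_pivot)

# Example: move an index quoted at k = 0.002 Mpc^-1 to the decorrelation
# scale k = 0.017 Mpc^-1 discussed in the abstract (values illustrative).
ns_002 = 0.96
alpha = -0.03
print(round(ns_at_scale(ns_002, alpha, 0.002, 0.017), 4))
```

Because constraints quoted at different pivots are related this way, a marginalized bound on n_S depends on the chosen k; picking the scale that decorrelates n_S from the running is what shrinks the allowed n_S-r area.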

  14. A Statistical Model and Computer program for Preliminary Calculations Related to the Scaling of Sensor Arrays; TOPICAL

    International Nuclear Information System (INIS)

    Max Morris

    2001-01-01

    Recent advances in sensor technology and engineering have made it possible to assemble many related sensors in a common array, often of small physical size. Sensor arrays may report an entire vector of measured values in each data collection cycle, typically one value per sensor per sampling time. The larger quantities of data provided by larger arrays certainly contain more information, however in some cases experience suggests that dramatic increases in array size do not always lead to corresponding improvements in the practical value of the data. The work leading to this report was motivated by the need to develop computational planning tools to approximate the relative effectiveness of arrays of different size (or scale) in a wide variety of contexts. The basis of the work is a statistical model of a generic sensor array. It includes features representing measurement error, both common to all sensors and independent from sensor to sensor, and the stochastic relationships between the quantities to be measured by the sensors. The model can be used to assess the effectiveness of hypothetical arrays in classifying objects or events from two classes. A computer program is presented for evaluating the misclassification rates which can be expected when arrays are calibrated using a given number of training samples, or the number of training samples required to attain a given level of classification accuracy. The program is also available via email from the first author for a limited time
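The kind of calculation the report's program performs can be approximated with a short Monte Carlo sketch: two classes, measurement error shared across all sensors plus independent per-sensor error, and a classifier calibrated from a limited number of training samples. All parameter values and the nearest-mean classifier are illustrative assumptions, not the report's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

def misclassification_rate(n_sensors, n_train=50, n_test=2000,
                           signal=0.3, sigma_common=0.5, sigma_indep=1.0):
    """Monte Carlo estimate of two-class error for a simple array model:
    each sensor reads the class mean (+/- signal), plus noise common to
    all sensors and independent per-sensor noise. The classifier assigns
    each reading vector to the nearer class mean, with means estimated
    from the training samples."""
    def sample(n, cls):
        mean = signal if cls else -signal
        common = rng.normal(0, sigma_common, size=(n, 1))        # shared
        indep = rng.normal(0, sigma_indep, size=(n, n_sensors))  # per-sensor
        return mean + common + indep

    mu0 = sample(n_train, 0).mean(axis=0)
    mu1 = sample(n_train, 1).mean(axis=0)
    errors = 0
    for cls in (0, 1):
        x = sample(n_test, cls)
        d0 = ((x - mu0) ** 2).sum(axis=1)
        d1 = ((x - mu1) ** 2).sum(axis=1)
        pred = (d1 < d0).astype(int)
        errors += int((pred != cls).sum())
    return errors / (2 * n_test)

# Diminishing returns with array size: the independent noise averages
# away, but the common-mode error does not.
for m in (1, 4, 16, 64):
    print(m, misclassification_rate(m))
```

The common-mode term puts a floor under the error rate no matter how many sensors are added, one concrete way "dramatic increases in array size do not always lead to corresponding improvements" in the data's practical value.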

  15. Genome scale metabolic modeling of cancer

    DEFF Research Database (Denmark)

    Nilsson, Avlant; Nielsen, Jens

    2017-01-01

    Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth, and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome scale metabolic model (GEM) is a mathematical formalization of metabolism which allows simulation and hypothesis testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome…

  16. European Continental Scale Hydrological Model, Limitations and Challenges

    Science.gov (United States)

    Rouholahnejad, E.; Abbaspour, K.

    2014-12-01

    The pressures on water resources due to increasing levels of societal demand, increasing conflict of interest and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration where water bodies are to be viewed in the context of the whole river system and managed as a unit within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at continental scale and the use of globally available data, stream gauge data quality and its misleading impact on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome high computation time. We used the Soil and Water Assessment Tool (SWAT) program as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals for the period of 1970-2006. The use of a large-scale, high-resolution water resources model enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation and provides the overall picture of the temporal and spatial distribution of water resources across the continent.
The calibrated model and results provide information support to the European Water

  17. Scaling analysis and model estimation of solar corona index

    Science.gov (United States)

    Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik

    2018-04-01

    A monthly average solar green coronal index time series for the period from January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), has been analysed in this paper from the perspective of scaling analysis and modelling. Smoothing and de-noising have been performed using a suitable mother wavelet as a prerequisite. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis and a generalized method have been applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) is used to identify an autoregressive (AR) process, and the partial autocorrelation function (PACF) has been used to determine the order of the AR model. Finally, a best-fit model has been proposed using the Yule-Walker method, with supporting results from goodness-of-fit tests and the wavelet spectrum. The results reveal anti-persistent, short-range dependent (SRD), self-similar properties with signatures of non-causality, non-stationarity and nonlinearity in the data series. The model shows the best fit to the data under observation.
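The AR-model fit via the Yule-Walker equations (with the order chosen from the PACF) can be sketched as below; the synthetic AR(2) series and its coefficients are assumptions used purely as a sanity check, not the coronal index data:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients from the sample autocovariances
    via the Yule-Walker equations (a minimal sketch; no bias correction)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Sample autocovariances r[0..order].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R phi = r[1:], solved for the AR coefficients phi.
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# Synthetic stationary AR(2) series with known coefficients.
rng = np.random.default_rng(42)
true_phi = np.array([0.6, -0.3])
x = np.zeros(20000)
for t in range(2, len(x)):
    x[t] = true_phi[0] * x[t - 1] + true_phi[1] * x[t - 2] + rng.normal()

print(np.round(yule_walker(x, 2), 2))
```

With 20,000 samples the recovered coefficients land close to the generating values (0.6, -0.3); for the coronal index itself the order would come from the PACF cutoff as described in the abstract.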

  18. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not sure whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological

  19. Cryosphere-hydrosphere interactions: numerical modeling using the Regional Ocean Modeling System (ROMS) at different scales

    Science.gov (United States)

    Bergamasco, A.; Budgell, W. P.; Carniel, S.; Sclavo, M.

    2005-03-01

    Conveyor belt circulation controls global climate through heat and water fluxes with the atmosphere, carrying heat from tropical to polar regions and vice versa. This circulation, commonly referred to as thermohaline circulation (THC), seems to have a millennium time scale and nowadays--a non-glacial period--appears to be rather stable. However, concern is raised by the buildup of CO2 and other greenhouse gases in the atmosphere (IPCC, Third assessment report: Climate Change 2001. A contribution of working group I, II and III to the Third Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge Univ. Press, UK) 2001, http://www.ipcc.ch), as these may affect the THC conveyor paths. Since it is widely recognized that dense-water formation sites act as primary sources in strengthening quasi-stable THC paths (Stommel H., Tellus 13 (1961) 224), in order to simulate properly the consequences of such scenarios a better understanding of these oceanic processes is needed. To successfully model these processes, air-sea-ice-integrated modelling approaches are often required. Here we focus on two polar regions using the Regional Ocean Modeling System (ROMS). In the first region investigated, the North Atlantic-Arctic, where open-ocean deep convection and open-sea ice formation and dispersion under the intense air-sea interactions are the major engines, we use a new version of the coupled hydrodynamic-ice ROMS model. The second area belongs to the Antarctic region inside the Southern Ocean, where brine rejection during ice formation inside shelf seas produces dense water that, flowing along the continental slope, overflows and eventually becomes abyssal water.
Results show how integrated-modelling tasks have nowadays become more and more feasible and effective; numerical simulations dealing with large computational domains or challenging different climate scenarios can be run on multi-processor platforms and on systems like LINUX clusters, made of the same hardware as PCs, and

  20. Advanced modeling to accelerate the scale up of carbon capture technologies

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C.; Sun, Xin; Storlie, Curtis B.; Bhattacharyya, Debangsu

    2015-06-01

    In order to help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models to reduce the time required to develop and scale-up new carbon capture technologies. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.

  1. Scale modeling of reinforced concrete structures subjected to seismic loading

    International Nuclear Information System (INIS)

    Dove, R.C.

    1983-01-01

    Reinforced concrete Category I structures are so large that the possibility of seismically testing the prototype structures under controlled conditions is essentially nonexistent. However, experimental data, from which important structural properties can be determined and against which existing and new methods of seismic analysis can be benchmarked, are badly needed. As a result, seismic experiments on scaled models are of considerable interest. In this paper, the scaling laws are developed in some detail so that assumptions and choices based on judgement can be clearly recognized and their effects discussed. The scaling laws developed are then used to design a reinforced concrete model of a Category I structure. Finally, how scaling is affected by various types of damping (viscous, structural, and Coulomb) is discussed

  2. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin

    2016-06-01

    In this paper, we develop a two-scale reduced model for simulating Darcy flow in two-dimensional porous media with conductive fractures. We apply an approach motivated by the embedded fracture model (EFM) to simulate the flow on the coarse scale, and the effect of fractures on each coarse-scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system is resolved on an unstructured grid which represents the fractures accurately, while in the EFM used on the coarse scale, the flux interaction between fractures and matrix is treated as a source term, and the matrix-fracture system can be resolved on a structured grid. The Raviart-Thomas mixed finite element method is used for the solution of the coupled flows in the matrix and the fractures on both fine and coarse scales. Numerical results are presented to demonstrate the efficiency of the proposed model for simulation of flow in fractured porous media.

  3. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, uncertain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models represented the drought types correctly, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an
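
    The propagation features described above (pooling and lengthening of drought events as the signal moves from precipitation to runoff) can be illustrated with a simple threshold-level drought definition. This is an illustrative sketch, not one of the WATCH models; the smoothing filter is a crude stand-in for catchment storage, and the data are made up.

```python
import numpy as np

def drought_events(series, threshold):
    """Identify drought events as runs where `series` < `threshold`.

    Returns a list of (start, duration, severity) tuples, where severity
    is the cumulative deficit below the threshold.
    """
    below = series < threshold
    events = []
    start = None
    for i, b in enumerate(below):
        if b and start is None:
            start = i
        elif not b and start is not None:
            deficit = float(np.sum(threshold - series[start:i]))
            events.append((start, i - start, deficit))
            start = None
    if start is not None:
        deficit = float(np.sum(threshold - series[start:]))
        events.append((start, len(series) - start, deficit))
    return events

# A slowly responding (smoothed) system pools short dry spells into
# fewer, longer droughts -- the propagation signature described above.
rain = np.array([4, 0, 0, 4, 0, 0, 4, 4], dtype=float)
runoff = np.convolve(rain, np.ones(3) / 3, mode="same")  # crude storage
ev_rain = drought_events(rain, 2.0)      # two short events
ev_runoff = drought_events(runoff, 2.0)  # one longer event
```

    With this toy input, precipitation shows two separate two-step droughts while the smoothed runoff shows a single six-step drought of the same total severity, i.e. pooling and lengthening without loss of deficit.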

  4. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  5. Wind Farm Wake Models From Full Scale Data

    DEFF Research Database (Denmark)

    Knudsen, Torben; Bak, Thomas

    2012-01-01

    This investigation is part of the EU FP7 project “Distributed Control of Large-Scale Offshore Wind Farms”. The overall goal in this project is to develop wind farm controllers giving power set points to individual turbines in the farm in order to minimise mechanical loads and optimise power. One...... on real full scale data. The modelling is based on the so-called effective wind speed. It is shown that there is a wake for a wind direction range of up to 20 degrees. Further, when accounting for the wind direction, it is shown that the two model structures considered can both fit the experimental data...

  6. Preparing the Model for Prediction Across Scales (MPAS) for global retrospective air quality modeling

    Science.gov (United States)

    The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...

  7. Coarse-graining to the meso and continuum scales with molecular-dynamics-like models

    Science.gov (United States)

    Plimpton, Steve

    Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.

  8. Does lake size matter? Combining morphology and process modeling to examine the contribution of lake classes to population-scale processes

    Science.gov (United States)

    Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.

    2014-01-01

    With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers’ understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter, and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes to the total amount of lake surface area, volume, and perimeter. They also highlight the critical Pareto slopes at which total perimeter, area, and volume would be evenly distributed across lake size-classes: 0.63, 1, and 1.12, respectively. These models of morphology can be used in combination with models of process to create overarching “lake population” level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even though smaller lakes contribute relatively less to total surface area than larger lakes, the increase in carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
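
    The role of the Pareto slope can be checked directly. For a number distribution N(>A) ∝ A^(-c), the total area contributed by a size class [a, b] is the integral of A·n(A) over that class; at c = 1 this integral is the same for every decade-wide class. The notation and class edges below are ours, not the authors'.

```python
import numpy as np

def total_area_per_log_class(c, edges):
    """Total lake area contributed by each logarithmic size class,
    for a Pareto number distribution N(>A) ~ A**(-c).

    Density: n(A) = c * A**(-(c + 1)); area integrand: A * n(A) = c * A**(-c).
    Integral over [a, b]: c/(1-c) * (b**(1-c) - a**(1-c)) for c != 1,
    and c * ln(b/a) for c == 1.
    """
    areas = []
    for a, b in zip(edges[:-1], edges[1:]):
        if np.isclose(c, 1.0):
            areas.append(c * np.log(b / a))
        else:
            areas.append(c / (1 - c) * (b ** (1 - c) - a ** (1 - c)))
    return np.array(areas)

edges = np.logspace(0, 4, 5)  # four decade-wide size classes
even = total_area_per_log_class(1.0, edges)   # equal area in every class
skew = total_area_per_log_class(0.63, edges)  # area skewed to large lakes
```

    At a slope of 0.63 the largest size class dominates total area, while at a slope of 1 every decade contributes equally, matching the threshold behaviour described in the abstract.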

  9. Use of genome-scale microbial models for metabolic engineering

    DEFF Research Database (Denmark)

    Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens

    2004-01-01

    Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role for phenotypic analysis, but can also be used for the design of optimal metaboli...... network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction....

  10. Probabilistic Downscaling of Remote Sensing Data with Applications for Multi-Scale Biogeochemical Flux Modeling.

    Science.gov (United States)

    Stoy, Paul C; Quaife, Tristan

    2015-01-01

    Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from that created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
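
    Tikhonov regularization for downscaling can be sketched as a least-squares problem: find a fine grid whose block averages match the coarse data while a Laplacian penalty, weighted by γ, imposes smoothness. This is a minimal sketch of the general idea, not the study's implementation; the grid sizes and γ value are illustrative.

```python
import numpy as np

def downscale_2dtr(coarse, factor, gamma):
    """Infer a fine grid from `coarse` block means by Tikhonov
    regularization: minimize ||M x - y||^2 + gamma * ||L x||^2,
    where M block-averages the fine grid and L is a 2-D Laplacian."""
    ny, nx = coarse.shape
    Ny, Nx = ny * factor, nx * factor
    n = Ny * Nx

    # Block-averaging operator M (one row per coarse cell).
    M = np.zeros((ny * nx, n))
    for j in range(ny):
        for i in range(nx):
            for dj in range(factor):
                for di in range(factor):
                    col = (j * factor + dj) * Nx + (i * factor + di)
                    M[j * nx + i, col] = 1.0 / factor**2

    # Discrete Laplacian L (roughness penalty, Neumann boundaries).
    L = np.zeros((n, n))
    for j in range(Ny):
        for i in range(Nx):
            r = j * Nx + i
            L[r, r] = 4.0
            for dj, di in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                jj, ii = j + dj, i + di
                if 0 <= jj < Ny and 0 <= ii < Nx:
                    L[r, jj * Nx + ii] = -1.0
                else:
                    L[r, r] -= 1.0

    # Normal equations of the regularized least-squares problem.
    A = M.T @ M + gamma * (L.T @ L)
    x = np.linalg.solve(A, M.T @ coarse.ravel())
    return x.reshape(Ny, Nx)

fine = downscale_2dtr(np.array([[0.2, 0.8], [0.4, 0.6]]), factor=4, gamma=0.1)
```

    Larger γ produces smoother fields; the overall mean of the fine grid is preserved exactly because the Laplacian penalty is blind to constants.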

  11. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  12. Scaling-up spatially-explicit ecological models using graphics processors

    NARCIS (Netherlands)

    Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis

    2011-01-01

    How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to

  13. A complementary model for medical subspecialty training in South ...

    African Journals Online (AJOL)

    research was to develop a business model to complement the current academic ... larger-scale potential public-private partnerships (PPPs). The model ... complementary system, which will benefit both the private and the public sectors.

  14. Preparatory hydrogeological calculations for site scale models of Aberg, Beberg and Ceberg

    International Nuclear Information System (INIS)

    Gylling, B.; Lindgren, M.; Widen, H.

    1999-03-01

    The purpose of the study is to evaluate the basis for site scale models of the three sites Aberg, Beberg and Ceberg in terms of: extent and position of site scale model domains; numerical implementation of the geologic structural model; and systematic review of structural data and control of compatibility in data sets. Some of the hydrogeological features of each site are briefly described. A summary of the results from the regional modelling exercises for Aberg, Beberg and Ceberg is given. The results from the regional models may be used as a basis for determining the location and size of the site scale models and provide such models with boundary conditions. Results from the regional models may also indicate suitable locations for repositories. The resulting locations and sizes for site scale models are presented in figures. There are also figures showing that the structural models interpreted by HYDRASTAR do not conflict with the repository tunnels. In addition, this has been verified with TRAZON, a modified version of HYDRASTAR for checking starting positions, which reveals conflicts between starting positions and fracture zones if present

  15. Modeling Lactococcus lactis using a genome-scale flux model

    Directory of Open Access Journals (Sweden)

    Nielsen Jens

    2005-06-01

    Abstract Background Genome-scale flux models are useful tools to represent and analyze microbial metabolism. In this work we reconstructed the metabolic network of the lactic acid bacterium Lactococcus lactis and developed a genome-scale flux model able to simulate and analyze network capabilities and whole-cell function under aerobic and anaerobic continuous cultures. Flux balance analysis (FBA) and minimization of metabolic adjustment (MOMA) were used as modeling frameworks. Results The metabolic network was reconstructed using the annotated genome sequence from L. lactis ssp. lactis IL1403 together with physiological and biochemical information. The established network comprised a total of 621 reactions and 509 metabolites, representing the overall metabolism of L. lactis. Experimental data reported in the literature were used to fit the model to phenotypic observations. Regulatory constraints had to be included to simulate certain metabolic features, such as the shift from homolactic to heterolactic fermentation. A minimal medium for in silico growth was identified, indicating the requirement of four amino acids in addition to a sugar. Remarkably, de novo biosynthesis of four other amino acids was observed even when all amino acids were supplied, which is in good agreement with experimental observations. Additionally, enhanced metabolic engineering strategies for improved diacetyl-producing strains were designed. Conclusion The L. lactis metabolic network can now be used for a better understanding of lactococcal metabolic capabilities and potential, for the design of enhanced metabolic engineering strategies, and for integration with other types of 'omic' data, to assist in finding new information on cellular organization and function.
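
    Flux balance analysis as used here reduces to a linear program: maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. A minimal sketch on a hypothetical three-reaction toy network (not the 621-reaction L. lactis model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites A, B; columns: reactions).
#   R1: -> A        R2: A -> B        R3: B -> (biomass)
S = np.array([
    [1.0, -1.0,  0.0],   # metabolite A balance
    [0.0,  1.0, -1.0],   # metabolite B balance
])

# FBA: maximize v3 (biomass) subject to S v = 0 and flux bounds.
c = np.array([0.0, 0.0, -1.0])            # linprog minimizes, so negate
bounds = [(0, 10), (0, None), (0, None)]  # uptake through R1 capped at 10
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
```

    The optimum routes the full uptake capacity through the chain, giving all three fluxes equal to 10; in a genome-scale model the same structure applies with hundreds of metabolite balances and reaction bounds.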

  16. Wildland Fire Behaviour Case Studies and Fuel Models for Landscape-Scale Fire Modeling

    Directory of Open Access Journals (Sweden)

    Paul-Antoine Santoni

    2011-01-01

    This work presents the extension of a physical model for the spreading of surface fire to landscape scale. In previous work, the model was validated at laboratory scale for fire spreading across litters. The model was then modified to consider the structure of actual vegetation and was included in the wildland fire calculation system Forefire, which allows converting the two-dimensional model of fire spread to three dimensions, taking into account spatial information. Two wildland fire behavior case studies were elaborated and used as a basis to test the simulator. Both fires were reconstructed, paying attention to the vegetation mapping, fire history, and meteorological data. The local calibration of the simulator required the development of appropriate fuel models for shrubland vegetation (maquis) for use with the model of fire spread. This study showed the capabilities of the simulator during the typical drought season characterizing the Mediterranean climate, when most wildfires occur.

  17. Atmospheric dispersion modelling over complex terrain at small scale

    Science.gov (United States)

    Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.

    2014-03-01

    A previous study, concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and important surrounding topography at meso-scale (1:9000), revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion and surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of the air quality at populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be observed.

  18. Incorporating the Impacts of Small Scale Rock Heterogeneity into Models of Flow and Trapping in Target UK CO2 Storage Systems

    Science.gov (United States)

    Jackson, S. J.; Reynolds, C.; Krevor, S. C.

    2017-12-01

    Predictions of the flow behaviour and storage capacity of CO2 in subsurface reservoirs depend on accurate modelling of multiphase flow and trapping. A number of studies have shown that small-scale rock heterogeneities have a significant impact on CO2 flow propagating to larger scales. The need to simulate flow in heterogeneous reservoir systems has led to the development of numerical upscaling techniques which are widely used in industry. Less well understood, however, is the best approach for incorporating laboratory characterisations of small-scale heterogeneities into models. At small scales, heterogeneity in the capillary pressure characteristic function becomes significant. We present a digital rock workflow that combines core flood experiments with numerical simulations to characterise sub-core scale capillary pressure heterogeneities within rock cores from several target UK storage reservoirs - the Bunter, Captain and Ormskirk sandstone formations. Measured intrinsic properties (permeability, capillary pressure, relative permeability) and 3D saturation maps from steady-state core flood experiments were the primary inputs used to construct a 3D digital rock model in CMG IMEX. We used vertical end-point scaling to iteratively update the voxel-by-voxel capillary pressure curves from the average MICP curve, with each iteration more closely predicting the experimental saturations and pressure drops. Once characterised, the digital rock cores were used to predict equivalent flow functions, such as relative permeability and residual trapping, across the range of flow conditions estimated to prevail in the CO2 storage reservoirs. In the case of the Captain sandstone, rock cores were characterised across an entire 100 m vertical transect of the reservoir. This allowed analysis of the upscaled impact of small-scale heterogeneity on flow and trapping. Figure 1 shows the varying degree to which heterogeneity impacted flow depending on the capillary number in the

  19. Description of Muzzle Blast by Modified Ideal Scaling Models

    Directory of Open Access Journals (Sweden)

    Kevin S. Fansler

    1998-01-01

    Gun blast data from a large variety of weapons are scaled and presented for both the instantaneous energy release and the constant energy deposition rate models. For both ideal explosion models, similar amounts of data scatter occur for the peak overpressure but the instantaneous energy release model correlated the impulse data significantly better, particularly for the region in front of the gun. Two parameters that characterize gun blast are used in conjunction with the ideal scaling models to improve the data correlation. The gun-emptying parameter works particularly well with the instantaneous energy release model to improve data correlation. In particular, the impulse, especially in the forward direction of the gun, is correlated significantly better using the instantaneous energy release model coupled with the use of the gun-emptying parameter. The use of the Mach disc location parameter improves the correlation only marginally. A predictive model is obtained from the modified instantaneous energy release correlation.
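
    The ideal (instantaneous energy release) scaling used here can be illustrated with Sachs energy scaling, in which distances are divided by the cube root of the released energy so that blast curves from different weapons collapse onto one curve. The function and numbers below are a generic sketch of that scaling, not the paper's modified correlation.

```python
def sachs_scaled_distance(r, energy, p_ambient=101325.0):
    """Sachs-scaled distance: r * (p0 / E)**(1/3).

    Peak-overpressure data plotted against this scaled distance collapse
    onto a single curve for an instantaneous (point) energy release.
    """
    return r * (p_ambient / energy) ** (1.0 / 3.0)

# Eight times the muzzle energy doubles the physical distance at which
# the same scaled distance (hence a comparable overpressure) is reached.
z_small = sachs_scaled_distance(1.0, 1.0e6)
z_large = sachs_scaled_distance(2.0, 8.0e6)
```

    The paper's gun-emptying and Mach-disc parameters then correct the residual scatter that this ideal scaling leaves, particularly in the forward direction.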

  20. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for the analysis of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  1. Anomalous Scaling Behaviors in a Rice-Pile Model with Two Different Driving Mechanisms

    International Nuclear Information System (INIS)

    Zhang Duanming; Sun Hongzhang; Li Zhihua; Pan Guijun; Yu Boming; Li Rui; Yin Yanping

    2005-01-01

    The moment analysis is applied to perform large-scale simulations of the rice-pile model. We find that this model shows different scaling behavior depending on the driving mechanism used. With noisy driving, the rice-pile model violates the finite-size scaling hypothesis, whereas with fixed driving it shows well-defined avalanche exponents and displays good finite-size scaling behavior for the avalanche size and time duration distributions.
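
    A minimal Oslo-type rice-pile with fixed driving (one grain added at the left boundary per step) can be sketched as follows; moment analysis then studies how moments of the avalanche-size distribution scale with system size. This is an illustrative toy, not the authors' simulation code, and the parameters are arbitrary.

```python
import random

def oslo_ricepile(n_sites, n_grains, seed=1):
    """1-D Oslo rice-pile: a grain is added at site 0 each step; site i
    topples when its slope exceeds a random critical slope in {1, 2}.
    Grains leave the pile at the open right edge. Returns avalanche sizes."""
    random.seed(seed)
    h = [0] * (n_sites + 1)          # heights; h[n_sites] is the open edge
    zc = [random.choice((1, 2)) for _ in range(n_sites)]
    sizes = []
    for _ in range(n_grains):
        h[0] += 1                    # fixed driving at the boundary
        s = 0
        unstable = True
        while unstable:
            unstable = False
            for i in range(n_sites):
                if h[i] - h[i + 1] > zc[i]:
                    h[i] -= 1
                    if i + 1 < n_sites:
                        h[i + 1] += 1   # at the last site the grain leaves
                    zc[i] = random.choice((1, 2))  # new critical slope
                    s += 1
                    unstable = True
        sizes.append(s)
    return sizes

sizes = oslo_ricepile(16, 2000)
# Moment analysis: estimate <s> and <s^2>; repeating this for several
# system sizes L gives the scaling exponents of the moments.
m1 = sum(sizes) / len(sizes)
m2 = sum(s * s for s in sizes) / len(sizes)
```

    Repeating the run for a range of `n_sites` values and fitting log⟨s^q⟩ against log L is the finite-size scaling test the abstract refers to.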

  2. When larger brains do not have more neurons: Increased numbers of cells are compensated by decreased average cell size across mouse individuals

    Directory of Open Access Journals (Sweden)

    Suzana eHerculano-Houzel

    2015-06-01

    There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains why increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease.

  3. Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model

    Energy Technology Data Exchange (ETDEWEB)

    T. Miller

    2004-11-15

    The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the ''Saturated Zone Site-Scale Flow Model'' (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale

  4. Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model

    International Nuclear Information System (INIS)

    Miller, T.

    2004-01-01

    The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the ''Saturated Zone Site-Scale Flow Model'' (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale SZ flow model, the HFM

  5. A drainage basin scale model for earthflow-prone landscapes over geomorphic timescales

    Science.gov (United States)

    Booth, A. M.; Roering, J. J.

    2009-12-01

    Landscape evolution models can be informative tools for understanding how sediment transport processes, regulated by tectonic and climatic forcing, interact to control fundamental landscape characteristics such as relief, channel network organization, and hillslope form. Many studies have proposed simple mathematical geomorphic transport laws for modeling hillslope and fluvial processes, and these models are capable of generating synthetic landscapes similar to many of those observed in nature. However, deep-seated mass movements dominate the topographic development of many tectonically active landscapes, yet few compelling transport laws exist for accurately describing these processes at the drainage basin scale. Specifically, several detailed field and theoretical studies describe the mechanics of deep-seated earthflows, such as those found throughout the northern California coast ranges, but these studies are often restricted to a single earthflow site. Here, we generalize earthflow behavior to larger spatial and geomorphically significant temporal scales using a mathematical model to determine how interactions between earthflow, weathering, hillslope, and fluvial processes control sediment flux and topographic form. The model couples the evolution of the land surface with the evolution of a weathered zone driven by fluctuations in the groundwater table. The lower boundary of this weathered zone sets the potential failure plane for earthflows, which occur once the shear stress on this plane exceeds a threshold value. Earthflows deform downslope with a non-Newtonian viscous rheology while gullying, modeled with a stream power equation, and soil creep, modeled with a diffusion equation, continuously act on the land surface. To compare the intensities of these different processes, we define a characteristic timescale for each modeled process, and demonstrate how the ratios of these timescales control the steady-state topographic characteristics of the simulated

  6. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q₃ and S₃ can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales

  7. Vortex forcing model for turbulent flow over spanwise-heterogeneous topographies: scaling arguments and similarity solution

    Science.gov (United States)

    Anderson, William; Yang, Jianzhi

    2017-11-01

    Spanwise surface heterogeneity beneath high-Reynolds number, fully-rough wall turbulence is known to induce mean secondary flows in the form of counter-rotating streamwise vortices. The secondary flows are a manifestation of Prandtl's secondary flow of the second kind - driven and sustained by spatial heterogeneity of components of the turbulent (Reynolds averaged) stress tensor. The spacing between adjacent surface heterogeneities serves as a control on the spatial extent of the counter-rotating cells, while their intensity is controlled by the spanwise gradient in imposed drag (where larger gradients associated with more dramatic transitions in roughness induce stronger cells). In this work, we have performed an order of magnitude analysis of the mean (Reynolds averaged) streamwise vorticity transport equation, revealing the scaling dependence of circulation upon spanwise spacing. The scaling arguments are supported by simulation data. Then, we demonstrate that mean streamwise velocity can be predicted a priori via a similarity solution to the mean streamwise vorticity transport equation. A vortex forcing term was used to represent the effects of spanwise topographic heterogeneity within the flow. Efficacy of the vortex forcing term was established with large-eddy simulation cases, wherein vortex forcing model parameters were altered to capture different values of spanwise spacing.

  8. Anomalous scaling in an age-dependent branching model.

    Science.gov (United States)

    Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin

    2015-02-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(−α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition (α = 1) tree depth grows as (log n)². This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus providing a theoretical support for age-dependent speciation and associating it to the occurrence of a critical point.
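    The growth rule described in this abstract is simple enough to simulate directly. The sketch below is an illustrative reading of the model, not the authors' code: a binary tree is grown one split at a time, with each branch tip chosen to split with probability proportional to its age raised to −α, and the mean leaf depth is returned as a proxy for tree depth.

```python
import random

def mean_depth(n, alpha, rng):
    """Grow a binary tree to n leaves; at step t a leaf of age tau
    is chosen to branch with probability proportional to tau**(-alpha).
    Returns the mean leaf depth (a proxy for tree depth)."""
    leaves = [(0, 0)]                      # (depth, birth time)
    for t in range(1, n):
        weights = [(t - birth + 1) ** -alpha for _, birth in leaves]
        r = rng.random() * sum(weights)
        acc, chosen = 0.0, len(leaves) - 1
        for i, w in enumerate(weights):
            acc += w
            if acc >= r:
                chosen = i
                break
        depth, _ = leaves.pop(chosen)      # split the chosen leaf in two
        leaves += [(depth + 1, t), (depth + 1, t)]
    return sum(d for d, _ in leaves) / len(leaves)

rng = random.Random(0)
flat = mean_depth(400, 0.0, rng)   # alpha = 0: uniform choice, depth ~ log n
aged = mean_depth(400, 2.0, rng)   # strong age penalty: young tips split repeatedly
```

    For α = 0 every leaf is equally likely to branch and depth grows logarithmically; increasing α concentrates splitting on young tips, pushing the tree toward the deeper, algebraic regime the abstract describes, with (log n)² scaling at the α = 1 transition.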

  9. Air scaling and modeling studies for the 1/5-scale mark I boiling water reactor pressure suppression experiment

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-01-04

    Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this.

  10. Air scaling and modeling studies for the 1/5-scale mark I boiling water reactor pressure suppression experiment

    International Nuclear Information System (INIS)

    Lai, W.; McCauley, E.W.

    1978-01-01

    Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this

  11. Doubly stochastic Poisson process models for precipitation at fine time-scales

    Science.gov (United States)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
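    As a concrete illustration of this model class (not the authors' fitted model), a doubly stochastic Poisson (Cox) process can be simulated by letting the rain rate jump between a "dry" and a "wet" state according to a two-state Markov chain, then generating Poisson event times at whichever rate is current. All rates below are made-up illustrative values.

```python
import random

def simulate_cox(T, rates=(0.05, 4.0), switch=(0.02, 0.5), rng=None):
    """Two-state doubly stochastic Poisson process on [0, T].
    rates[s]  : event intensity while in state s (e.g. bucket tips per minute)
    switch[s] : rate of leaving state s (exponential holding times)"""
    rng = rng or random.Random()
    t, state, events = 0.0, 0, []
    while t < T:
        end = min(t + rng.expovariate(switch[state]), T)
        u = t
        while True:                      # homogeneous Poisson on [t, end)
            u += rng.expovariate(rates[state])
            if u >= end:
                break
            events.append(u)
        t, state = end, 1 - state        # switch dry <-> wet
    return events

events = simulate_cox(500.0, rng=random.Random(42))
```

    Because the intensity itself is random, the event series shows the clustered bursts characteristic of fine time-scale rainfall, which a constant-rate Poisson process cannot reproduce.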

  12. Bridging scales through multiscale modeling: A case study on Protein Kinase A

    Directory of Open Access Journals (Sweden)

    Sophia P Hirakis

    2015-09-01

    Full Text Available The goal of multiscale modeling in biology is to use structurally based physico-chemical models to integrate across temporal and spatial scales of biology and thereby improve mechanistic understanding of, for example, how a single mutation can alter organism-scale phenotypes. This approach may also inform therapeutic strategies or identify candidate drug targets that might otherwise have been overlooked. However, in many cases, it remains unclear how best to synthesize information obtained from various scales and analysis approaches, such as atomistic molecular models, Markov state models (MSMs), subcellular network models, and whole cell models. In this paper, we use protein kinase A (PKA) activation as a case study to explore how computational methods that model different physical scales can complement each other and integrate into an improved multiscale representation of the biological mechanisms. Using measured crystal structures, we show how molecular dynamics (MD) simulations coupled with atomic-scale MSMs can provide conformations for Brownian dynamics (BD) simulations to feed transitional states and kinetic parameters into protein-scale MSMs. We discuss how milestoning can give reaction probabilities and forward-rate constants of cAMP association events by seamlessly integrating MD and BD simulation scales. These rate constants coupled with MSMs provide a robust representation of the free energy landscape, enabling access to kinetic and thermodynamic parameters unavailable from current experimental data. These approaches have helped to illuminate the cooperative nature of PKA activation in response to distinct cAMP binding events. Collectively, this approach exemplifies a general strategy for multiscale model development that is applicable to a wide range of biological problems.

  13. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10⁷ ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  14. A pragmatic approach to modelling soil and water conservation measures with a catchment scale erosion model.

    NARCIS (Netherlands)

    Hessel, R.; Tenge, A.J.M.

    2008-01-01

    To reduce soil erosion, soil and water conservation (SWC) methods are often used. However, no method exists to model beforehand how implementing such measures will affect erosion at catchment scale. A method was developed to simulate the effects of SWC measures with catchment scale erosion models.

  15. Upscaling of U(VI) desorption and transport from decimeter‐scale heterogeneity to plume‐scale modeling

    Science.gov (United States)

    Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan; Briggs, Martin A.; Day-Lewis, Frederick D.

    2015-01-01

    Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results in batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  16. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects ... In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary ...

  17. Scaling considerations related to interactions of hydrologic, pedologic and geomorphic processes (Invited)

    Science.gov (United States)

    Sidle, R. C.

    2013-12-01

    Hydrologic, pedologic, and geomorphic processes are strongly interrelated and affected by scale. These interactions exert important controls on runoff generation, preferential flow, contaminant transport, surface erosion, and mass wasting. Measurement of hydraulic conductivity (K) and infiltration capacity at small scales generally underestimates these values for application at larger field, hillslope, or catchment scales. Both vertical and slope-parallel saturated flow and related contaminant transport are often influenced by interconnected networks of preferential flow paths, which are not captured in K measurements derived from soil cores. Using such K values in models may underestimate water and contaminant fluxes and runoff peaks. As shown in small-scale runoff plot studies, infiltration rates are typically lower than integrated infiltration across a hillslope or in headwater catchments. The resultant greater infiltration-excess overland flow in small plots compared to larger landscapes is attributed to the lack of preferential flow continuity; plot border effects; greater homogeneity of rainfall inputs, topography and soil physical properties; and magnified effects of hydrophobicity in small plots. At the hillslope scale, isolated areas with high infiltration capacity can greatly reduce surface runoff and surface erosion. These hydropedologic and hydrogeomorphic processes are also relevant to both occurrence and timing of landslides. The focus of many landslide studies has typically been either on small-scale vadose zone processes and how these affect soil mechanical properties or on larger scale, more descriptive geomorphic studies. One of the issues in translating laboratory-based investigations on geotechnical behavior of soils to field scales where landslides occur is the characterization of large-scale hydrological processes and flow paths that occur in heterogeneous and anisotropic porous media. These processes are not only affected

  18. Confirmatory Factor Analysis of the Combined Social Phobia Scale and Social Interaction Anxiety Scale: Support for a Bifactor Model

    OpenAIRE

    Gomez, Rapson; Watson, Shaun D.

    2017-01-01

    For the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) together, this study examined support for a bifactor model, and also the internal consistency reliability and external validity of the factors in this model. Participants (N = 526) were adults from the general community who completed the SPS and SIAS. Confirmatory factor analysis (CFA) of their ratings indicated good support for the bifactor model. For this model, the loadings for all but six items were higher o...

  19. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how the precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North America gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  20. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10 × 10 m² modeling grid size. A modeling grid size of about 10 × 10 m² was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
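    The two diagnostics used in this study, the empirical semivariogram and the characteristic (integral) length of the autocorrelation, are easy to state in code. Below is a minimal 1-D sketch with synthetic data; it is illustrative only, since the study worked with 2-D remotely sensed fields.

```python
import numpy as np

def semivariogram(z, max_lag):
    """Empirical semivariance gamma(h) for an equally spaced 1-D transect."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

def characteristic_length(z, dx=1.0):
    """Integral of the sample autocorrelation up to its first zero crossing."""
    z = z - z.mean()
    var = np.dot(z, z)
    rho = np.array([1.0] + [np.dot(z[h:], z[:-h]) / var
                            for h in range(1, len(z) // 2)])
    cut = int(np.argmax(rho < 0)) if (rho < 0).any() else len(rho)
    return float(rho[:cut].sum() * dx)

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)                                # uncorrelated field
smooth = np.convolve(noise, np.ones(15) / 15, mode="valid")  # correlated field
```

    Smoothing white noise over a 15-sample window stretches its characteristic length accordingly; it is this kind of length scale that the study compared against candidate modeling grid sizes.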

  1. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10 × 10 m² modeling grid size. A modeling grid size of about 10 × 10 m² was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.

  2. FFTF scale-model characterization of flow-induced vibrational response of reactor internals

    International Nuclear Information System (INIS)

    Ryan, J.A.; Julyk, L.J.

    1977-01-01

    As an integral part of the Fast Test Reactor Vibration Program for Reactor Internals, the flow-induced vibrational characteristics of scaled Fast Test Reactor core internal and peripheral components were assessed under scaled and simulated prototype flow conditions in the Hydraulic Core Mockup. The Hydraulic Core Mockup, a 0.285 geometric scale model, was designed to model the vibrational and hydraulic characteristics of the Fast Test Reactor. Model component vibrational characteristics were measured and determined over a range of 36 percent to 111 percent of the scaled prototype design flow. Selected model and prototype components were shaker tested to establish modal characteristics. The dynamic response of the Hydraulic Core Mockup components exhibited no anomalous flow-rate dependent or modal characteristics, and prototype response predictions were adjudged acceptable

  3. FFTF scale-model characterization of flow induced vibrational response of reactor internals

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, J A; Julyk, L J [Hanford Engineering Development Laboratory, Richland, WA (United States)

    1977-12-01

    As an integral part of the Fast Test Reactor Vibration Program for Reactor Internals, the flow-induced vibrational characteristics of scaled Fast Test Reactor core internal and peripheral components were assessed under scaled and simulated prototype flow conditions in the Hydraulic Core Mockup. The Hydraulic Core Mockup, a 0.285 geometric scale model, was designed to model the vibrational and hydraulic characteristics of the Fast Test Reactor. Model component vibrational characteristics were measured and determined over a range of 36% to 111% of the scaled prototype design flow. Selected model and prototype components were shaker tested to establish modal characteristics. The dynamic response of the Hydraulic Core Mockup components exhibited no anomalous flow-rate dependent or modal characteristics, and prototype response predictions were adjudged acceptable. (author)

  4. FFTF scale-model characterization of flow induced vibrational response of reactor internals

    International Nuclear Information System (INIS)

    Ryan, J.A.; Julyk, L.J.

    1977-01-01

    As an integral part of the Fast Test Reactor Vibration Program for Reactor Internals, the flow-induced vibrational characteristics of scaled Fast Test Reactor core internal and peripheral components were assessed under scaled and simulated prototype flow conditions in the Hydraulic Core Mockup. The Hydraulic Core Mockup, a 0.285 geometric scale model, was designed to model the vibrational and hydraulic characteristics of the Fast Test Reactor. Model component vibrational characteristics were measured and determined over a range of 36% to 111% of the scaled prototype design flow. Selected model and prototype components were shaker tested to establish modal characteristics. The dynamic response of the Hydraulic Core Mockup components exhibited no anomalous flow-rate dependent or modal characteristics, and prototype response predictions were adjudged acceptable. (author)

  5. Small Scale Problems of the ΛCDM Model: A Short Review

    Directory of Open Access Journals (Sweden)

    Antonino Del Popolo

    2017-02-01

    Full Text Available The ΛCDM model, or concordance cosmology, as it is often called, is a paradigm at its maturity. It is clearly able to describe the universe at large scales, even if some issues remain open, such as the cosmological constant problem, the small-scale problems in galaxy formation, or the unexplained anomalies in the CMB. ΛCDM clearly shows difficulty at small scales, which could be related to our scant understanding of the nature of dark matter and of gravity, or to the role of baryon physics, which is not well understood and implemented in simulation codes or in semi-analytic models. At this stage, it is of fundamental importance to understand whether the problems encountered by the ΛCDM model are a sign of its limits or a sign of our failures in getting the finer details right. In the present paper, we will review the small-scale problems of the ΛCDM model, and we will discuss the proposed solutions and to what extent they are able to give us a theory accurately describing the phenomena in the complete range of scales of the observed universe.

  6. Analysis, scale modeling, and full-scale test of a railcar and spent-nuclear-fuel shipping cask in a high-velocity impact against a rigid barrier

    International Nuclear Information System (INIS)

    Huerta, M.

    1981-06-01

    This report describes the mathematical analysis, the physical scale modeling, and a full-scale crash test of a railcar spent-nuclear-fuel shipping system. The mathematical analysis utilized a lumped-parameter model to predict the structural response of the railcar and the shipping cask. The physical scale modeling analysis consisted of two crash tests that used 1/8-scale models to assess railcar and shipping cask damage. The full-scale crash test, conducted with retired railcar equipment, was carefully monitored with onboard instrumentation and high-speed photography. Results of the mathematical and scale modeling analyses are compared with the full-scale test. 29 figures

  7. Verification of Simulation Results Using Scale Model Flight Test Trajectories

    National Research Council Canada - National Science Library

    Obermark, Jeff

    2004-01-01

    .... A second compromise scaling law was investigated as a possible improvement. For ejector-driven events at minimum sideslip, the most important variables for scale model construction are the mass moment of inertia and ejector...

  8. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 × 1,000 km² in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. The high-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. Also how to

  9. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC, a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strengths, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanical built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has far more versatile applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model can agree fairly well with the data of bond properties and satisfactorily explain chemical observations of elements throughout the Periodic Table.

  10. A general model for metabolic scaling in self-similar asymmetric networks.

    Directory of Open Access Journals (Sweden)

    Alexander Byers Brummer

    2017-03-01

    Full Text Available How a particular attribute of an organism changes or scales with its body size is known as an allometry. Biological allometries, such as metabolic scaling, have been hypothesized to result from selection to maximize how vascular networks fill space yet minimize internal transport distances and resistances. The West, Brown, Enquist (WBE) model argues that these two principles (space-filling and energy minimization) (i) are general principles underlying the evolution of the diversity of biological networks across plants and animals and (ii) can be used to predict how the resulting geometry of biological networks then governs their allometric scaling. Perhaps the most central biological allometry is how metabolic rate scales with body size. A core assumption of the WBE model is that networks are symmetric with respect to their geometric properties. That is, any two given branches within the same generation in the network are assumed to have identical lengths and radii. However, biological networks are rarely if ever symmetric. An open question is: does incorporating asymmetric branching change or influence the predictions of the WBE model? We derive a general network model that relaxes the symmetric assumption and define two classes of asymmetrically bifurcating networks. We show that asymmetric branching can be incorporated into the WBE model. This asymmetric version of the WBE model results in several theoretical predictions for the structure, physiology, and metabolism of organisms, specifically in the case of the cardiovascular system. We show how network asymmetry can now be incorporated in the many allometric scaling relationships via total network volume. Most importantly, we show that the 3/4 metabolic scaling exponent from Kleiber's Law can still be attained within many asymmetric networks.
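The 3/4 exponent quoted above follows, in the symmetric case, from the WBE assumptions of area-preserving branching (radius scale factor β = n^(-1/2)) and space-filling lengths (length scale factor γ = n^(-1/3)). A minimal sketch of that arithmetic, using the standard symmetric-network exponent formula a = -ln(n)/ln(γβ²); the function name and defaults are illustrative, not from the paper:

```python
import math

def wbe_exponent(n, beta=None, gamma=None):
    """Metabolic scaling exponent a for a symmetric WBE-style network.

    n     -- branching ratio (daughter vessels per parent)
    beta  -- radius scale factor r_{k+1}/r_k (default: area-preserving n**-1/2)
    gamma -- length scale factor l_{k+1}/l_k (default: space-filling n**-1/3)

    In the symmetric model, metabolic rate B ~ M**a with
    a = -ln(n) / ln(gamma * beta**2).
    """
    beta = n ** (-0.5) if beta is None else beta
    gamma = n ** (-1.0 / 3.0) if gamma is None else gamma
    return -math.log(n) / math.log(gamma * beta ** 2)

print(wbe_exponent(2))  # -> 0.75 for the classic space-filling, area-preserving case
```

Note that with the default scale factors the result is independent of the branching ratio n; relaxing β or γ, as asymmetric networks effectively do branch by branch, is what can shift the exponent away from 3/4.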

  11. Multi-scale modelling of the hydro-mechanical behaviour of argillaceous rocks

    International Nuclear Information System (INIS)

    Van den Eijnden, Bram

    2015-01-01

    Feasibility studies for deep geological radioactive waste disposal facilities have led to an increased interest in the geomechanical modelling of its host rock. In France, a potential host rock is the Callovo-Oxfordian clay-stone. The low permeability of this material is of key importance, as the principle of deep geological disposal strongly relies on the sealing capacity of the host formation. The permeability being coupled to the mechanical material state, hydro-mechanical coupled behaviour of the clay-stone becomes important when mechanical alterations are induced by gallery excavation in the so-called excavation damaged zone (EDZ). In materials with microstructure such as the Callovo-Oxfordian clay-stone, the macroscopic behaviour has its origin in the interaction of its micromechanical constituents. In addition to the coupling between hydraulic and mechanical behaviour, a coupling between the micro (material microstructure) and macro scale will be made. By means of the development of a framework of computational homogenization for hydro-mechanical coupling, a double-scale modelling approach is formulated, for which the macro-scale constitutive relations are derived from the microscale by homogenization. An existing model for the modelling of hydro-mechanical coupling based on the distinct definition of grains and intergranular pore space is adopted and modified to enable the application of first order computational homogenization for obtaining macro-scale stress and fluid transport responses. This model is used to constitute a periodic representative elementary volume (REV) that allows the representation of the local macroscopic behaviour of the clay-stone. As a response to deformation loading, the behaviour of the REV represents the numerical equivalent of a constitutive relation at the macro-scale. For the required consistent tangent operators, the framework of computational homogenization by static condensation is extended to hydro-mechanical coupling. 

  12. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    International Nuclear Information System (INIS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-01-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves generated during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates when gravity becomes repulsive in nature.

  13. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    Science.gov (United States)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves generated during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates when gravity becomes repulsive in nature.

  14. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    Energy Technology Data Exchange (ETDEWEB)

    Reyes, Luz M., E-mail: luzmarinareyes@gmail.com [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Moreno, Claudia, E-mail: claudia.moreno@cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Madriz Aguilar, Jose Edgar, E-mail: edgar.madriz@red.cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Bellini, Mauricio, E-mail: mbellini@mdp.edu.ar [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata (UNMdP), Funes 3350, C.P. 7600, Mar del Plata (Argentina); Instituto de Investigaciones Fisicas de Mar del Plata (IFIMAR) - Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)

    2012-10-22

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves generated during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates when gravity becomes repulsive in nature.

  15. Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process

    DEFF Research Database (Denmark)

    Konttinen, Jukka T.; Johnsson, Jan Erik

    1999-01-01

    Regenerable mixed metal oxide sorbents are prime candidates for the removal of hydrogen sulfide from hot gasifier gas in the simplified integrated gasification combined cycle (IGCC) process. As part of the regenerative sulfur removal process development, reactor models are needed for scale-up: steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process to be studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400..., described with a model that does not account for bed hydrodynamics. The pilot-scale test run results, obtained in the test runs of the sulfur removal process with real coal gasifier gas, have been used for parameter estimation. The validity of the reactor model for commercial-scale design applications is discussed.

  16. Full scale model studies of nuclear power stations for earthquake resistance

    International Nuclear Information System (INIS)

    Kirillov, A.P.; Ambriashvili, Ju. K.; Kozlov, A.V.

    The behaviour of nuclear power plants and their equipment under seismic action is not well understood. In the absence of a well-established method for the aseismic design of nuclear power plants and their equipment, it is necessary to carry out experimental investigations on models, fragments and full-scale structures. The present study includes experimental investigations on different scale models and on existing nuclear power stations under impulse and explosion effects simulating seismic loads. The experimental work was aimed at developing a model test procedure for nuclear power stations and at evaluating the possible range of dynamic stresses in structures and pipelines. The results of full-scale investigations of the nuclear reactor show a good agreement between the dynamic characteristics of the model and the prototype. The study confirms the feasibility of model simulation for nuclear power plants. (auth.)

  17. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  18. Coalescing colony model: Mean-field, scaling, and geometry

    Science.gov (United States)

    Carra, Giulia; Mallick, Kirone; Barthelemy, Marc

    2017-12-01

    We analyze the coalescing model where a 'primary' colony grows and randomly emits secondary colonies that spread and eventually coalesce with it. This model describes population proliferation in theoretical ecology, tumor growth, and is also of great interest for modeling urban sprawl. Assuming the primary colony to be always circular of radius r(t) and the emission rate proportional to r(t)^θ, where θ > 0, we derive the mean-field equations governing the dynamics of the primary colony, calculate the scaling exponents versus θ, and compare our results with numerical simulations. We then critically test the validity of the circular approximation for the colony shape and show that it is sound for a constant emission rate (θ = 0). However, when the emission rate is proportional to the perimeter, the circular approximation breaks down and the roughness of the primary colony cannot be discarded, thus modifying the scaling exponents.

  19. Validity of the Neuromuscular Recovery Scale: a measurement model approach.

    Science.gov (United States)

    Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L

    2015-08-01

    Objective: To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Design: Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality. The Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Setting: Seven NeuroRecovery Network clinical sites. Participants: Outpatients (N=188) with spinal cord injury. Interventions: Not applicable. Main outcome measure: NRS. Results: While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Items not meeting the rating scale criteria appear to be related to the low frequency counts. Conclusions: The NRS met many of the Rasch model criteria for construct validity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
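For readers unfamiliar with the model referenced above, the Rasch partial-credit model (Masters, 1982) gives the probability of each rating category from a person ability θ and item step difficulties δ_j. A minimal sketch; the values below are illustrative, not actual NRS calibrations:

```python
import math

def pcm_probs(theta, deltas):
    """Category probabilities under the Rasch partial-credit model.

    theta  -- person ability in logits
    deltas -- step difficulties delta_1..delta_m for an item with m+1 categories

    P(X = x) is proportional to exp(sum_{j<=x} (theta - delta_j)), with the
    empty sum for x = 0 taken as 0 (Masters, 1982).
    """
    logits = [0.0]                       # cumulative logit for each category
    for d in deltas:
        logits.append(logits[-1] + (theta - d))
    m = max(logits)                      # subtract max for numerical stability
    expvals = [math.exp(v - m) for v in logits]
    z = sum(expvals)
    return [e / z for e in expvals]

probs = pcm_probs(theta=1.0, deltas=[-0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])  # four category probabilities summing to 1
```

Item-fit statistics and the strata separation reported in the abstract are computed from residuals between such modeled category probabilities and the observed ratings.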

  20. Bounds on isocurvature perturbations from cosmic microwave background and large scale structure data.

    Science.gov (United States)

    Crotty, Patrick; García-Bellido, Juan; Lesgourgues, Julien; Riazuelo, Alain

    2003-10-24

    We obtain very stringent bounds on the possible cold dark matter, baryon, and neutrino isocurvature contributions to the primordial fluctuations in the Universe, using recent cosmic microwave background and large scale structure data. Neglecting the possible effects of spatial curvature, tensor perturbations, and reionization, we perform a Bayesian likelihood analysis with nine free parameters, and find that the amplitude of the isocurvature component cannot be larger than about 31% for the cold dark matter mode, 91% for the baryon mode, 76% for the neutrino density mode, and 60% for the neutrino velocity mode, at 2σ, for uncorrelated models. For correlated adiabatic and isocurvature components, the fraction could be slightly larger. However, the cross-correlation coefficient is strongly constrained, and maximally correlated/anticorrelated models are disfavored. This puts strong bounds on the curvaton model.

  1. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). A multiple Boolean viewshed analysis and a global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
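Boolean visibility of the kind mentioned above reduces to a line-of-sight test against the surface model. A minimal sketch over a gridded surface model (no elevation interpolation, cell-size scaling, or earth-curvature/refraction corrections, all of which a GIS implementation would add):

```python
def visible(dem, observer, target, obs_height=1.6):
    """Boolean line-of-sight test on a square-cell elevation grid.

    dem      -- 2D list of elevations, indexed dem[row][col]
    observer -- (row, col) of the observer cell
    target   -- (row, col) of the target cell
    """
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0][c0] + obs_height                # observer eye level
    steps = max(abs(r1 - r0), abs(c1 - c0))
    if steps == 0:
        return True

    def slope_at(i):
        # slope from the eye to the i-th sample along the sight line
        r = round(r0 + (r1 - r0) * i / steps)
        c = round(c0 + (c1 - c0) * i / steps)
        return (dem[r][c] - z0) / i

    target_slope = slope_at(steps)
    # visible iff no intermediate cell rises above the line of sight
    return all(slope_at(i) <= target_slope for i in range(1, steps))

grid = [[0.0, 0.0, 5.0, 0.0, 0.0]]
print(visible(grid, (0, 0), (0, 4)))  # -> False: the 5 m wall at column 2 blocks the view
```

Extended viewsheds such as the angle-above-local-horizon measure can be built from the same loop by recording the maximum intermediate slope rather than only comparing against it.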

  2. SCALING ANALYSIS OF REPOSITORY HEAT LOAD FOR REDUCED DIMENSIONALITY MODELS

    International Nuclear Information System (INIS)

    Michael T. Itamua and Clifford K. Ho

    1998-01-01

    The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment.

  3. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    Directory of Open Access Journals (Sweden)

    D. Herckenrath

    2013-10-01

    Full Text Available Increasingly, ground-based and airborne geophysical data sets are used to inform groundwater models. Recent research focuses on establishing coupling relationships between geophysical and groundwater parameters. To fully exploit such information, this paper presents and compares different hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM and electrical resistivity tomography (ERT data. In a sequential hydrogeophysical inversion (SHI a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI. In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical relationship and its accuracy. Simulations for a synthetic groundwater model and TDEM data showed improved estimates for groundwater model parameters that were coupled to relatively well-resolved geophysical parameters when employing a high-quality petrophysical relationship. Compared to a SHI these improvements were insignificant and geophysical parameter estimates became slightly worse. When employing a low-quality petrophysical relationship, groundwater model parameters improved less for both the SHI and JHI, where the SHI performed relatively better. When comparing a SHI and JHI for a real-world groundwater model and ERT data, differences in parameter estimates were small. For both cases investigated in this paper, the SHI seems favorable, taking into account parameter error, data fit and the complexity of implementing a JHI in combination with its larger computational burden.
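The difference between the two approaches can be sketched with a deliberately tiny toy problem: one hydraulic conductivity K, a 1-D Darcy head-drop "groundwater model", and a log-linear petrophysical relation. All functional forms and numbers below are hypothetical stand-ins, not the models or data of the paper:

```python
import math

# Toy 1-parameter illustration of sequential (SHI) vs joint (JHI)
# hydrogeophysical inversion.
Q, L = 1e-4, 100.0                 # specific discharge (m/s), flow length (m)
A_P, B_P = 3.0, 1.0                # petrophysics: log10(rho) = A_P + B_P*log10(K)

def head_drop(K):                  # groundwater forward model (1-D Darcy)
    return Q * L / K

def rho_of(K):                     # geophysical parameter implied by K
    return 10 ** (A_P + B_P * math.log10(K))

K_true = 1e-3
dh_obs, rho_obs = head_drop(K_true), rho_of(K_true)   # noise-free "data"

# SHI: invert the geophysics first (here the inverted resistivity is taken to
# be rho_obs itself), then map it to a conductivity via the petrophysics
K_shi = 10 ** ((math.log10(rho_obs) - A_P) / B_P)

# JHI: estimate K by minimising both misfits simultaneously (grid search)
def jhi_misfit(K):
    return ((head_drop(K) - dh_obs) ** 2
            + (math.log10(rho_of(K)) - math.log10(rho_obs)) ** 2)

grid = [10 ** (-5 + 4 * i / 400) for i in range(401)]
K_jhi = min(grid, key=jhi_misfit)

print(K_shi, K_jhi)  # with a perfect petrophysical relation both recover K_true
```

With a perfect petrophysical relationship the two estimates coincide, mirroring the small SHI/JHI differences reported above; degrading that relation, or re-weighting the two misfit terms, is where the approaches begin to diverge.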

  4. Decision-Making and Sustainable Drainage: Design and Scale

    Directory of Open Access Journals (Sweden)

    Susanne Charlesworth

    2016-08-01

    Full Text Available Sustainable Drainage (SuDS) improves water quality, reduces runoff water quantity, increases amenity and biodiversity benefits, and can also mitigate and adapt to climate change. However, an optimal solution has to be designed to be fit for purpose. Most research concentrates on individual devices, but the focus of this paper is on a full management train, showing the scale-related decision-making process in its design with reference to the city of Coventry, a local government authority in central England. It illustrates this with a large-scale, site-specific model that identifies the SuDS devices suitable for the area, and also with a smaller-scale model designed to achieve greenfield runoff rates. A method to create a series of maps using geographical information is shown, to indicate feasible locations for SuDS devices across the local government authority area. Applying the larger-scale maps, a management train was designed for a smaller-scale regeneration site using MicroDrainage® software to control runoff at greenfield rates. The generated maps were constructed to provide initial guidance to local government on suitable SuDS at individual sites in a planning area. At all scales, the decision about which device to select was complex and influenced by a range of factors, with slightly different problems encountered. There was overall agreement between the large and small scale models.

  5. Local-Scale Simulations of Nucleate Boiling on Micrometer-Featured Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Moreno, Gilberto [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dede, Ercan M. [Toyota Research Institute of North America; Joshi, Shailesh N. [Toyota Research Institute of North America; Zhou, Feng [Toyota Research Institute of North America

    2017-07-12

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects are solved on a featured surface resolved grid. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller size cavities nucleate with higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device scale continuum models where local feature representation is not possible.

  6. Meso-scale effects of tropical deforestation in Amazonia: preparatory LBA modelling studies

    Directory of Open Access Journals (Sweden)

    A. J. Dolman

    1999-08-01

    Full Text Available As part of the preparation for the Large-Scale Biosphere Atmosphere Experiment in Amazonia, a meso-scale modelling study was executed to highlight deficiencies in the current understanding of land surface atmosphere interaction at local to sub-continental scales in the dry season. Meso-scale models were run in 1-D and 3-D mode for the area of Rondonia State, Brazil. The important conclusions are that without calibration it is difficult to model the energy partitioning of pasture; modelling that of forest is easier due to the absence of a strong moisture deficit signal. The simulation of the boundary layer above forest is good, above deforested areas (pasture) poor. The models' underestimate of the temperature of the boundary layer is likely to be caused by the neglect of the radiative effects of aerosols from biomass burning, but other factors such as a lack of sufficient entrainment at the mixed layer top in the models may also contribute. The Andes generate patterns of subsidence and gravity waves, the effects of which are felt far into the Rondonian area. The results show that the picture presented by GCM modelling studies may need to be balanced by an increased understanding of what happens at the meso-scale. The results are used to identify key measurements for the LBA atmospheric meso-scale campaign needed to improve the model simulations. Similar modelling studies are proposed for the wet season in Rondonia, when convection plays a major role. Key words. Atmospheric composition and structure (aerosols and particles; biosphere-atmosphere interactions) · Meteorology and atmospheric dynamics (mesoscale meteorology)

  7. Meso-scale effects of tropical deforestation in Amazonia: preparatory LBA modelling studies

    Directory of Open Access Journals (Sweden)

    A. J. Dolman

    Full Text Available As part of the preparation for the Large-Scale Biosphere Atmosphere Experiment in Amazonia, a meso-scale modelling study was executed to highlight deficiencies in the current understanding of land surface atmosphere interaction at local to sub-continental scales in the dry season. Meso-scale models were run in 1-D and 3-D mode for the area of Rondonia State, Brazil. The important conclusions are that without calibration it is difficult to model the energy partitioning of pasture; modelling that of forest is easier due to the absence of a strong moisture deficit signal. The simulation of the boundary layer above forest is good, above deforested areas (pasture) poor. The models' underestimate of the temperature of the boundary layer is likely to be caused by the neglect of the radiative effects of aerosols from biomass burning, but other factors such as a lack of sufficient entrainment at the mixed layer top in the models may also contribute. The Andes generate patterns of subsidence and gravity waves, the effects of which are felt far into the Rondonian area. The results show that the picture presented by GCM modelling studies may need to be balanced by an increased understanding of what happens at the meso-scale. The results are used to identify key measurements for the LBA atmospheric meso-scale campaign needed to improve the model simulations. Similar modelling studies are proposed for the wet season in Rondonia, when convection plays a major role.

    Key words. Atmospheric composition and structure (aerosols and particles; biosphere-atmosphere interactions) · Meteorology and atmospheric dynamics (mesoscale meteorology)

  8. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Full Text Available Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study.
Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
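The water-balance screens described above are simple to state: a basin is suspect if its runoff coefficient Q/P exceeds 1 (runoff larger than precipitation, e.g. from snow undercatch) or if its apparent losses P - Q exceed the potential-evaporation limit. A minimal sketch; the function name, flag strings and numbers are illustrative:

```python
def screen_basin(p_mm, q_mm, pet_mm):
    """Consistency flags for long-term mean annual basin water-balance data.

    p_mm   -- precipitation, q_mm -- discharge (as runoff depth),
    pet_mm -- potential evaporation, all in mm/yr.
    """
    flags = []
    rc = q_mm / p_mm                      # runoff coefficient Q/P
    if rc > 1.0:
        flags.append("runoff exceeds precipitation (RC > 1)")
    if p_mm - q_mm > pet_mm:              # apparent losses beyond the PET limit
        flags.append("losses exceed potential evaporation")
    return flags

print(screen_basin(p_mm=600.0, q_mm=700.0, pet_mm=500.0))   # RC > 1 flag
print(screen_basin(p_mm=900.0, q_mm=100.0, pet_mm=400.0))   # PET-limit flag
print(screen_basin(p_mm=800.0, q_mm=300.0, pet_mm=700.0))   # consistent -> []
```

Run over every basin in a dataset before calibration, such a screen separates basins whose forcing and evaluation data could in principle close the water balance from those that would feed disinformation into the model.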

  9. Anisotropic modulus stabilisation. Strings at LHC scales with micron-sized extra dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Burgess, C.P. [McMaster Univ., Hamilton (Canada). Dept. of Physics and Astronomy; Perimeter Institute for Theoretical Physics, Waterloo (Canada); Quevedo, F. [Cambridge Univ. (United Kingdom). DAMTP/CMS; Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)

    2011-04-15

    We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale, in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: (i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); (ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; (iii) a rich spectrum of string and KK states at TeV scales; and (iv) an equally rich spectrum of very light moduli exists, having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3-fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are present on K3-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions. For several scenarios we identify the low-energy spectrum and

  10. Anisotropic modulus stabilisation. Strings at LHC scales with micron-sized extra dimensions

    International Nuclear Information System (INIS)

    Cicoli, M.; Burgess, C.P.; Quevedo, F.

    2011-04-01

    We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale, in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: (i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); (ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; (iii) a rich spectrum of string and KK states at TeV scales; and (iv) an equally rich spectrum of very light moduli exists, having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3-fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are present on K3-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions. For several scenarios we identify the low-energy spectrum and

  11. Achieving scale-independent land-surface flux estimates - Application of the Multiscale Parameter Regionalization (MPR) to the Noah-MP land-surface model across the contiguous USA

    Science.gov (United States)

    Thober, S.; Mizukami, N.; Samaniego, L. E.; Attinger, S.; Clark, M. P.; Cuntz, M.

    2016-12-01

    Land-surface models use a variety of process representations to calculate terrestrial energy, water and biogeochemical fluxes. These process descriptions are usually derived from point measurements but are scaled to much larger resolutions in applications that range from about 1 km in catchment hydrology to 100 km in climate modelling. Both hydrologic and climate models are nowadays run at different spatial resolutions using the exact same land-surface representations. A fundamental criterion for the physical consistency of land-surface simulations across scales is that a flux estimated over a given area is independent of the spatial model resolution (i.e., the flux-matching criterion). The Noah-MP land-surface model considers only one soil and land cover type per model grid cell, without any representation of subgrid variability, implying weak flux-matching. A fractional approach can simulate subgrid variability, but it is computationally more demanding than using effective parameters, and in current land-surface schemes it is applied only to land cover. A promising approach to derive scale-independent parameters is the Multiscale Parameter Regionalization (MPR) technique, which consists of two steps: first, transfer functions are applied directly to high-resolution data (such as 100 m soil maps) to derive high-resolution model parameter fields, acknowledging the full subgrid variability; second, these high-resolution parameter fields are upscaled to the model resolution using appropriate upscaling operators. MPR has been shown to substantially improve the scalability of hydrologic models. Here, we apply the MPR technique to the Noah-MP land-surface model for a large sample of basins distributed across the contiguous USA. Specifically, we evaluate the flux-matching criterion for several hydrologic fluxes, such as evapotranspiration and total runoff, at scales ranging from 3 km to 48 km. We also investigate a p-norm scaling operator that goes beyond the current
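The two MPR steps described above can be sketched in a few lines. The pedotransfer rule, its coefficients, and the grid sizes below are illustrative assumptions, not the transfer functions calibrated in the study; with p = 1 the p-norm operator reduces to the arithmetic mean, while other values of p give other aggregation behaviours.

```python
import numpy as np

# Step 1: a hypothetical transfer function mapping a high-resolution soil
# attribute (sand fraction) to a model parameter (porosity).
def transfer_function(sand_fraction):
    return 0.5 - 0.25 * sand_fraction

# Step 2: upscale the high-resolution parameter field to the model grid
# with a p-norm operator (p = 1 is the arithmetic mean).
def upscale_pnorm(field, block, p=1.0):
    n = field.shape[0] // block
    coarse = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            cell = field[i * block:(i + 1) * block,
                         j * block:(j + 1) * block]
            coarse[i, j] = np.mean(np.abs(cell) ** p) ** (1.0 / p)
    return coarse

rng = np.random.default_rng(0)
sand = rng.uniform(0.0, 1.0, size=(96, 96))          # e.g. a ~100 m soil map
porosity_fine = transfer_function(sand)              # high-res parameter field
porosity_coarse = upscale_pnorm(porosity_fine, block=32, p=1.0)  # model grid
```

The flux-matching idea rests on upscaling the derived parameter field (step 2) rather than the input data, so the full subgrid variability informs every model resolution.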

  12. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DEFF Research Database (Denmark)

    King, Zachary A.; Lu, Justin; Dräger, Andreas

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repo...

  13. Monitoring strategies and scale appropriate hydrologic and biogeochemical modelling for natural resource management

    DEFF Research Database (Denmark)

    Bende-Michl, Ulrike; Volk, Martin; Harmel, Daren

    2011-01-01

    This short communication presents recommendations for developing scale-appropriate monitoring and modelling strategies to assist decision making in natural resource management (NRM). The ideas presented here were discussed in the session (S5) ‘Monitoring strategies and scale...... and communication between researchers and model developers on the one side, and natural resource managers and model users on the other, to increase knowledge of: 1) the limitations and uncertainties of current monitoring and modelling strategies, 2) scale-dependent linkages between monitoring and modelling...

  14. Aroma: a larger than life experience?

    Directory of Open Access Journals (Sweden)

    Delphine DE SWARDT

    2015-12-01

    Full Text Available Aroma is today an essential part of our diet. Often used to reinforce the initially neutral taste of industrially produced food, it is sometimes the main course, at the core of many edible products. First conceived as an accessory, it now takes the lead. From this observation, and through a review of examples from the food industry, this article puts forward the hypothesis that the aroma supplants the food (in the relation of resemblance between the original model and its representation, a matter of inculcation) and eclipses it. Potentially strong on the palate, it is a promise of intense experience. This is particularly true in the case of flavors without pre-established references. Purely abstract aromatic constructions allow greater freedom of projection and foster discursive emphasis. In these cases, taste alone, uncorrelated with the prerogatives of nutrition, becomes the support of a hyperesthesic experience.

  15. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Combustion modelling in large-scale geometries thus remains one of the open severe-accident safety issues. At present, no combustion model can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Model development must therefore focus on adapting existing approaches, or creating new ones, capable of reliably predicting the possibility of flame acceleration in geometries of this type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), in which the flame is represented as an interface separating reactants from combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussions and conclusions. (authors)

  16. Modeling Feedbacks Between Individual Human Decisions and Hydrology Using Interconnected Physical and Social Models

    Science.gov (United States)

    Murphy, J.; Lammers, R. B.; Proussevitch, A. A.; Ozik, J.; Altaweel, M.; Collier, N. T.; Alessa, L.; Kliskey, A. D.

    2014-12-01

    The global hydrological cycle intersects with human decision making at multiple scales, from dams and irrigation works to the taps in individuals' homes. Residential water consumers are commonly encouraged to conserve; these messages are heard against a background of individual values and conceptions about water quality, uses, and availability. The degree to which these values affect the larger hydrological dynamics, the way changes in those values affect the hydrological cycle through time, and the feedbacks by which water availability and quality in turn shape those values, are not well explored. To investigate this domain we employ a global-scale water balance model (WBM) coupled with a social-science-grounded agent-based model (ABM). The integration of a hydrological model with an agent-based model allows us to explore the driving factors in the dynamics of coupled human-natural systems. From the perspective of the physical hydrologist, the ABM offers a richer means of incorporating the human decisions that drive the hydrological system; from the view of the social scientist, a physically based hydrological model allows the decisions of the agents to play out against constraints faithful to the real world. We apply the interconnected models to a study of Tucson, Arizona, USA, and its role in the larger Colorado River system. Our core concept is Technology-Induced Environmental Distancing (TIED), which posits that layers of technology can insulate consumers from direct knowledge of a resource. In Tucson, multiple infrastructure and institutional layers have arguably increased the conceptual distance between individuals and their water supply, offering a test case of the TIED framework. Our coupled simulation allows us to show how the larger system transforms a resource with high temporal and spatial variability into a consumer constant, and the effects of this transformation on the regional system. We use this to explore how pricing, messaging, and
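The kind of feedback loop such a coupling implements can be sketched minimally as follows; every rule, number, and the scarcity signal here is a hypothetical stand-in, not the actual WBM/ABM of the study.

```python
# Toy coupling loop: consumer agents set demand from a perceived scarcity
# signal; a one-bucket water balance updates storage from inflow and use.
# Under TIED, infrastructure layers would damp the scarcity signal agents see.

class Agent:
    def __init__(self, base_demand):
        self.base_demand = base_demand

    def demand(self, scarcity_signal):
        # Agents conserve in proportion to the scarcity they perceive.
        return self.base_demand * (1.0 - 0.5 * scarcity_signal)

def step(storage, inflow, agents, capacity):
    scarcity = max(0.0, 1.0 - storage / capacity)
    use = sum(a.demand(scarcity) for a in agents)
    storage = min(capacity, max(0.0, storage + inflow - use))
    return storage, use

agents = [Agent(1.0) for _ in range(100)]
storage, history = 500.0, []
for inflow in [80.0, 20.0, 20.0, 20.0]:   # a highly variable supply
    storage, use = step(storage, inflow, agents, capacity=1000.0)
    history.append(use)
```

Even in this toy, the agents' damped response turns a variable supply into near-constant use, which is the "consumer constant" transformation the abstract describes.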

  17. Scale genesis and gravitational wave in a classically scale invariant extension of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Jisuke [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Yamada, Masatoshi [Department of Physics, Kyoto University,Kyoto 606-8502 (Japan); Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, 69120 Heidelberg (Germany)

    2016-12-01

    We assume that the origin of the electroweak (EW) scale is a gauge-invariant scalar-bilinear condensation in a strongly interacting non-abelian gauge sector, which is connected to the standard model via a Higgs portal coupling. The dynamical scale genesis appears as a phase transition at finite temperature, and it can produce a gravitational wave (GW) background in the early Universe. We find that the critical temperature of the scale phase transition lies above that of the EW phase transition and below a few hundred GeV, and that the transition is strongly first-order. We calculate the spectrum of the GW background and find that the scale phase transition is strong enough for the GW background to be observable by DECIGO.

  18. Quantum dynamics via Planck-scale-stepped action-carrying 'Graph Paths'

    CERN Document Server

    Chew, Geoffrey Foucar

    2003-01-01

    A divergence-free, parameter-free, path-based discrete-time quantum dynamics is designed not only to extend the achievements of general relativity and the standard particle model (via approximations at spacetime scales far above the Planck scale while far below the Hubble scale), but also to allow the tackling of hitherto inaccessible questions. "Path space" is larger than, and precursor to, the Hilbert-space basis. The wave-function-propagating paths are action-carrying structured graphs: cubic and quartic structured vertices connected by structured "fermionic" or "bosonic" "particle" and "nonparticle" arcs. A Planck-scale path step determines the gravitational constant while controlling all graph structure. The basis of the theory's (zero-rest-mass) elementary-particle Hilbert space (which includes neither gravitons nor scalar bosons) resides in particle arcs. Nonparticle arcs within a path are responsible for energy and rest mass.

  19. Development of the Artistic Supervision Model Scale (ASMS)

    Science.gov (United States)

    Kapusuzoglu, Saduman; Dilekci, Umit

    2017-01-01

    The purpose of the study is to develop the Artistic Supervision Model Scale in accordance with the perceptions of inspectors and of elementary and secondary school teachers on artistic supervision. The lack of a measuring instrument related to the artistic supervision model in the literature reveals the necessity of such a study. 290…

  20. Testing of materials and scale models for impact limiters

    International Nuclear Information System (INIS)

    Maji, A.K.; Satpathi, D.; Schryer, H.L.

    1991-01-01

    Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and the design and testing of scale models made out of these "impact limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation and compared with static test results. A scale-model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program. Additional information will be available and reported at the time of presentation of the paper

  1. A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model

    Science.gov (United States)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes); (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs); (3) a discussion of the Goddard WRF version (its developments and applications); and (4) the characteristics of the four-dimensional cloud data

  2. Y-Scaling in a simple quark model

    International Nuclear Information System (INIS)

    Kumano, S.; Moniz, E.J.

    1988-01-01

    A simple quark model is used to define a nuclear pair model, that is, two composite hadrons interacting only through quark interchange and bound in an overall potential. An "equivalent" hadron model is developed, displaying an effective hadron-hadron interaction which is strongly repulsive. We compare the effective hadron model results with the exact quark model observables in the kinematic region of large momentum transfer and small energy transfer. The nucleon response function in this y-scaling region is, within the traditional framework, sensitive to the nucleon momentum distribution at large momentum. We find a surprisingly small effect of hadron substructure. Furthermore, we find in our model that a simple parametrization of the modified hadron size in the bound state, motivated by the bound-quark momentum distribution, is not a useful way to correlate different observables

  3. Appropriate spatial scales to achieve model output uncertainty goals

    NARCIS (Netherlands)

    Booij, Martijn J.; Melching, Charles S.; Chen, Xiaohong; Chen, Yongqin; Xia, Jun; Zhang, Hailun

    2008-01-01

    Appropriate spatial scales of hydrological variables were determined using an existing methodology based on a balance in uncertainties from model inputs and parameters extended with a criterion based on a maximum model output uncertainty. The original methodology uses different relationships between

  4. Report on US-DOE/OHER Task Group on modelling and scaling

    International Nuclear Information System (INIS)

    Mewhinney, J.A.; Griffith, W.C.

    1989-01-01

    In early 1986, the DOE/OHER Task Group on Modeling and Scaling was formed. Membership of the Task Group is drawn from the staff of several laboratories funded by the United States Department of Energy, Office of Health and Environmental Research. The primary goal of the Task Group is to promote cooperation among the laboratories in analysing mammalian radiobiology studies, with emphasis on studies that used beagle dogs in lifespan experiments. To assist in defining the status of modelling and scaling of animal data, the Task Group served as the programme committee for the 26th Hanford Life Sciences Symposium, entitled Modeling for Scaling to Man, held in October 1987. This symposium had over 60 oral presentations describing current research in dosimetric, pharmacokinetic, and dose-response modelling and the scaling of results from animal studies to humans. A summary of the highlights of this symposium is presented. The Task Group is also in the process of developing recommendations for analyses of results obtained from dog lifespan studies. The goal is to provide as many comparisons as possible between these studies and to scale the results to humans, to strengthen the limited epidemiological data on human exposures to radiation. Several methods are discussed. (author)

  5. Experimental modelling at the grain scale of bedload on steep slopes

    Science.gov (United States)

    Fonstad, M. A.; Blanton, P.

    2011-12-01

    environments to likely organism locations in Scott Creek, Oregon. The scale at which macroinvertebrates and salmon sense their environment is in the centimeter to decimeter range, and we use structure-from-motion and 2D velocity modeling approaches to produce digital physical environments in which our model agents can interact. By hypothesizing rules of agent movement and interaction, the histories of digital organism interactions can produce maps of habitat preference that include both the physical habitat characteristics and the likely patterns due to organism interactions. One of the challenges in the future will be to scale these approaches up to larger areas and a more diverse set of ecosystem interactions. Validation of agent-based models also poses a challenge in river environments with diverse physical characteristics and histories. By combining agent-based and high-resolution approaches, many stream ecology and fluvial theories might be much more easily tested, such as whether or not habitat heterogeneity drives biodiversity in river systems.

  6. 9 m side drop test of scale model

    International Nuclear Information System (INIS)

    Ku, Jeong-Hoe; Chung, Seong-Hwan; Lee, Ju-Chan; Seo, Ki-Seog

    1993-01-01

    A type B(U) shipping cask has been developed at KAERI for transporting PWR spent fuel. Since the cask is to transport spent PWR fuel, it must be designed to meet all of the structural requirements specified in domestic packaging regulations and IAEA Safety Series No. 6. This paper describes the side drop testing of a one-third scale model cask. The crush and deformation of the shock-absorbing covers directly control the deceleration experienced by the cask during the 9 m side drop impact. The shock-absorbing covers greatly mitigated the inertia forces on the cask body due to the side drop impact. By comparing the side drop test with a finite element analysis, it was verified that the 1/3 scale model cask maintains its structural integrity under the side drop impact. The test and analysis results can be used as basic data to evaluate the structural integrity of the real cask. (J.P.N.)

  7. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the grid size at which the model is applied has decreased over the past 17 years: from 0.5-2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses that space and time scales are connected, and that downscaling the spatial scale would therefore also imply downscaling the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1

  8. URBAN MORPHOLOGY FOR HOUSTON TO DRIVE MODELS-3/CMAQ AT NEIGHBORHOOD SCALES

    Science.gov (United States)

    Air quality simulation models applied at various horizontal scales require different degrees of treatment in the specification of the underlying surfaces. As we model neighborhood scales (~1 km horizontal grid spacing), the representation of urban morphological structures (e....

  9. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
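The stress decomposition described above, a dissipative eddy-viscosity part plus a nondissipative nonlinear part built from the rate-of-strain tensor S and rate-of-rotation tensor W, can be illustrated with a small tensor sketch. The coefficients below are placeholders, not the model constants of the paper.

```python
import numpy as np

def sgs_stress(grad_u, nu_e=0.01, c_n=0.02):
    """Model SGS stress: -2*nu_e*S (dissipative eddy-viscosity term)
    plus c_n*(S@W - W@S) (a nondissipative nonlinear term)."""
    S = 0.5 * (grad_u + grad_u.T)            # rate of strain
    W = 0.5 * (grad_u - grad_u.T)            # rate of rotation
    dissipative = -2.0 * nu_e * S
    nondissipative = c_n * (S @ W - W @ S)   # symmetric and traceless
    return dissipative + nondissipative

# A simple shear-dominated velocity gradient as an example input.
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.2, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
tau = sgs_stress(grad_u)
```

The commutator S·W − W·S is symmetric, traceless, and contracts to zero with S, so it contributes no SGS dissipation: it only redistributes energy, which is the transport-capturing property motivating a term of this type.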

  10. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

    Full Text Available Inference of the interaction rules of animals moving in groups usually relies on an analysis of large-scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine-scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, ranging from a mean-field model where all prawns interact globally, to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours, up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large-scale behaviour of the system, but does not capture the fine-scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine-scale rules of interaction but fails to reproduce the global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data both at the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced not just by the current direction of nearby conspecifics, but also by those encountered in the recent past. Given the simplicity of prawns as a study system, our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.
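The Markovian-versus-memory comparison can be sketched with a toy BIC calculation (BIC is an asymptotic stand-in for the marginal likelihood used in full Bayesian model selection). The synthetic turn/no-turn sequence and both candidate models below are illustrative, not the prawn models of the paper.

```python
import math

# Synthetic behaviour sequence with strong persistence (a "memory" effect):
# an individual's turn (1) / no-turn (0) record over 20 steps.
data = [0] * 10 + [1] * 10

def log_lik_memoryless(seq):
    # One parameter: a constant turn probability (MLE = sample mean).
    p = sum(seq) / len(seq)
    return sum(math.log(p if x else 1 - p) for x in seq)

def log_lik_memory(seq):
    # Two parameters: P(turn | previous symbol), fitted per previous state.
    ll = 0.0
    for prev in (0, 1):
        pairs = [(a, b) for a, b in zip(seq, seq[1:]) if a == prev]
        p = sum(b for _, b in pairs) / len(pairs)
        ll += sum(math.log(p if b else 1 - p) for _, b in pairs)
    return ll

def bic(log_lik, k, n):
    return k * math.log(n) - 2.0 * log_lik   # lower is better

n = len(data) - 1                            # both models score n transitions
bic_memoryless = bic(log_lik_memoryless(data[1:]), k=1, n=n)
bic_memory = bic(log_lik_memory(data), k=2, n=n)
best = "memory" if bic_memory < bic_memoryless else "memoryless"
```

For this persistent sequence the two-parameter memory model wins despite its complexity penalty, mirroring the paper's conclusion that history-dependent models can be justified by the data.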

  11. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)
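The effect of the implicit time integration described above can be illustrated on a single oscillatory mode dy/dt = iωy: the backward (implicit) Euler amplification factor stays below one for any step size, strongly damping fast gravity-wave-like modes while leaving slow climate-relevant modes nearly untouched. This is a schematic illustration of unconditional stability, not the LSG discretisation itself.

```python
# Per-step amplification factors for dy/dt = i*omega*y.
def amplification(omega, dt):
    explicit = abs(1.0 + 1j * omega * dt)          # forward Euler
    implicit = abs(1.0 / (1.0 - 1j * omega * dt))  # backward Euler
    return explicit, implicit

fast_mode = amplification(omega=10.0, dt=1.0)  # fast gravity-wave-like mode
slow_mode = amplification(omega=0.01, dt=1.0)  # slow climate-relevant mode
# Backward Euler damps the fast mode (factor << 1) while the slow mode is
# nearly neutral; forward Euler amplifies both and blows up on the fast one.
```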

  12. Pesticide fate on catchment scale: conceptual modelling of stream CSIA data

    Science.gov (United States)

    Lutz, Stefanie R.; van der Velde, Ype; Elsayed, Omniea F.; Imfeld, Gwenaël; Lefrancq, Marie; Payraudeau, Sylvain; van Breukelen, Boris M.

    2017-10-01

    Compound-specific stable isotope analysis (CSIA) has proven beneficial in the characterization of contaminant degradation in groundwater, but it has never been used to assess pesticide transformation at the catchment scale. This study presents concentration and carbon CSIA data of the herbicides S-metolachlor and acetochlor from three locations (plot, drain, and catchment outlets) in a 47 ha agricultural catchment (Bas-Rhin, France). Herbicide concentrations at the catchment outlet were highest (62 µg L-1) in response to an intense rainfall event following herbicide application. Increases in the δ13C values of S-metolachlor and acetochlor of more than 2 ‰ during the study period indicated herbicide degradation. To assist the interpretation of these data, discharge, concentrations, and δ13C values of S-metolachlor were modelled with a conceptual mathematical model using the transport formulation by travel-time distributions. Testing of different model setups supported the assumption that degradation half-lives (DT50) increase with increasing soil depth, which can be straightforwardly implemented in conceptual models using travel-time distributions. Moreover, model calibration yielded an estimate of a field-integrated isotopic enrichment factor, as opposed to laboratory-based assessments of enrichment factors in closed systems. Finally, the Rayleigh equation commonly applied in groundwater studies was tested with our model for its potential to quantify degradation at the catchment scale. It provided conservative estimates of the extent of degradation as occurred in stream samples. However, the degradation simulated within the entire catchment largely exceeded these estimates, so they were not representative of overall degradation at the catchment scale. The conceptual modelling approach thus enabled us to upscale sample-based CSIA information on degradation to the catchment scale. Overall, this study demonstrates the benefit of combining monitoring and conceptual modelling of concentration
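The Rayleigh-equation estimate referred to above can be sketched as follows: the carbon isotope shift of the remaining pesticide gives the residual fraction f, and hence the extent of degradation B = 1 − f. The δ13C values and the enrichment factor ε below are illustrative assumptions (the field-integrated ε is precisely what the study calibrates), not data from the paper.

```python
import math

def extent_of_degradation(delta0, delta_t, epsilon):
    """delta0, delta_t: d13C (permil) at application and in the sample;
    epsilon: isotopic enrichment factor (permil, negative for a normal
    isotope effect). Rayleigh: f = ((delta_t+1000)/(delta0+1000))**(1000/eps);
    returns B = 1 - f, the degraded fraction."""
    f = ((delta_t + 1000.0) / (delta0 + 1000.0)) ** (1000.0 / epsilon)
    return 1.0 - f

# A >2 permil enrichment, as observed for S-metolachlor, with an assumed
# epsilon of -1.5 permil:
B = extent_of_degradation(delta0=-32.2, delta_t=-30.0, epsilon=-1.5)
```

Because only degraded (not merely stored or transported) pesticide shifts δ13C, such estimates are conservative for the stream sample itself, which is the point the abstract makes about catchment-wide extrapolation.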

  13. Roadmap for Scaling and Multifractals in Geosciences: still a long way to go ?

    Science.gov (United States)

    Schertzer, Daniel; Lovejoy, Shaun

    2010-05-01

    The interest in scale symmetries (scaling) in Geosciences has never lessened since the first pioneering EGS session on chaos and fractals 22 years ago. The corresponding NP activities have been steadily increasing, covering a wider and wider diversity of geophysical phenomena and range of space-time scales. Whereas interest was initially largely focused on atmospheric turbulence, rain and clouds at small scales, it has quickly broadened to much larger scales and to much wider scale ranges, to include ocean sciences, solid earth and space physics. Indeed, the scale problem being ubiquitous in Geosciences, it is indispensable to share the efforts and the resulting knowledge as much as possible. There have been numerous achievements which have followed from the exploration of larger and larger datasets with finer and finer resolutions, and from both modelling and theoretical discussions, particularly on formalisms for intermittency, anisotropy and scale symmetry, and multiple scaling (multifractals) vs. simple scaling. We are now way beyond the early pioneering but tentative attempts that used crude estimates of unique scaling exponents to bring some credence to the idea that scale symmetries are key to most nonlinear geoscience problems. Nowadays, we need to better demonstrate that scaling brings effective solutions to geosciences and therefore to society. A large part of the answer corresponds to our capacity to create much more universal and flexible tools to analyse complex and complicated systems such as the climate in a straightforward and reliable multifractal manner. Preliminary steps in this direction are already quite encouraging: they show that such approaches explain both the difficulty of classical techniques in finding trends in climate scenarios (particularly for extremes) and resolve them with the help of scaling estimators. The question of the reliability and accuracy of these methods is not trivial. After discussing these important, but rather short-term issues

  14. Exploring the link between multiscale entropy and fractal scaling behavior in near-surface wind.

    Directory of Open Access Journals (Sweden)

    Miguel Nogueira

    Full Text Available The equivalency between the power law behavior of Multiscale Entropy (MSE) and of power spectra opens a promising path for interpretation of complex time-series, which is explored here for the first time for atmospheric fields. Additionally, the present manuscript represents a new independent empirical validation of such relationship, the first one for the atmosphere. The MSE-fractal relationship is verified for synthetic fractal time-series covering the full range of exponents typically observed in the atmosphere. It is also verified for near-surface wind observations from anemometers and CFSR re-analysis product. The results show a ubiquitous β ≈ 5/3 behavior inside the inertial range. A scaling break emerges at scales around a few seconds, with a tendency towards 1/f noise. The presence, extension and fractal exponent of this intermediate range are dependent on the particular surface forcing and atmospheric conditions. MSE shows an identical picture which is consistent with the turbulent energy cascade model: viscous dissipation at the small-scale end of the inertial range works as an information sink, while at the larger (energy-containing) scales the multiple forcings in the boundary layer act as widespread information sources. Another scaling transition occurs at scales around 1-10 days, with an abrupt flattening of the spectrum. MSE shows that this transition corresponds to a maximum of the new information introduced, occurring at the time-scales of the synoptic features that dominate weather patterns. At larger scales, a scaling regime with flatter slopes emerges extending to scales larger than 1 year. MSE analysis shows that the amount of new information created decreases with increasing scale in this low-frequency regime. Additionally, in this region the energy injection is concentrated in two large energy peaks: daily and yearly time-scales. The results demonstrate that the superposition of these periodic signals does not destroy the

  15. Exploring the link between multiscale entropy and fractal scaling behavior in near-surface wind.

    Science.gov (United States)

    Nogueira, Miguel

    2017-01-01

    The equivalency between the power law behavior of Multiscale Entropy (MSE) and of power spectra opens a promising path for interpretation of complex time-series, which is explored here for the first time for atmospheric fields. Additionally, the present manuscript represents a new independent empirical validation of such relationship, the first one for the atmosphere. The MSE-fractal relationship is verified for synthetic fractal time-series covering the full range of exponents typically observed in the atmosphere. It is also verified for near-surface wind observations from anemometers and CFSR re-analysis product. The results show a ubiquitous β ≈ 5/3 behavior inside the inertial range. A scaling break emerges at scales around a few seconds, with a tendency towards 1/f noise. The presence, extension and fractal exponent of this intermediate range are dependent on the particular surface forcing and atmospheric conditions. MSE shows an identical picture which is consistent with the turbulent energy cascade model: viscous dissipation at the small-scale end of the inertial range works as an information sink, while at the larger (energy-containing) scales the multiple forcings in the boundary layer act as widespread information sources. Another scaling transition occurs at scales around 1-10 days, with an abrupt flattening of the spectrum. MSE shows that this transition corresponds to a maximum of the new information introduced, occurring at the time-scales of the synoptic features that dominate weather patterns. At larger scales, a scaling regime with flatter slopes emerges extending to scales larger than 1 year. MSE analysis shows that the amount of new information created decreases with increasing scale in this low-frequency regime. Additionally, in this region the energy injection is concentrated in two large energy peaks: daily and yearly time-scales. The results demonstrate that the superposition of these periodic signals does not destroy the underlying
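The coarse-graining-plus-entropy procedure behind MSE can be sketched as follows. This is a minimal textbook-style implementation (Chebyshev-distance sample entropy with the tolerance fixed from the original series, as in the standard MSE recipe), not the authors' code, and the white-noise input is synthetic:

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (the MSE coarse-graining step)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m, r):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points also match for m + 1 points
    (Chebyshev distance, self-matches excluded)."""
    def match_count(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
        return (np.sum(d <= r) - len(t)) / 2.0  # subtract self-matches
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales, m=2, r_factor=0.15):
    r = r_factor * np.std(x)  # tolerance fixed from the original series
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]

# For white noise, MSE decreases with scale: coarse-graining shrinks the
# variance relative to the fixed tolerance r, so fewer new patterns appear.
rng = np.random.default_rng(0)
mse = np.array(multiscale_entropy(rng.standard_normal(1500), scales=(1, 2, 4)))
```

Plotting SampEn against scale (or log scale) is what reveals the power-law regimes and scaling breaks the abstract describes.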

  16. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin; Sun, Shuyu

    2016-01-01

    scale, and the effect of fractures on each coarse scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system is resolved

  17. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...

  18. Is Parental Involvement Lower at Larger Schools?

    Science.gov (United States)

    Walsh, Patrick

    2010-01-01

    Parents who volunteer, or who lobby for improvements in school quality, are generally seen as providing a school-wide public good. If so, straightforward public-good theory predicts that free-riding will reduce average involvement at larger schools. This study uses longitudinal data to follow families over time, as their children move from middle…

  19. A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows

    International Nuclear Information System (INIS)

    Singh, Satbir; You, Donghyun

    2013-01-01

    Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► Overcomes some of the difficulties associated with eddy-viscosity closures. ► Does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming a scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global-equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations

  20. Multi-scale, multi-model assessment of projected land allocation

    Science.gov (United States)

    Vernon, C. R.; Huang, M.; Chen, M.; Calvin, K. V.; Le Page, Y.; Kraucunas, I.

    2017-12-01

    Effects of land use and land cover change (LULCC) on climate are generally classified into two scale-dependent processes: biophysical and biogeochemical. An extensive amount of research has been conducted related to the impact of each process under alternative climate change futures. However, these studies are generally focused on the impacts of a single process and fail to bridge the gap between sector-driven scale dependencies and any associated dynamics. Studies have been conducted to better understand the relationship of these processes but their respective scale has not adequately captured overall interdependencies between land surface changes and changes in other human-earth systems (e.g., energy, water, economic, etc.). There has also been considerable uncertainty surrounding land use land cover downscaling approaches due to scale dependencies. Demeter, a land use land cover downscaling and change detection model, was created to address this science gap. Demeter is an open-source model written in Python that downscales zonal land allocation projections to the gridded resolution of a user-selected spatial base layer (e.g., MODIS, NLCD, EIA CCI, etc.). Demeter was designed to be fully extensible to allow for module inheritance and replacement for custom research needs, such as flexible IO design to facilitate the coupling of Earth system models (e.g., the Accelerated Climate Modeling for Energy (ACME) and the Community Earth System Model (CESM)) to integrated assessment models (e.g., the Global Change Assessment Model (GCAM)). In this study, we first assessed the sensitivity of downscaled LULCC scenarios at multiple resolutions from Demeter to its parameters by comparing them to historical LULC change data. "Optimal" values of key parameters for each region were identified and used to downscale GCAM-based future scenarios consistent with those in the Land Use Model Intercomparison Project (LUMIP). 
Demeter-downscaled land use scenarios were then compared to the

  1. Power system models - A description of power markets and outline of market modelling in Wilmar

    DEFF Research Database (Denmark)

    Meibom, Peter; Morthorst, Poul Erik; Nielsen, Lars Henrik

    2004-01-01

    The aim of the Wilmar project is to investigate technical and economical problems related to large-scale deployment of renewable sources and to develop a modelling tool that can handle system simulations for a larger geographical region with an international power exchange. Wilmar is an abbreviation of “Wind Power Integration in Liberalised Electricity Markets”. The project was started in 2002 and is funded by the EU’s 5th Research programme on energy and environment. Risø National Laboratory is co-ordinator of the project and partners include SINTEF, Kungliga Tekniska Högskola, University of Stuttgart… A description of the power market models used in Wilmar is given in the second part, though the mathematical presentations of the models are left out of this report and will be treated in a later publication from the project.

  2. The Drell-Yan process in a non-scaling parton model

    International Nuclear Information System (INIS)

    Polkinghorne, J.C.

    1976-01-01

    The Drell-Yan process of heavy lepton pair production in hadronic collisions is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the Drell-Yan structure function exhibit a simple scale breaking behaviour closely related to the behaviour of moments of the deep inelastic structure function of the model. The extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. (Auth.)

  3. N=2→0 super no-scale models and moduli quantum stability

    Directory of Open Access Journals (Sweden)

    Costas Kounnas

    2017-06-01

    Full Text Available We consider a class of heterotic N=2→0 super no-scale Z2-orbifold models. An appropriate stringy Scherk–Schwarz supersymmetry breaking induces tree level masses to all massless bosons of the twisted hypermultiplets and therefore stabilizes all twisted moduli. At high supersymmetry breaking scale, the tachyons that occur in the N=4→0 parent theories are projected out, and no Hagedorn-like instability takes place in the N=2→0 models (for small enough marginal deformations). At low supersymmetry breaking scale, the stability of the untwisted moduli is studied at the quantum level by taking into account both untwisted and twisted contributions to the 1-loop effective potential. The latter depends on the specific branch of the gauge theory along which the background can be deformed. We derive its expression in terms of all classical marginal deformations in the pure Coulomb phase, and in some mixed Coulomb/Higgs phases. In this class of models, the super no-scale condition requires having at the massless level equal numbers of untwisted bosonic and twisted fermionic degrees of freedom. Finally, we show that N=1→0 super no-scale models are obtained by implementing a second Z2 orbifold twist on N=2→0 super no-scale Z2-orbifold models.

  4. Landslide scaling and magnitude-frequency distribution (Invited)

    Science.gov (United States)

    Stark, C. P.; Guzzetti, F.

    2009-12-01

    Landslide-driven erosion is controlled by the scale and frequency of slope failures and by the consequent fluxes of debris off the hillslopes. Here I focus on the magnitude-frequency part of the process and develop a theory of initial slope failure and debris mobilization that reproduces the heavy-tailed distributions (PDFs) observed for landslide source areas and volumes. Landslide rupture propagation is treated as a quasi-static, non-inertial process of simplified elastoplastic deformation with strain weakening; debris runout is not considered. The model tracks the stochastically evolving imbalance of frictional, cohesive, and body forces across a failing slope, and uses safety-factor concepts to convert the evolving imbalance into a series of incremental rupture growth or arrest probabilities. A single rupture is simulated with a sequence of weighted "coin tosses" with weights set by the growth probabilities. Slope failure treated in this stochastic way is a survival process that generates asymptotically power-law-tail PDFs of area and volume for rock and debris slides; predicted scaling exponents are consistent with analyses of landslide inventories. The primary control on the shape of the model PDFs is the relative importance of cohesion over friction in setting slope stability: the scaling of smaller, shallower failures, and the size of the most common landslide volumes, are the result of the low cohesion of soil and regolith, whereas the negative power-law tail scaling for larger failures is tied to the greater cohesion of bedrock. The debris budget may be dominated by small or large landslides depending on the scaling of both the PDF and of the depth-length relation. I will present new model results that confirm the hypothesis that depth-length scaling is linear. Model PDF of landslide volumes.
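The "sequence of weighted coin tosses" can be illustrated with a toy survival process in which the arrest probability decays with rupture size. The specific growth/arrest weights below are hypothetical stand-ins for the force-imbalance probabilities of the actual slope-stability model:

```python
import random

def simulate_failure(c=1.5, n_max=10**6, rng=random):
    """Grow a rupture one increment at a time, arresting with probability
    c/n at size n.  The survival function prod_{k=2}^{n}(1 - c/k) decays
    asymptotically as n**(-c), i.e. a power-law tail with exponent c.
    An illustrative sketch, not the published elastoplastic model."""
    n = 2  # start where the arrest probability c/n is below 1
    while n < n_max and rng.random() > c / n:
        n += 1
    return n

random.seed(42)
sizes = [simulate_failure() for _ in range(5000)]
# Most ruptures arrest almost immediately, while rare ones grow orders of
# magnitude larger -- the signature of a heavy-tailed size distribution.
```

The choice of a size-dependent arrest probability is what turns a simple sequence of coin tosses into a power-law generator; a constant arrest probability would give an exponential (geometric) tail instead.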

  5. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
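Technique (a) can be illustrated with a stripped-down greedy merge: local match hypotheses are taken in descending score order and added to the interpretation whenever they respect the one-to-one mapping constraint. This is a sketch of the general idea only; the scores and items are hypothetical, and the full SME greedy merge also enforces structural consistency between arguments:

```python
def greedy_merge(match_hypotheses):
    """Greedy-merge sketch: sort local match hypotheses (score, base,
    target) by score and add each to the interpretation if it does not
    conflict with a one-to-one mapping of base and target items.
    Sorting dominates here (O(n log n)); the published bound for SME's
    full greedy merge is O(n^2 log(n))."""
    mapping, used_base, used_target, score = [], set(), set(), 0.0
    for s, b, t in sorted(match_hypotheses, reverse=True):
        if b not in used_base and t not in used_target:
            mapping.append((b, t))
            used_base.add(b)
            used_target.add(t)
            score += s
    return mapping, score

# Hypothetical solar-system/atom analogy hypotheses:
mhs = [(0.9, "sun", "nucleus"), (0.8, "planet", "electron"),
       (0.5, "sun", "electron"), (0.3, "planet", "nucleus")]
mapping, score = greedy_merge(mhs)
```

The two lower-scored hypotheses are rejected because their base or target item is already mapped, which is exactly how greedy merging keeps the interpretation structurally consistent in one pass.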

  6. An Efficient Two-Scale Hybrid Embedded Fracture Model for Shale Gas Simulation

    KAUST Repository

    Amir, Sahar Z.

    2016-12-27

    The existence and state of natural and hydraulic fractures differ on a reservoir-by-reservoir or even on a well-by-well basis, making it necessary to explore how flow regimes vary with the diverse fracture-network shapes forged. Conventional Dual-Porosity Dual-Permeability (DPDP) schemes are not adequate to model such complex fracture-network systems. To overcome this difficulty, in this paper, an iterative Hybrid Embedded multiscale (two-scale) Fracture model (HEF) is applied on a derived fit-for-purpose shale gas model. The HEF model involves splitting the fracture computations into two scales: 1) the fine scale solves for the flux exchange parameter within each grid cell; 2) the coarse scale solves for the pressure applied to the domain grid cells using the flux exchange parameter computed at each grid cell on the fine scale. After that, the D-dimensional matrix pressure and the (D-1)-dimensional fracture pressure are solved as a system to apply the matrix-fracture coupling. The HEF model combines the DPDP overlapping continua concept, the DFN lower dimensional fractures concept, the HFN hierarchical fracture concept, and the CCFD model simplicity. As for the fit-for-purpose shale gas model, various fit-for-purpose shale gas models can be derived using any set of selected properties plugged into one of the most widely used models proposed in the literature, as shown in the appendix. Also, this paper shows that shale's extremely low permeability causes flow behavior to be dominated by the structure and magnitude of high-permeability fractures.

  7. An Efficient Two-Scale Hybrid Embedded Fracture Model for Shale Gas Simulation

    KAUST Repository

    Amir, Sahar Z.; Sun, Shuyu

    2016-01-01

    The existence and state of natural and hydraulic fractures differ on a reservoir-by-reservoir or even on a well-by-well basis, making it necessary to explore how flow regimes vary with the diverse fracture-network shapes forged. Conventional Dual-Porosity Dual-Permeability (DPDP) schemes are not adequate to model such complex fracture-network systems. To overcome this difficulty, in this paper, an iterative Hybrid Embedded multiscale (two-scale) Fracture model (HEF) is applied on a derived fit-for-purpose shale gas model. The HEF model involves splitting the fracture computations into two scales: 1) the fine scale solves for the flux exchange parameter within each grid cell; 2) the coarse scale solves for the pressure applied to the domain grid cells using the flux exchange parameter computed at each grid cell on the fine scale. After that, the D-dimensional matrix pressure and the (D-1)-dimensional fracture pressure are solved as a system to apply the matrix-fracture coupling. The HEF model combines the DPDP overlapping continua concept, the DFN lower dimensional fractures concept, the HFN hierarchical fracture concept, and the CCFD model simplicity. As for the fit-for-purpose shale gas model, various fit-for-purpose shale gas models can be derived using any set of selected properties plugged into one of the most widely used models proposed in the literature, as shown in the appendix. Also, this paper shows that shale's extremely low permeability causes flow behavior to be dominated by the structure and magnitude of high-permeability fractures.

  8. A two-scale roughness model for the gloss of coated paper

    Science.gov (United States)

    Elton, N. J.

    2008-08-01

    A model for gloss is developed for surfaces with two-scale random roughness where one scale lies in the wavelength region (microroughness) and the other in the geometrical optics limit (macroroughness). A number of important industrial materials such as coated and printed paper and some paints exhibit such two-scale rough surfaces. Scalar Kirchhoff theory is used to describe scattering in the wavelength region and a facet model used for roughness features much greater than the wavelength. Simple analytical expressions are presented for the gloss of surfaces with Gaussian, modified and intermediate Lorentzian distributions of surface slopes, valid for gloss at high angle of incidence. In the model, gloss depends only on refractive index, rms microroughness amplitude and the FWHM of the surface slope distribution, all of which may be obtained experimentally. Model predictions are compared with experimental results for a range of coated papers and gloss standards, and found to be in fair agreement within model limitations.
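The microroughness (wavelength-scale) ingredient of such a two-scale model is the scalar Kirchhoff attenuation of the specular beam. A minimal sketch with hypothetical roughness values, omitting the Fresnel reflectance and the macroroughness facet term:

```python
import math

def gloss_factor(sigma_nm, wavelength_nm=550.0, theta_deg=75.0):
    """Specular attenuation from rms microroughness sigma: the scalar
    Kirchhoff factor exp(-g) with g = (4*pi*sigma*cos(theta)/lambda)^2.
    Illustrative sketch of the small-scale ingredient of a two-scale
    gloss model; Fresnel reflectance and the macroroughness facet
    (slope-distribution) term are omitted, and the numbers below are
    hypothetical."""
    theta = math.radians(theta_deg)
    g = (4.0 * math.pi * sigma_nm * math.cos(theta) / wavelength_nm) ** 2
    return math.exp(-g)

# Smoother coatings retain more of the specular beam:
smooth, rough = gloss_factor(20.0), gloss_factor(80.0)
```

The cos(theta) term is why gloss is conventionally measured at high angles of incidence such as 75°: the effective roughness shrinks, keeping the Kirchhoff factor in a sensitive, measurable range.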

  9. Fluctuation scaling, Taylor's law, and crime.

    Directory of Open Access Journals (Sweden)

    Quentin S Hanley

    Full Text Available Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers and is unique in reporting on all processes contributing to the observed variance and is either robust to or exhibits signs of many types of data manipulation.

  10. Fluctuation scaling, Taylor's law, and crime.

    Science.gov (United States)

    Hanley, Quentin S; Khatun, Suniya; Yosef, Amal; Dyer, Rachel-May

    2014-01-01

    Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers and is unique in reporting on all processes contributing to the observed variance and is either robust to or exhibits signs of many types of data manipulation.
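Fitting Taylor's law to count data reduces to a log-log regression of variance on mean across spatial units. A sketch with synthetic Poisson "crime counts" (all numbers hypothetical), for which the fitted exponent should recover the α ≈ 1 behavior reported above for violence:

```python
import numpy as np

def taylor_fit(counts):
    """Fit Taylor's law, variance = a * mean**alpha, by linear regression
    of log variance on log mean across spatial units (rows = units,
    columns = repeated monthly counts)."""
    means = counts.mean(axis=1)
    variances = counts.var(axis=1, ddof=1)
    alpha, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return alpha, np.exp(log_a)

# Poisson counts have variance equal to the mean, so alpha should be
# close to 1 (60 hypothetical neighbourhoods, 48 months of reports each):
rng = np.random.default_rng(1)
lam = rng.uniform(5.0, 200.0, size=60)
counts = rng.poisson(lam[:, None], size=(60, 48))
alpha, a = taylor_fit(counts)
```

Exponents well above 1 on the same plot would then be the signature of temporal clustering that the study reports for burglary and for the regional and countrywide scales.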

  11. A model for allometric scaling of mammalian metabolism with ambient heat loss

    KAUST Repository

    Kwak, Ho Sang

    2016-02-02

    Background Allometric scaling, which describes how a biological trait or process depends on body size, is a long-standing subject in biological science. However, no previous study has considered heat loss to the ambient environment together with an insulation layer representing mammalian skin and fur in deriving the scaling law of metabolism. Methods A simple heat transfer model is proposed to analyze the allometry of mammalian metabolism. The present model extends existing studies by incorporating various external heat transfer parameters and additional insulation layers. The model equations were solved numerically and by an analytic heat balance approach. Results A general observation is that the present heat transfer model predicted the 2/3 surface scaling law, which is primarily attributed to the dependence of the surface area on the body mass. External heat transfer effects introduced deviations in the scaling law, mainly due to natural convection heat transfer which becomes more prominent at smaller mass. These deviations resulted in a slight modification of the scaling exponent to a value smaller than 2/3. Conclusion The finding that additional radiative heat loss and the consideration of an outer insulating fur layer attenuate these deviation effects and render the scaling law closer to 2/3 provides in silico evidence for a functional impact of heat transfer mode on the allometric scaling law in mammalian metabolism.
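The surface-area argument at the heart of the model can be reduced to a few lines: if the steady-state metabolic rate balances heat loss through a surface scaling as M^(2/3), the log-log slope of metabolic rate against mass is exactly 2/3. The coefficients below are hypothetical round numbers, not the paper's fitted values:

```python
import math

def metabolic_rate(mass_kg, h=10.0, delta_T=10.0, k=0.1):
    """Surface-area heat-balance sketch: at steady state the metabolic
    rate must equal heat loss, B = h * A * delta_T, with body surface
    area scaling as A = k * M**(2/3).  The heat transfer coefficient h,
    temperature difference delta_T and shape factor k are hypothetical
    round numbers."""
    area = k * mass_kg ** (2.0 / 3.0)
    return h * area * delta_T

# The log-log slope between two masses recovers the 2/3 exponent exactly:
slope = (math.log(metabolic_rate(100.0)) - math.log(metabolic_rate(1.0))) / math.log(100.0)
```

The deviations the paper reports arise when h itself becomes mass-dependent (e.g. through natural convection), which tilts this slope away from 2/3; radiation and an insulating fur layer dilute that mass dependence and pull it back.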

  12. Imprint of thawing scalar fields on the large scale galaxy overdensity

    Science.gov (United States)

    Dinda, Bikash R.; Sen, Anjan A.

    2018-04-01

    We investigate the observed galaxy power spectrum for the thawing class of scalar field models taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly the difference in background expansion results from the enhancement of power from ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in the suppression of power from ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and for smaller redshifts due to different GR effects. But on smaller scales and for larger redshifts, the difference is small and is mainly due to the difference in background expansion.

  13. Process based modelling of soil organic carbon redistribution on landscape scale

    Science.gov (United States)

    Schindewolf, Marcus; Seher, Wiebke; Amorim, Amorim S. S.; Maeso, Daniel L.; Schmidt, Jürgen

    2014-05-01

    Recent studies have pointed out the great importance of erosion processes in global carbon cycling. Continuous erosion leads to a massive loss of top soils including the loss of organic carbon accumulated over long time in the soil humus fraction. Lal (2003) estimates that 20% of the organic carbon eroded with top soils is emitted into the atmosphere, due to aggregate breakdown and carbon mineralization during transport by surface runoff. Furthermore soil erosion causes a progressive decrease of natural soil fertility, since cation exchange capacity is associated with organic colloids. As a consequence the ability of soils to accumulate organic carbon is reduced proportionately to the drop in soil productivity. The colluvial organic carbon might be protected from further degradation depending on the depth of the colluvial cover and local decomposing conditions. Some colluvial sites can act as long-term sinks for organic carbon. The erosional transport of organic carbon may have an effect on the global carbon budget; however, it is uncertain whether erosion is a sink or a source for carbon in the atmosphere. Another part of eroded soils and organic carbon will enter surface water bodies and might be transported over long distances. These sediments might be deposited in the riparian zones of river networks. Erosional losses of organic carbon will not pass over into the atmosphere for the most part. But soil erosion limits substantially the potential of soils to sequester atmospheric CO2 by generating humus. The present study refers to lateral carbon flux modelling at the landscape scale using the process-based EROSION 3D soil loss simulation model with existing parameter values. The selective nature of soil erosion results in a preferential transport of fine particles while larger, carbon-poorer particles remain on site. Consequently organic carbon is enriched in the eroded sediment compared to the original soil. For this reason it is essential that EROSION 3D provides the

  14. SME routes for innovation collaboration with larger enterprises

    DEFF Research Database (Denmark)

    Brink, Tove

    2017-01-01

    The research in this paper reveals how Small and Medium-sized Enterprises (SMEs) can contribute to industry competitiveness through collaboration with larger enterprises. The research is based on a longitudinal qualitative case study starting in 2011 with 10 SME offshore wind farm suppliers...... and follow-up interviews in 2013. The research continued with a second approach in 2014 within operation and maintenance (O&M) through focus group interviews and subsequent individual interviews with 20 enterprises and a seminar in May 2015. The findings reveal opportunities and challenges for SMEs according...... to three different routes for cooperation and collaboration with larger enterprises: demand-driven cooperation, supplier-driven cooperation and partner-driven collaboration. The SME contribution to innovation and competitiveness differs across the three routes and ranges from providing specific knowledge...

  15. Lepton Dipole Moments in Supersymmetric Low-Scale Seesaw Models

    CERN Document Server

    Ilakovac, Amon; Popov, Luka

    2014-01-01

    We study the anomalous magnetic and electric dipole moments of charged leptons in supersymmetric low-scale seesaw models with right-handed neutrino superfields. We consider a minimally extended framework of minimal supergravity, by assuming that CP violation originates from complex soft SUSY-breaking bilinear and trilinear couplings associated with the right-handed sneutrino sector. We present numerical estimates of the muon anomalous magnetic moment and the electron electric dipole moment (EDM), as functions of key model parameters, such as the Majorana mass scale mN and tan(\\beta). In particular, we find that the contributions of the singlet heavy neutrinos and sneutrinos to the electron EDM are naturally small in this model, of order 10^{-27} - 10^{-28} e cm, and can be probed in the present and future experiments.

  16. Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.

    Science.gov (United States)

    Dao, Tien Tuan

    2017-06-01

    Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of the complex mechanisms underlying biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of the present work was to develop a new, fully integrated computational workflow for simulating bone metabolic processes at multiple scales. The organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. The tissue-level model uses the finite element method to estimate tissue deformation and mechanical loading under body loading conditions. Finally, the cell-level model includes a bone remodeling mechanism through agent-based simulation under tissue loading. A case study on the bone remodeling process in the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using literature-based data at each anatomical level. Simulation outcomes fall within literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during the bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow, leading to a better understanding of musculoskeletal system function across multiple length scales, and providing new informative data for clinical decision support and industrial applications.

  17. A hysteretic model considering Stribeck effect for small-scale magnetorheological damper

    Science.gov (United States)

    Zhao, Yu-Liang; Xu, Zhao-Dong

    2018-06-01

    The magnetorheological (MR) damper is an ideal semi-active control device for vibration suppression. The mechanical properties of this type of device show strongly nonlinear characteristics, especially in small-scale dampers. Therefore, developing a model that can accurately describe the nonlinearity of such devices is crucial to control design. In this paper, the dynamic characteristics of a small-scale MR damper developed by our research group are tested, and the Stribeck effect is observed in the low-velocity region. Then, an improved model based on the sigmoid model is proposed to describe the Stribeck effect observed in the experiment. After that, the parameters of this model are identified by genetic algorithms, and the mathematical relationship between these parameters and the input current, excitation frequency and amplitude is regressed. Finally, the forces predicted by the proposed model are validated against the experimental data. The results show that the model predicts the mechanical properties of the small-scale damper well, especially the Stribeck effect in the low-velocity region.

  18. Direct Scaling of Leaf-Resolving Biophysical Models from Leaves to Canopies

    Science.gov (United States)

    Bailey, B.; Mahaffee, W.; Hernandez Ochoa, M.

    2017-12-01

    Recent advances in the development of biophysical models and high-performance computing have enabled rapid increases in the level of detail that can be represented by simulations of plant systems. However, increasingly detailed models typically require increasingly detailed inputs, which can be a challenge to accurately specify. In this work, we explore the use of terrestrial LiDAR scanning data to accurately specify geometric inputs for high-resolution biophysical models that enables direct up-scaling of leaf-level biophysical processes. Terrestrial LiDAR scans generate "clouds" of millions of points that map out the geometric structure of the area of interest. However, points alone are often not particularly useful in generating geometric model inputs, as additional data processing techniques are required to provide necessary information regarding vegetation structure. A new method was developed that directly reconstructs as many leaves as possible that are in view of the LiDAR instrument, and uses a statistical backfilling technique to ensure that the overall leaf area and orientation distribution matches that of the actual vegetation being measured. This detailed structural data is used to provide inputs for leaf-resolving models of radiation, microclimate, evapotranspiration, and photosynthesis. Model complexity is afforded by utilizing graphics processing units (GPUs), which allows for simulations that resolve scales ranging from leaves to canopies. The model system was used to explore how heterogeneity in canopy architecture at various scales affects scaling of biophysical processes from leaves to canopies.

  19. A new method for determination of most likely landslide initiation points and the evaluation of digital terrain model scale in terrain stability mapping

    Directory of Open Access Journals (Sweden)

    P. Tarolli

    2006-01-01

    This paper introduces a new approach for determining the most likely initiation points for landslides from potential instability mapped using a terrain stability model. This approach identifies the location with the critical stability index from a terrain stability model on each downslope path from ridge to valley. Any measure of terrain stability may be used with this approach, which is illustrated here using results from SINMAP, and from simply taking slope as an index of potential instability. The relative density of most likely landslide initiation points within and outside mapped landslide scars provides a way to evaluate the effectiveness of a terrain stability measure, even when mapped landslide scars include run-out zones rather than just initiation locations. This relative density was used to evaluate the utility of high-resolution terrain data derived from airborne laser altimetry (LiDAR) for a small basin located in the northeastern region of Italy. Digital terrain models were derived from the LiDAR data for a range of grid cell sizes (from 2 to 50 m). We found appreciable differences between the density of most likely landslide initiation points within and outside mapped landslides, with ratios as large as three or more, the highest ratios occurring for a digital terrain model grid cell size of 10 m. This leads to two conclusions: (1) the relative density from a most likely landslide initiation point approach is useful for quantifying the effectiveness of a terrain stability map when mapped landslides do not or cannot differentiate between initiation, runout, and depositional areas; and (2) in this study area, where landslides occurred in complexes that were sometimes more than 100 m wide, a digital terrain model scale of 10 m is optimal. Digital terrain model scales larger than 10 m result in loss of resolution that degrades the results, while for digital terrain model scales smaller than 10 m the physical processes responsible for triggering
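    The relative-density evaluation described in this abstract can be sketched as a simple ratio; the helper below is illustrative (names and numbers are not from the paper):

```python
def relative_density_ratio(points_in_scars, total_points, scar_area, total_area):
    """Density of most-likely initiation points inside mapped landslide scars,
    relative to their density outside. Illustrative helper, not the paper's code;
    areas may be in any consistent unit (here, fractions of the basin)."""
    density_in = points_in_scars / scar_area
    density_out = (total_points - points_in_scars) / (total_area - scar_area)
    return density_in / density_out

# e.g. 30 of 100 critical-stability points fall inside scars covering 5% of the basin
ratio = relative_density_ratio(30, 100, 0.05, 1.0)  # well above 1, so the map is informative
```

A ratio near 1 would mean the stability measure places candidate initiation points no more densely inside mapped scars than outside, i.e. it carries no predictive skill.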

  20. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto; Watson, James R.; Jönsson, Bror; Gasol, Josep M.; Salazar, Guillem; Acinas, Silvia G.; Estrada, Marta; Massana, Ramón; Logares, Ramiro; Giner, Caterina R.; Pernice, Massimo C.; Olivar, M. Pilar; Citores, Leire; Corell, Jon; Rodríguez-Ezpeleta, Naiara; Acuña, José Luis; Molina-Ramírez, Axayacatl; González-Gordillo, J. Ignacio; Cózar, Andrés; Martí, Elisa; Cuesta, José A.; Agusti, Susana; Fraile-Nuez, Eugenio; Duarte, Carlos M.; Irigoien, Xabier; Chust, Guillem

    2018-01-01

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.
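    The idea of estimating a dispersal scale from the decay of community similarity with ocean transit time can be illustrated with a minimal distance-decay fit. This is a sketch under simplifying assumptions (exponential decay, ordinary least squares), not the paper's actual method:

```python
import math

def dispersal_scale(transit_times, similarities):
    """Fit ln(similarity) = intercept + slope * transit_time and return the
    e-folding transit time -1/slope as a rough 'dispersal scale'.
    Illustrative only; the study's statistical treatment may differ."""
    ys = [math.log(s) for s in similarities]
    n = len(transit_times)
    mx = sum(transit_times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(transit_times, ys))
             / sum((x - mx) ** 2 for x in transit_times))
    return -1.0 / slope

# similarity decaying by 40% per 10 time units -> scale of ~20 time units
scale = dispersal_scale([0, 10, 20, 30], [1.0, 0.6, 0.36, 0.216])
```

Under this framing, the paper's finding is that large-bodied, less abundant groups show a steeper decay (shorter e-folding time) than abundant small-bodied plankton.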

  1. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto

    2018-01-04

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  2. The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.

    2004-01-01

    The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source...... (heat and smoke source) may improve the possibility of obtaining Reynolds number independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement...

  3. Local-Scale Simulations of Nucleate Boiling on Micrometer Featured Surfaces: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Moreno, Gilberto [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dede, Ercan M. [Toyota Research Institute of North America; Joshi, Shailesh N. [Toyota Research Institute of North America; Zhou, Feng [Toyota Research Institute of North America

    2017-08-03

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects, are solved on a grid that resolves the featured surface. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller cavities nucleate at higher wall superheat. This local-scale model will serve as a self-consistent connection to larger, device-scale continuum models where local feature representation is not possible.

  4. Modelling cloud effects on ozone on a regional scale : A case study

    NARCIS (Netherlands)

    Matthijsen, J.; Builtjes, P.J.H.; Meijer, E.W.; Boersen, G.

    1997-01-01

    We have investigated the influence of clouds on ozone on a regional scale (Europe) with a regional-scale photochemical dispersion model (LOTOS). The LOTOS model calculates ozone and other photo-oxidant concentrations in the lowest three km of the troposphere, using actual meteorological data and

  5. Formation and fate of marine snow: small-scale processes with large- scale implications

    Directory of Open Access Journals (Sweden)

    Thomas Kiørboe

    2001-12-01

    Marine snow aggregates are believed to be the main vehicles for vertical material transport in the ocean. However, aggregates are also sites of elevated heterotrophic activity, which may instead cause enhanced retention of aggregated material in the upper ocean. Small-scale biological-physical interactions govern the formation and fate of marine snow. Aggregates may form by physical coagulation: fluid motion causes collisions between small primary particles (e.g. phytoplankton) that may then stick together to form aggregates with enhanced sinking velocities. Bacteria may subsequently solubilize and remineralize aggregated particles. Because the solubilization rate exceeds the remineralization rate, organic solutes leak out of sinking aggregates. The leaking solutes spread by diffusion and advection and form a chemical trail in the wake of the sinking aggregate that may guide small zooplankters to the aggregate. Suspended bacteria may also exploit the elevated concentration of organic solutes in the plume. I explore these small-scale formation and degradation processes by means of models, experiments and field observations. The larger-scale implications of export vs. retention of material for the structure and functioning of pelagic food chains are discussed.

  6. Historical Carbon Dioxide Emissions Caused by Land-Use Changes are Possibly Larger than Assumed

    Science.gov (United States)

    Arneth, A.; Sitch, S.; Pongratz, J.; Stocker, B. D.; Ciais, P.; Poulter, B.; Bayer, A. D.; Bondeau, A.; Calle, L.; Chini, L. P.

    2017-01-01

    The terrestrial biosphere absorbs about 20% of fossil-fuel CO2 emissions. The overall magnitude of this sink is constrained by the difference between emissions, the rate of increase in atmospheric CO2 concentrations, and the ocean sink. However, the land sink is actually composed of two largely counteracting fluxes that are poorly quantified: fluxes from land-use change and CO2 uptake by terrestrial ecosystems. Dynamic global vegetation model simulations suggest that CO2 emissions from land-use change have been substantially underestimated because processes such as tree harvesting and land clearing from shifting cultivation have not been considered. As the overall terrestrial sink is constrained, a larger net flux as a result of land-use change implies that terrestrial uptake of CO2 is also larger, and that terrestrial ecosystems might have greater potential to sequester carbon in the future. Consequently, reforestation projects and efforts to avoid further deforestation could represent important mitigation pathways, with co-benefits for biodiversity. It is unclear whether a larger land carbon sink can be reconciled with our current understanding of terrestrial carbon cycling. Our possible underestimation of the historical residual terrestrial carbon sink adds further uncertainty to our capacity to predict the future of terrestrial carbon uptake and losses.

  7. A Coupled GCM-Cloud Resolving Modeling System, and a Regional Scale Model to Study Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo

    2007-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes responsible for the variation in global and regional climate and hydrological systems requires a coupled general circulation model (GCM) and cloud-scale model (termed a superparameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs, with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional-scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes); (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (a comparison with traditional GCMs); and (3) a discussion of the Goddard WRF version (its developments and applications).

  8. Design, construction, and evaluation of a 1:8 scale model binaural manikin.

    Science.gov (United States)

    Robinson, Philip; Xiang, Ning

    2013-03-01

    Many experiments in architectural acoustics require presenting listeners with simulations of different rooms to compare. Acoustic scale modeling is a feasible means to create accurate simulations of many rooms at reasonable cost. A critical component in a scale model room simulation is a receiver that properly emulates a human receiver. For this purpose, a scale model artificial head has been constructed and tested. This paper presents the design and construction methods used, proper equalization procedures, and measurements of its response. A headphone listening experiment examining sound externalization with various reflection conditions is presented that demonstrates its use for psycho-acoustic testing.
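    A practical consequence of working at 1:8 scale, following the standard acoustic scale-modelling relation (not stated explicitly in the abstract): wavelengths shrink with the model, so every frequency of interest scales up by the geometric factor.

```python
def model_frequency(full_scale_hz, scale_factor=8):
    """Frequency a 1:N scale model must reproduce for a given full-scale
    frequency (standard acoustic scale-modelling relation; the helper name
    and the 1:8 default are illustrative)."""
    return full_scale_hz * scale_factor

# a 1 kHz full-scale tone corresponds to 8 kHz in a 1:8 model, which is why
# scale-model receivers such as this manikin need extended high-frequency bandwidth
f_model = model_frequency(1000)
```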

  9. Fluctuation Scaling, Calibration of Dispersion, and Detection of Differences.

    Science.gov (United States)

    Holland, Rianne; Rebmann, Roman; Williams, Craig; Hanley, Quentin S

    2017-11-07

    Fluctuation scaling describes the relationship between the mean and standard deviation of a set of measurements. An example is Horwitz scaling, which has been reported from interlaboratory studies. Horwitz and similar studies have reported simple exponential and segmented scaling laws with exponents (α) typically between 0.85 (Horwitz) and 1 when not operating near a detection limit. When approaching a detection limit, the exponents change and approach an apparently Gaussian (α = 0) model. This behavior is often presented as a property of interlaboratory studies, which makes controlled replication to understand the behavior costly to perform. To assess the contribution of instrumentation to larger scale fluctuation scaling, we measured the behavior of two inductively coupled plasma atomic emission spectrometry (ICP-AES) systems, in two laboratories measuring thulium using two emission lines. The standard deviation universally increased with the uncalibrated signal, indicating the system was heteroscedastic. The response from all lines and both instruments was consistent with a single exponential dispersion model having parameters α = 1.09 and β = 0.0035. No evidence of Horwitz scaling was found, and there was no evidence of Poisson noise limiting behavior. The "Gaussian" component was a consequence of background subtraction for all lines and both instruments. The observation of a simple exponential dispersion model in the data allows for the definition of a difference detection limit (DDL) with universal applicability to systems following known dispersion. The DDL is the minimum separation between two points along a dispersion model required to claim they are different according to a particular statistical test. The DDL scales transparently with the mean and works at any location in a response function.
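    The reported dispersion model and the difference detection limit (DDL) can be sketched numerically. The α and β values below are taken from the abstract; the specific z-test form of the DDL is an illustrative assumption, since the abstract only says "a particular statistical test":

```python
import math

def dispersion_sd(mu, alpha=1.09, beta=0.0035):
    """Exponential dispersion model from the abstract: sd = beta * mu**alpha
    (alpha and beta are the fitted ICP-AES values reported there)."""
    return beta * mu ** alpha

def ddl(mu, alpha=1.09, beta=0.0035, z=1.96):
    """Sketch of a difference detection limit: the separation needed for a
    two-point z-test to distinguish two means near mu, assuming independent
    measurements with equal dispersion. Illustrative, not the paper's exact test."""
    s = dispersion_sd(mu, alpha, beta)
    return z * math.sqrt(2.0) * s

# because alpha > 0 the system is heteroscedastic: the DDL grows with the mean
```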

  10. Modelling Protein Dynamics on the Microsecond Time Scale

    DEFF Research Database (Denmark)

    Siuda, Iwona Anna

    Recent years have shown an increase in coarse-grained (CG) molecular dynamics simulations, providing structural and dynamic details of large proteins and enabling studies of self-assembly of biological materials. It is not easy to acquire such data experimentally, and access is also still limited...... in atomistic simulations. During her PhD studies, Iwona Siuda used MARTINI CG models to study the dynamics of different globular and membrane proteins. In several cases, the MARTINI model was sufficient to study conformational changes of small, purely alpha-helical proteins. However, in studies of larger......ELNEDIN was therefore proposed as part of the work. Iwona Siuda’s results from the CG simulations had biological implications that provide insights into possible mechanisms of the periplasmic leucine-binding protein, the sarco(endo)plasmic reticulum calcium pump, and several proteins from the saposin-like proteins...

  11. Scaling of Core Material in Rubble Mound Breakwater Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Z.; Troch, P.

    1999-01-01

    The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity are varying in space and time. This makes it impossible to arrive at a fully...... correct scaling. The paper presents an empirical formula for the estimation of the wave induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation can be used for the estimation of pore velocities in cores. The paper proposes...... that the diameter of the core material in models is chosen in such a way that the Froude scale law holds for a characteristic pore velocity. The characteristic pore velocity is chosen as the average velocity of a most critical area in the core with respect to porous flow. Finally the method is demonstrated...
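    The proposed scaling procedure can be sketched numerically: Froude-scale a characteristic prototype pore velocity, with the Forchheimer relation i = a·v + b·v² linking hydraulic gradient and pore velocity. The coefficients a and b below are placeholders, not the paper's empirical values:

```python
import math

def froude_scaled_velocity(v_prototype, length_scale):
    """Froude law: velocities scale with the square root of the geometric
    scale, length_scale = L_prototype / L_model (e.g. 25 for a 1:25 model)."""
    return v_prototype / math.sqrt(length_scale)

def forchheimer_velocity(i, a, b):
    """Pore velocity v recovered from hydraulic gradient i via the
    Forchheimer equation i = a*v + b*v**2 (positive root of the quadratic;
    a and b are illustrative porous-media coefficients)."""
    return (-a + math.sqrt(a * a + 4.0 * b * i)) / (2.0 * b)

# target model pore velocity for a 1:25 model and a 1 m/s prototype velocity
v_model = froude_scaled_velocity(1.0, 25)  # 0.2 m/s
```

In the paper's method, the core stone diameter of the model is then adjusted until the computed characteristic pore velocity matches this Froude-scaled target.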

  12. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars on the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing, and the costs to build them are skyrocketing. Today, defense systems are software-intensive, and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poorer than those of other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today's challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today's and near-future challenges will help to improve project performance. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for systems development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  13. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and can also potentially be used to assess habitat connectivity.

  14. Modelling of evapotranspiration at field and landscape scales. Abstract

    DEFF Research Database (Denmark)

    Overgaard, Jesper; Butts, M.B.; Rosbjerg, Dan

    2002-01-01

    observations from a nearby weather station. Detailed land-use and soil maps were used to set up the model. Leaf area index was derived from NDVI (Normalized Difference Vegetation Index) images. To validate the model at field scale the simulated evapotranspiration rates were compared to eddy...

  15. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  16. Aerosol numerical modelling at local scale

    International Nuclear Information System (INIS)

    Albriet, Bastien

    2007-01-01

    At the local scale and in urban areas, an important part of particulate pollution is due to traffic, which contributes largely to the high number concentrations observed. Two aerosol sources are mainly linked to traffic: primary emission of soot particles and secondary nanoparticle formation by nucleation. The emissions and the mechanisms leading to the formation of such a bimodal distribution are still poorly understood. In this thesis, we address this problem through numerical modelling. The Modal Aerosol Model MAM is used, coupled with two 3D codes: a CFD code (Mercure Saturne) and a CTM (Polair3D). A sensitivity analysis is performed, at the border of a road but also in the first meters of an exhaust plume, to identify the role of each process involved and the sensitivity of the different parameters used in the modelling. (author) [fr

  17. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow, and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model, to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs

  19. Scale Effects Related to Small Physical Modelling of Overtopping of Rubble Mound Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Andersen, Thomas Lykke

    2009-01-01

    By comparison of overtopping discharges recorded in prototype and small scale physical models it was demonstrated in the EU-CLASH project that small scale tests significantly underestimate smaller discharges. Deviations in overtopping are due to model and scale effects. These effects are discusse...... armour on the upper part of the slope. This effect is believed to be the main reason for the found deviations between overtopping in prototype and small scale tests....

  20. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
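
    The idea of regularizing a noisy sample covariance by pulling it toward a structured target can be sketched without the paper's full hierarchical machinery; the fixed shrinkage weight below stands in for the dependency a Bayesian model would infer from the data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate a "large p, small n" setting: 50 variables, 20 samples.
    p, n = 50, 20
    x = rng.standard_normal((n, p))

    s = np.cov(x, rowvar=False)         # sample covariance: noisy, rank-deficient
    target = np.diag(np.diag(s))        # structured shrinkage target (diagonal)

    lam = 0.5                           # shrinkage weight; a hierarchical model
                                        # would estimate this from the data
    s_shrunk = lam * target + (1 - lam) * s
    ```

    Shrinkage halves every off-diagonal entry here and restores full rank: the diagonal target is positive definite, so the convex combination is too, even though the sample covariance has rank at most n - 1 = 19.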

  1. Modeling and Validation across Scales: Parametrizing the effect of the forested landscape

    DEFF Research Database (Denmark)

    Dellwik, Ebba; Badger, Merete; Angelou, Nikolas

    be transferred into a parametrization of forests in wind models. The presentation covers three scales: the single tree, the forest edges and clearings, and the large-scale forested landscape in which the forest effects are parameterized with a roughness length. Flow modeling results and validation against...

  2. Planetary-Scale Inertio Gravity Waves in the Numerical Spectral Model

    Science.gov (United States)

    Mayr, H. G.; Mengel, J. R.; Talaat, E. R.; Porter, H. S.

    2004-01-01

    In the polar region of the upper mesosphere, horizontal wind oscillations have been observed with periods around 10 hours. Waves with such periods are generated in our Numerical Spectral Model (NSM), and they are identified as planetary-scale inertio gravity waves (IGWs). These IGWs have periods between 9 and 11 hours and appear above 60 km in the zonal mean (m = 0), as well as in zonal wavenumbers m = 1 to 4. The waves can propagate eastward and westward and have vertical wavelengths around 25 km. The amplitudes in the wind field are typically between 10 and 20 m/s and can reach 30 m/s in the westward propagating component for m = 1 at the poles. In the temperature perturbations, the wave amplitudes above 100 km are typically 5 K and as large as 10 K for m = 0 at the poles. The IGWs are intermittent but reveal systematic seasonal variations, with the largest amplitudes generally occurring in late winter and spring. In the NSM, the IGWs are generated like the planetary waves (PWs), apparently produced by instabilities that arise in the zonal mean circulation. Relative to the PWs, however, the IGWs propagate zonally with much larger velocities, so that they are not much affected by interactions with the background zonal winds. Since the IGWs can propagate through the mesosphere with little interaction except viscous dissipation, one should expect them to reach the thermosphere with significant and measurable amplitudes.

  3. Universe before Planck time: A quantum gravity model

    International Nuclear Information System (INIS)

    Padmanabhan, T.

    1983-01-01

    A model for quantum gravity can be constructed by treating the conformal degree of freedom of spacetime as a quantum variable. An isotropic, homogeneous cosmological solution in this quantum gravity model is presented. The spacetime is nonsingular for all the three possible values of three-space curvature, and agrees with the classical solution for time scales larger than the Planck time scale. A possibility of quantum fluctuations creating the matter in the universe is suggested

  4. Active Learning of Classification Models with Likert-Scale Feedback.

    Science.gov (United States)

    Xue, Yanbing; Hauskrecht, Milos

    2017-01-01

    Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
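
    A minimal sketch of the two ingredients combined above (not the paper's Bayesian expected-change criterion or incremental SVM): Likert scores re-weight the labeled examples in a plain logistic regression, and the next annotation query is the pool point whose predicted probability is closest to 0.5. All data and the weight mapping are hypothetical.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit_weighted_logistic(x, y, sample_weight, lr=0.5, steps=2000):
        """Logistic regression by gradient descent; Likert-derived weights
        down-weight labels the annotator was unsure about."""
        xb = np.hstack([x, np.ones((len(x), 1))])   # fold in a bias column
        w = np.zeros(xb.shape[1])
        for _ in range(steps):
            grad = xb.T @ (sample_weight * (sigmoid(xb @ w) - y)) / len(y)
            w -= lr * grad
        return w

    # Three annotated points with Likert scores (1 = very unsure .. 5 = certain).
    x_lab = np.array([[-2.0], [-0.5], [1.5]])
    y_lab = np.array([0.0, 0.0, 1.0])
    weights = np.array([5, 3, 5]) / 5.0

    w = fit_weighted_logistic(x_lab, y_lab, weights)

    # Query selection: the most uncertain point in the unlabeled pool.
    pool = np.linspace(-3.0, 3.0, 201).reshape(-1, 1)
    probs = sigmoid(np.hstack([pool, np.ones((201, 1))]) @ w)
    query = int(np.argmin(np.abs(probs - 0.5)))
    ```

    The queried point lies near the decision boundary between the two labeled classes, which is exactly where another annotation (ideally with a high Likert confidence) is most informative.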

  5. Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales

    International Nuclear Information System (INIS)

    Krstic, Predrag S.

    2014-01-01

    Discussion of the accuracy of AMO data to be used in plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude in energy, spatial, and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, while temporal and spatial scales go from fs to years and from nm to m and more, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an "integrated AMO science" (IAMO). The principal goal of IAMO science is to enable accurate studies of the interactions of electrons, atoms, molecules, and photons in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for a particular data set strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, data provision to the plasma modeling community is a "two-way road" as far as the accuracy of the data is concerned, requiring close interactions between the AMO and plasma modeling communities.

  6. A structural equation modelling of the academic self-concept scale

    Directory of Open Access Journals (Sweden)

    Musa Matovu

    2014-03-01

    The study aimed at validating the academic self-concept scale by Liu and Wang (2005) in measuring academic self-concept among university students. Structural equation modelling was used to validate the scale, which was composed of two subscales: academic confidence and academic effort. The study was conducted on university students, males and females, from different levels of study and faculties. The influence of academic self-concept on academic achievement was assessed; it was tested whether the hypothesised model fitted the data; the invariance of the path coefficients among the moderating variables was analysed; and it was examined whether academic confidence and academic effort measured academic self-concept. The results from the model revealed that academic self-concept influenced academic achievement and that the hypothesised model fitted the data. The results also supported the model, as the causal structure was not sensitive to gender, level of study, or faculty; hence, it is applicable to all the groups taken as moderating variables. It was also noted that academic confidence and academic effort are a measure of academic self-concept. According to the results, the academic self-concept scale by Liu and Wang (2005) was deemed adequate for collecting information about academic self-concept among university students.

  7. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    Science.gov (United States)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen on large scales while considering single-field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results, and a detailed system understanding were used to generate the regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia, a regression tree was calibrated and validated using the model data and results of excess nitrogen from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen with the regression tree model. Hence they had to be calculated and regionalized as well for the state of Thuringia. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees, the excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculating the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time and without losing the detailed knowledge from the nitrogen transport modeling. This was validated with modeling results from Fink (2004) in a catchment lying in the regionalization area. The regionalized and the modeled excess nitrogen agree at the 94% level. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures regarding load reduction in the water bodies of Thuringia to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
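
    The core of the regression-tree step (the paper uses GUIDE; CART-style implementations behave similarly) is recursive threshold splitting that minimizes the squared error of the leaf means. A depth-1 version with hypothetical precipitation/excess-nitrogen numbers:

    ```python
    import numpy as np

    def best_split(x, y):
        """Exhaustively find the threshold on one predictor that minimizes
        the summed squared error around the two leaf means
        (i.e. a depth-1 regression tree)."""
        best_t, best_sse = None, np.inf
        for t in np.unique(x)[:-1]:
            left, right = y[x <= t], y[x > t]
            sse = (((left - left.mean()) ** 2).sum()
                   + ((right - right.mean()) ** 2).sum())
            if sse < best_sse:
                best_t, best_sse = t, sse
        return best_t

    # Hypothetical training data from meso-scale model output:
    # annual precipitation (mm) vs. simulated excess nitrogen (kg N/ha).
    precip = np.array([450., 500., 560., 610., 700., 760., 820., 900.])
    excess_n = np.array([12., 13., 11., 14., 25., 27., 24., 28.])

    t = best_split(precip, excess_n)

    def predict(p):
        """Leaf mean for the side of the split that p falls on."""
        return excess_n[precip <= t].mean() if p <= t else excess_n[precip > t].mean()
    ```

    Regionalization then amounts to evaluating `predict` on macro-scale predictor maps (here, the J2000g water-balance output) rather than rerunning the detailed nitrogen model everywhere.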

  8. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    Unknown

    Multi-scale models; quasicontinuum method; finite elements.

  9. Magnetic Reconnection May Control the Ion-scale Spectral Break of Solar Wind Turbulence

    Science.gov (United States)

    Vech, Daniel; Mallet, Alfred; Klein, Kristopher G.; Kasper, Justin C.

    2018-03-01

    The power spectral density of magnetic fluctuations in the solar wind exhibits several power-law-like frequency ranges with a well-defined break between approximately 0.1 and 1 Hz in the spacecraft frame. The exact dependence of this break scale on solar wind parameters has been extensively studied but is not yet fully understood. Recent studies have suggested that reconnection may induce a break in the spectrum at a "disruption scale" λ_D, which may be larger than the fundamental ion kinetic scales, producing an unusually steep spectrum just below the break. We present a statistical investigation of the dependence of the break scale on the proton gyroradius ρ_i, ion inertial length d_i, ion sound radius ρ_s, proton-cyclotron resonance scale ρ_c, and disruption scale λ_D as a function of β_⊥i. We find that the steepest spectral indices of the dissipation range occur when β_e is in the range 0.1-1 and the break scale is only slightly larger than the ion sound scale (a situation occurring 41% of the time at 1 au), in qualitative agreement with the reconnection model. In this range, the break scale shows a remarkably good correlation with λ_D. Our findings suggest that, at least at low β_e, reconnection may play an important role in the development of the dissipation range turbulent cascade and cause unusually steep (steeper than -3) spectral indices.
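
    The kinetic scales compared in the study follow directly from bulk plasma measurements. A sketch in SI units, with order-of-magnitude 1 au inputs; note that conventions for the thermal speed differ by factors of √2, so the prefactors below are one choice among several.

    ```python
    import numpy as np

    # Physical constants (SI)
    kB = 1.380649e-23        # Boltzmann constant
    mp = 1.67262192e-27      # proton mass
    e = 1.602176634e-19      # elementary charge
    eps0 = 8.8541878128e-12  # vacuum permittivity
    c = 2.99792458e8         # speed of light

    def ion_scales(n, t_i, t_e, b):
        """Ion kinetic length scales for a proton plasma.

        n: number density (m^-3), t_i/t_e: proton/electron temperatures (K),
        b: magnetic field magnitude (T).
        """
        omega_ci = e * b / mp                        # proton cyclotron frequency
        omega_pi = np.sqrt(n * e**2 / (eps0 * mp))   # proton plasma frequency
        rho_i = np.sqrt(kB * t_i / mp) / omega_ci    # proton gyroradius
        d_i = c / omega_pi                           # ion inertial length
        rho_s = np.sqrt(kB * t_e / mp) / omega_ci    # ion sound radius
        return rho_i, d_i, rho_s

    # Typical slow solar wind near 1 au (illustrative inputs):
    rho_i, d_i, rho_s = ion_scales(n=5e6, t_i=1e5, t_e=1.2e5, b=5e-9)
    ```

    For these inputs all three scales come out in the tens-to-hundreds of km range, and the ratio (ρ_i/d_i)² recovers the ion beta, which is why the break-scale ordering changes with β_⊥i.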

  10. Device Scale Modeling of Solvent Absorption using MFIX-TFM

    Energy Technology Data Exchange (ETDEWEB)

    Carney, Janine E. [National Energy Technology Lab. (NETL), Albany, OR (United States); Finn, Justin R. [National Energy Technology Lab. (NETL), Albany, OR (United States); Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States)

    2016-10-01

    Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane), and fossil fuels account for a large majority of global CO2 emissions. That said, fossil fuels will continue to play a significant role in the generation of power for the foreseeable future. The extent to which CO2 is emitted therefore needs to be reduced; carbon capture and sequestration are necessary actions to tackle climate change. Different approaches exist for CO2 capture, including post-combustion and pre-combustion technologies, oxy-fuel combustion, and chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology. To apply CO2 technologies at commercial scale, the availability, maturity, and potential for scalability of the technology need to be considered. Solvent absorption is a proven technology, but not at the scale needed by a typical power plant. The scale-up, scale-down, and design of laboratory and commercial packed bed reactors depend heavily on specific knowledge of the two-phase pressure drop, liquid holdup, wetting efficiency, and mass transfer efficiency as functions of operating conditions. Simple scaling rules often fail to provide a proper design. Conventional reactor design modeling approaches generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity used within these approaches, none of these models resolves the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally, design would be aided by the development of predictive models based on a truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first
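
    As one concrete example of the pressure-drop correlations that packed-bed design leans on, the Ergun equation gives the dry (single-phase) pressure drop from a viscous and an inertial term. Real absorber design also needs irrigated/two-phase corrections, and all numbers below are illustrative.

    ```python
    def ergun_dp(u, d_p, eps, rho, mu, length):
        """Dry pressure drop across a packed bed via the Ergun equation (SI).

        u: superficial gas velocity (m/s), d_p: packing diameter (m),
        eps: bed void fraction, rho: gas density (kg/m^3),
        mu: gas viscosity (Pa s), length: bed height (m).
        """
        viscous = 150.0 * mu * (1 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
        inertial = 1.75 * rho * (1 - eps) * u ** 2 / (eps ** 3 * d_p)
        return (viscous + inertial) * length

    # Illustrative column: 1 m bed of 25 mm packing, air-like gas at 2 m/s.
    dp = ergun_dp(u=2.0, d_p=0.025, eps=0.95, rho=1.2, mu=1.8e-5, length=1.0)
    ```

    At these conditions the inertial term dominates, so the pressure drop grows roughly with the square of the gas velocity, one reason simple linear scaling rules break down.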

  11. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models in the literature are the eddy viscosity-type models. In these models the subgrid scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e. they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and the SGS stress tensor are aligned; and they assume that a local balance exists between the SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy viscosity-type models. SSM models, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave-number space present similar hydrodynamic features. This similarity makes it possible to relate the unresolved scales, represented by the modified cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e. they cannot ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that this drawback is due to the fact that such models do not take into account the smallest unresolved scales, where most of the dissipation of SGS turbulent energy takes place. A new scale similarity LES model that grants an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and of the SGS kinetic energy (computed by solving its balance equation).
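
    The central object of scale-similarity models, the modified Leonard term, is "filter the product, minus the product of the filtered fields," applied to the resolved velocity. A 1-D periodic sketch with a top-hat test filter (synthetic fields, not a turbulence simulation):

    ```python
    import numpy as np

    def box_filter(f, width):
        """Top-hat test filter as a periodic moving average (via FFT)."""
        kernel = np.ones(width) / width
        n = len(f)
        return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(kernel, n)))

    # Synthetic 1-D "resolved" velocity fields on a periodic grid.
    n = 256
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    rng = np.random.default_rng(2)
    u = np.sin(3 * x) + 0.3 * rng.standard_normal(n)
    v = np.cos(5 * x) + 0.3 * rng.standard_normal(n)

    # Modified Leonard term: correlation generated by scales between the
    # grid filter and the coarser test filter.
    width = 8
    L_uv = box_filter(u * v, width) - box_filter(u, width) * box_filter(v, width)
    ```

    In a scale-similarity closure the SGS stress is then taken proportional (here, aligned) to this resolved-scale tensor rather than to the strain rate, which is what allows backscatter.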

  12. Low-frequency scaling applied to stochastic finite-fault modeling

    Science.gov (United States)

    Crane, Stephen; Motazedian, Dariush

    2014-01-01

    Stochastic finite-fault modeling is an important tool for simulating moderate to large earthquakes. It has proven useful in applications that require a reliable estimation of ground motions, mostly in the spectral frequency range of 1 to 10 Hz, the range of most interest to engineers. However, since there can be little resemblance between the low-frequency spectra of large and small earthquakes, this portion can be difficult to simulate using stochastic finite-fault techniques. This paper introduces two different methods to scale low-frequency spectra for stochastic finite-fault modeling. One method multiplies the subfault source spectrum by an empirical function with three parameters: the level of scaling and the start and end frequencies of the taper. This empirical function adjusts the earthquake spectrum only between the desired frequencies, conserving seismic moment in the simulated spectra. The other method is an empirical low-frequency coefficient that is added to the subfault corner frequency. This new parameter changes the ratio between high and low frequencies. For each simulation the entire earthquake spectrum is adjusted, which may result in the seismic moment not being conserved for a simulated earthquake. These low-frequency scaling methods were used to reproduce the spectra of several earthquakes in the Pacific Earthquake Engineering Research Center (PEER) Next Generation Attenuation Models (NGA) database. There were two methods of determining the stochastic parameters of best fit for each earthquake: a general residual analysis and an earthquake-specific residual analysis. Both methods resulted in comparable values for stress drop and the low-frequency scaling parameters; however, the earthquake-specific residual analysis obtained a more accurate distribution of the averaged residuals.
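
    The first method, multiplying the subfault source spectrum by a three-parameter empirical function, can be sketched as a log-linear taper. The exact functional form used by the authors may differ; this is one plausible shape with the three named parameters.

    ```python
    import numpy as np

    def low_freq_taper(freqs, level, f1, f2):
        """Multiplicative scaling of a subfault source spectrum:
        `level` below f1, 1 above f2, log-linear in frequency between."""
        taper = np.ones_like(freqs)
        low = freqs <= f1
        mid = (freqs > f1) & (freqs < f2)
        taper[low] = level
        # interpolate in log-frequency between (f1, level) and (f2, 1)
        taper[mid] = level + (1.0 - level) * (np.log(freqs[mid] / f1)
                                              / np.log(f2 / f1))
        return taper

    freqs = np.logspace(-2, 1, 100)    # 0.01 to 10 Hz
    scaled = low_freq_taper(freqs, level=2.0, f1=0.1, f2=1.0)
    ```

    Because the function equals 1 above the end frequency of the taper, the engineering band (1-10 Hz) is untouched while the low-frequency level is adjusted, which is how seismic moment can be reconciled with the high-frequency fit.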

  13. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    Inference of the interaction rules of animals moving in groups usually relies on an analysis of large-scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine-scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large-scale behaviour of the system but does not capture the observed locality of interactions. Traditional self-propelled particle models fail to capture the fine-scale dynamics of the system. The most sophisticated model, the non-Markovian model, provides a good match to the data both at the fine scale and in terms of reproducing global dynamics, while maintaining a biologically plausible perceptual range. We conclude that prawns' movements are influenced not just by the current direction of nearby conspecifics, but also by those encountered in the recent past. Given the simplicity of prawns as a study system, our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.
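
    The model-comparison logic can be illustrated with BIC as a cheap stand-in for the Bayesian evidence the authors compute: simulate headings that depend on the previous step, then compare an i.i.d. "mean-field" description against a Markovian one. All data and parameters here are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic heading series with memory of the previous step --
    # a dependence that an i.i.d. mean-field description cannot capture.
    n = 500
    theta = np.zeros(n)
    for t in range(1, n):
        theta[t] = 0.8 * theta[t - 1] + 0.3 * rng.standard_normal()

    def gauss_loglik(resid, sigma):
        """Gaussian log-likelihood of residuals with scale sigma."""
        return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                      - resid**2 / (2 * sigma**2))

    # Model A ("mean-field"): headings i.i.d. around a global mean; 2 params.
    resid_a = theta - theta.mean()
    ll_a = gauss_loglik(resid_a, resid_a.std())

    # Model B (Markovian): heading regressed on the previous heading; 3 params.
    phi = np.polyfit(theta[:-1], theta[1:], 1)
    resid_b = theta[1:] - np.polyval(phi, theta[:-1])
    ll_b = gauss_loglik(resid_b, resid_b.std())

    bic_a = 2 * np.log(n) - 2 * ll_a
    bic_b = 3 * np.log(n - 1) - 2 * ll_b   # lower BIC = preferred model
    ```

    The BIC penalty makes the extra parameter of the Markovian model pay for itself only when the memory effect is real, which is the same trade-off the paper's Bayesian model selection adjudicates with full posteriors.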

  14. Updating of a dynamic finite element model from the Hualien scale model reactor building

    International Nuclear Information System (INIS)

    Billet, L.; Moine, P.; Lebailly, P.

    1996-08-01

    The forces occurring at the soil-structure interface of a building generally have a large influence on the way the building reacts to an earthquake. One can be tempted to characterise these forces more accurately by updating a model of the structure. However, this procedure requires an updating method suitable for dissipative models, since significant damping can be observed at the soil-structure interface of buildings. Such a method is presented here. It is based on the minimization of a mechanical energy built from the difference between eigen data calculated by the model and eigen data obtained from experimental tests on the real structure. An experimental validation of this method is then proposed on a model of the HUALIEN scale-model reactor building. This scale model, built on the HUALIEN site in TAIWAN, is devoted to the study of soil-structure interaction. The updating concerned the soil impedances, modelled by a layer of springs and viscous dampers attached to the building foundation. A good agreement was found between the eigen modes and dynamic responses calculated by the updated model and the corresponding experimental data. (authors). 12 refs., 3 figs., 4 tabs

  15. Light moduli in almost no-scale models

    International Nuclear Information System (INIS)

    Buchmueller, Wilfried; Moeller, Jan; Schmidt, Jonas

    2009-09-01

    We discuss the stabilization of the compact dimension for a class of five-dimensional orbifold supergravity models. Supersymmetry is broken by the superpotential on a boundary. Classically, the size L of the fifth dimension is undetermined, with or without supersymmetry breaking, and the effective potential is of no-scale type. The size L is fixed by quantum corrections to the Kaehler potential, the Casimir energy and Fayet-Iliopoulos (FI) terms localized at the boundaries. For an FI scale of order M_GUT, as in heterotic string compactifications with anomalous U(1) symmetries, one obtains L ∝ 1/M_GUT. A small mass is predicted for the scalar fluctuation associated with the fifth dimension, m_ρ ∼ m_3/2/(LM). (orig.)

  16. Groundwater flow analysis on local scale. Setting boundary conditions for groundwater flow analysis on site scale model in step 1

    International Nuclear Information System (INIS)

    Ohyama, Takuya; Saegusa, Hiromitsu; Onoe, Hironori

    2005-05-01

    Japan Nuclear Cycle Development Institute has been conducting a wide range of geoscientific research in order to build a foundation for multidisciplinary studies of the deep geological environment as a basis of research and development for geological disposal of nuclear wastes. Ongoing geoscientific research programs include the Regional Hydrogeological Study (RHS) project and the Mizunami Underground Research Laboratory (MIU) project in the Tono region, Gifu Prefecture. The main goal of these projects is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment at several spatial scales. The RHS project is a local-scale study for understanding the groundwater flow system from the recharge area to the discharge area. The Surface-based Investigation Phase of the MIU project is a site-scale study for understanding the groundwater flow system immediately surrounding the MIU construction site. The MIU project is being conducted using a multiphase, iterative approach. In this study, hydrogeological modeling and groundwater flow analysis at the local scale were carried out in order to set boundary conditions for the site-scale model, based on the data obtained from surface-based investigations in Step 1 of the site-scale phase of the MIU project. As a result of the study, a head distribution for setting boundary conditions of the site-scale groundwater flow model was obtained. (author)

  17. Genome-scale modeling of yeast: chronology, applications and critical perspectives.

    Science.gov (United States)

    Lopes, Helder; Rocha, Isabel

    2017-08-01

    Over the last 15 years, several genome-scale metabolic models (GSMMs) were developed for different yeast species, aiding both the elucidation of new biological processes and the shift toward a bio-based economy, through the design of in silico inspired cell factories. Here, an historical perspective of the GSMMs built over time for several yeast species is presented and the main inheritance patterns among the metabolic reconstructions are highlighted. We additionally provide a critical perspective on the overall genome-scale modeling procedure, underlining incomplete model validation and evaluation approaches and the quest for the integration of regulatory and kinetic information into yeast GSMMs. A summary of experimentally validated model-based metabolic engineering applications of yeast species is further emphasized, while the main challenges and future perspectives for the field are finally addressed. © FEMS 2017.
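
    The computational core of most GSMM applications is flux balance analysis: a linear program that maximizes a target flux subject to steady-state mass balance. A toy three-reaction sketch (real yeast reconstructions have thousands of reactions, and tools such as COBRApy wrap this same LP):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy stoichiometric matrix. Rows: internal metabolites A, B.
    # Columns: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass).
    S = np.array([[1.0, -1.0, 0.0],
                  [0.0,  1.0, -1.0]])

    bounds = [(0, 10), (0, 100), (0, 100)]   # uptake flux limited to 10 units

    # Flux balance analysis: maximize biomass flux v3 (minimize -v3)
    # at steady state S v = 0.
    res = linprog(c=[0, 0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    v = res.x
    ```

    The optimum pushes the whole pathway to the uptake limit (v = [10, 10, 10]); in a genome-scale model the same mechanism predicts growth rates and knockout phenotypes, which is where the validation concerns raised above apply.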

  18. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    Science.gov (United States)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. 
Overall, the split sample validations
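
    The Nash-Sutcliffe Efficiency used to score these validations is one line of algebra; the station values below are hypothetical.

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
        does no better than predicting the observed mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Hypothetical April 1 SWE (mm) at held-out stations vs. model predictions.
    obs = np.array([320., 410., 150., 600., 275.])
    sim = np.array([300., 430., 180., 570., 260.])
    score = nse(obs, sim)
    ```

    Because the denominator is the variance of the observations, an NSE computed on a cross-validation fold whose target range is narrower than the training range (e.g. warm-years-only) is penalized, which is one reason the targeted two-fold validations above score lower than the leave-one-out case.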

  19. ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.

    Science.gov (United States)

    Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng

    2017-08-30

    While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.

  20. Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model

    Science.gov (United States)

    Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.

    2012-12-01

    The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood feeding triatomine insect vectors. Our objectives are: (1) advance knowledge on the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. This funding is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola (New Orleans)) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, multi-educational groups (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects and enables modelers to simulate and visualize topics that would either be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that
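
    A STELLA stock/flow diagram of this kind maps directly onto a pair of ODEs. Below is a minimal Ross-Macdonald-style host-vector sketch integrated by Euler stepping, the same scheme STELLA uses by default; all parameter values are purely illustrative and are not fitted to Chagas data (where transmission is mostly stercorarian rather than by the bite itself).

    ```python
    def simulate_vector_host(days=2000, dt=1.0):
        """Euler integration of a minimal host-vector stock/flow model:
        x = fraction of infected hosts, z = fraction of infected vectors.
        All rates are hypothetical, per-day values."""
        m, a = 5.0, 0.05      # vectors per host; bites per vector per day
        b, c = 0.01, 0.3      # host/vector infection probabilities per bite
        r, g = 0.0005, 0.01   # host recovery rate; vector mortality rate
        x, z = 0.01, 0.0      # initial infected fractions (stocks)
        for _ in range(int(days / dt)):
            dx = m * a * b * z * (1 - x) - r * x     # inflow - outflow for hosts
            dz = a * c * x * (1 - z) - g * z         # inflow - outflow for vectors
            x, z = x + dt * dx, z + dt * dz
        return x, z

    x_end, z_end = simulate_vector_host()
    ```

    With these rates the basic reproduction number exceeds one, so both stocks climb from the small seed toward an endemic equilibrium, the kind of behavior-over-time output students inspect in STELLA before adding spatial structure at the homestead, village, and regional scales.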