WorldWideScience

Sample records for representative scaled-down modelling

  1. Design of scaled down structural models

    Science.gov (United States)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full-scale and large-component testing is a necessary, time-consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled-down models in testing, with the model test results used to predict the behavior of the larger system, referred to herein as the prototype. This viewgraph presentation provides the justification and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled-down model and its prototype; thus, scaled-down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. The establishment of similarity conditions, based on the direct use of the governing equations, is discussed, and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of the vibrational response of the same rectangular plates. Extensions and future tasks are also described.
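    For geometrically scaled plates of the same material and layup, similarity conditions of this kind reduce to simple power-law scale factors. The sketch below illustrates the idea with the standard exponents from classical laminated plate theory; the function name and the quarter-scale example are hypothetical, not taken from the presentation.

```python
# Scale factors relating a geometrically scaled plate model to its
# prototype, assuming identical material and ply layup at both scales
# (classical laminated plate theory; the exponents are standard results,
# not this presentation's specific derivation).

def plate_scale_factors(lam):
    """lam = L_model / L_prototype (geometric scale factor)."""
    return {
        # Buckling load per unit width: N_cr ~ D / b^2. With thickness
        # scaled by lam, bending stiffness D ~ t^3 ~ lam^3, so
        # N_cr ~ lam^3 / lam^2 = lam.
        "buckling_load_per_width": lam,
        # Natural frequency: omega ~ sqrt(D / (rho * t)) / b^2 ~ 1 / lam.
        "natural_frequency": 1.0 / lam,
        # Deflection under similarity-scaled loading scales geometrically.
        "deflection": lam,
    }

factors = plate_scale_factors(0.25)  # a quarter-scale model
# Model buckling loads per width are 1/4 of the prototype values, while
# model natural frequencies are 4x the prototype's.
```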

  2. Performance prediction of industrial centrifuges using scale-down models.

    Science.gov (United States)

    Boychyn, M; Yim, S S S; Bulmer, M; More, J; Bracewell, D G; Hoare, M

    2004-12-01

    Computational fluid dynamics was used to model the high flow forces found in the feed zone of a multichamber-bowl centrifuge and to reproduce these in a small, high-speed rotating disc device. Linking the device to scale-down centrifugation permitted good estimation of the performance of various continuous-flow centrifuges (disc stack, multichamber bowl, CARR Powerfuge) for shear-sensitive protein precipitates. Critically, the ultra scale-down centrifugation process proved to be a much more accurate predictor of production multichamber-bowl performance than was the pilot centrifuge.
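    Performance comparisons between laboratory and production centrifuges of this kind are commonly made through equivalent settling-area (Sigma) theory, matching Q/Σ between scales. A minimal sketch under that assumption follows; the disc-stack Σ formula is the standard one, while all numbers and function names are illustrative and are not values from this study.

```python
import math

def sigma_disc_stack(n_discs, omega, r_outer, r_inner, theta_rad, g=9.81):
    """Equivalent settling area Sigma (m^2) of a disc-stack centrifuge,
    using the standard formula; all inputs in SI units (omega in rad/s,
    theta is the disc half-angle)."""
    return (2.0 * math.pi * n_discs * omega**2
            * (r_outer**3 - r_inner**3)
            / (3.0 * g * math.tan(theta_rad)))

def equivalent_flow(q_lab, sigma_lab, sigma_plant):
    """Predict the plant flow rate giving the same clarification as the
    lab run, via Q_lab / Sigma_lab = Q_plant / Sigma_plant."""
    return q_lab * sigma_plant / sigma_lab

# Illustrative use: a lab device with Sigma = 50 m^2 run at 1 mL/s
# maps to a plant machine with Sigma = 5e4 m^2 run at 1 L/s.
q_plant = equivalent_flow(q_lab=1e-6, sigma_lab=50.0, sigma_plant=5e4)
```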

  3. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    Science.gov (United States)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. 
Overall, the split sample validations
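    The Nash-Sutcliffe Efficiency quoted above has a simple closed form: one minus the ratio of the model's squared error to the variance of the observations about their mean. A minimal sketch (a hypothetical helper, not the authors' code):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 - SSE / SS_about_mean.
    1.0 is a perfect fit; 0.0 means the model is no better than
    predicting the observed mean everywhere."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_about_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_about_mean

# A perfect simulation scores 1.0; predicting the mean scores 0.0.
nse_perfect = nse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # 1.0
nse_mean = nse([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])     # 0.0
```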

  4. An industrial perspective on bioreactor scale-down: what we can learn from combined large-scale bioprocess and model fluid studies.

    Science.gov (United States)

    Noorman, Henk

    2011-08-01

    For industrial bioreactor design, operation, control, and optimization, the scale-down approach is often advocated to efficiently generate data on a small scale and effectively apply suggested improvements to the industrial scale. In all cases it is important to ensure that the scale-down conditions are representative of the real large-scale bioprocess. Progress is hampered by limited detailed and local information from large-scale bioprocesses. Complementary to real fermentation studies, physical aspects of model fluids such as air-water in large bioreactors provide useful information with limited effort and cost. Still, in industrial practice, investments of time, capital, and resources often prohibit systematic work, although, in the end, savings obtained in this way are trivial compared to the expenses that result from real process disturbances, batch failures, and non-flyers with loss of business opportunity. Here we try to highlight what can be learned from real large-scale bioprocesses in combination with model fluid studies, and to provide suitable computational tools to overcome data restrictions. The focus is on a specific well-documented case for a 30-m³ bioreactor. Areas for further research from an industrial perspective are also indicated. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il

    1992-02-01

    A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of an SB-LOCA is divided into two phases on the basis of the pressure trend: a depressurization phase and a pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase, the most important phenomena influencing the critical parameters are identified, and the scaling parameters governing these phenomena are generated by the present method. To validate the model used, the Marviken CFT and a 336-rod bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but they agree at least qualitatively with the experimental results. To validate whether the scaled-down model represents the important phenomena well, we simulate the nondimensional pressure response of a cold-leg 4-inch break transient for AP-600 and for the scaled-down model. The results of the present method are in excellent agreement with those for AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR.

  6. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    Science.gov (United States)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. 
Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone.

  7. CFD aided analysis of a scaled down model of the Brazilian Multipurpose Reactor (RMB) pool

    International Nuclear Information System (INIS)

    Schweizer, Fernando L.A.; Lima, Claubia P.B.; Costa, Antonella L.; Veloso, Maria A.F.

    2013-01-01

    Research reactors are commonly built inside deep pools that provide radiological and thermal protection and easy access to the core. Reactors with thermal power on the order of MW usually use an auxiliary thermal-hydraulic circuit at the top of the pool to create a purified hot water layer (HWL). Thermal-hydraulic analysis of the flow configuration in the pool and HWL is paramount to ensure radiological protection. A useful tool for these analyses is CFD (Computational Fluid Dynamics). To obtain satisfactory results using CFD, verification and validation of the CFD numerical model are necessary. Verification is divided into code verification and solution verification: the first establishes the correctness of the CFD code implementation, while the second estimates the numerical accuracy of a particular calculation. Validation is performed through comparison of numerical and experimental results. This paper presents a dimensional analysis of the RMB (Brazilian Multipurpose Reactor) pool to determine a scaled-down experimental installation able to aid in the HWL numerical investigation. Two CFD models were created: one with the same dimensions and boundary conditions as the reactor prototype, and the other at 1/10 scale with boundary conditions set to achieve the same ratio of inertial to buoyancy forces, represented by the Froude number, in the two models. Results comparing the HWL thickness show consistency between the prototype and the scaled-down model behavior. (author)
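    Matching the Froude number between the prototype and a 1/10 model fixes how velocities, times, and flow rates must scale. A small sketch of those standard relations follows; the function and the example inlet flow are illustrative, not the actual RMB boundary conditions.

```python
import math

def froude_scaling(length_ratio):
    """Scale factors for a Froude-similar model, Fr = U / sqrt(g * L).
    With the same g at both scales, matching Fr gives U ~ sqrt(L)."""
    lr = length_ratio
    return {
        "velocity": math.sqrt(lr),     # U_m / U_p = sqrt(L_m / L_p)
        "time": math.sqrt(lr),         # T = L / U ~ L / sqrt(L) = sqrt(L)
        "volumetric_flow": lr ** 2.5,  # Q = U * A ~ sqrt(L) * L^2
    }

s = froude_scaling(1.0 / 10.0)
# A hypothetical prototype inlet flow of 100 m3/h would be set to
# 100 * s["volumetric_flow"] m3/h in the 1/10 model.
```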

  8. Performance Assessment of Turbulence Models for the Prediction of the Reactor Internal Flow in the Scale-down APR+

    International Nuclear Information System (INIS)

    Lee, Gonghee; Bang, Youngseok; Woo, Swengwoong; Kim, Dohyeong; Kang, Minku

    2013-01-01

    The errors in a CFD simulation can be divided into two main categories: numerical errors and model errors. The turbulence model is one of the important sources of model error. Complex thermal-hydraulic characteristics exist inside the reactor because the reactor internals consist of the fuel assemblies, control rod assemblies, and internal structures. Either flow distribution tests on scaled-down reactor models or computational fluid dynamics (CFD) simulations have been conducted to understand these complex thermal-hydraulic features. In this study, in order to assess the prediction performance of Reynolds-averaged Navier-Stokes (RANS)-based two-equation turbulence models for the analysis of the flow distribution inside a 1/5 scale-down APR+, simulations were conducted with the commercial CFD software ANSYS CFX V.14. Both the standard k-ε model and the SST model predicted similar flow patterns inside the reactor, so it was concluded that the prediction performance of the two turbulence models was nearly the same.

  9. Representing macropore flow at the catchment scale: a comparative modeling study

    Science.gov (United States)

    Liu, D.; Li, H. Y.; Tian, F.; Leung, L. R.

    2017-12-01

    Macropore flow is an important hydrological process that generally enhances the soil infiltration capacity and the velocity of subsurface water. To date, macropore flow has mostly been simulated with high-resolution models. One possible drawback of this modeling approach is the difficulty of effectively representing the overall topology and connectivity of the macropore networks. We hypothesize that modeling macropore flow directly at the catchment scale may be complementary to the existing modeling strategy and offer new insights. The Tsinghua Representative Elementary Watershed model (THREW) is a semi-distributed hydrology model whose fundamental building blocks are representative elementary watersheds (REWs) linked by the river channel network. In THREW, all the hydrological processes are described with constitutive relationships established directly at the REW level, i.e., the catchment scale. In this study, a constitutive relationship for macropore flow drainage is established as part of THREW. The enhanced THREW model is then applied to two catchments with deep soils but distinct climates: the humid Asu catchment in the Amazon River basin and the arid Wei catchment in the Yellow River basin. The Asu catchment has an area of 12.43 km² with a mean annual precipitation of 2442 mm. The larger Wei catchment has an area of 24,800 km² but a mean annual precipitation of only 512 mm. The rainfall-runoff processes are simulated at an hourly time step from 2002 to 2005 in the Asu catchment and from 2001 to 2012 in the Wei catchment. The role of macropore flow in catchment hydrology is analyzed comparatively over the Asu and Wei catchments against observed streamflow, evapotranspiration, and other auxiliary data.
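    A catchment-scale constitutive relationship for macropore drainage can take a very simple form, such as threshold-activated fast drainage of the soil-water store. The sketch below is purely illustrative of such a relationship; it is not the form actually implemented in THREW, and all parameter values are hypothetical.

```python
def macropore_drainage(storage, threshold, k_mac):
    """Illustrative catchment-scale macropore flux: fast linear drainage
    of the soil-water storage (mm) above an activation threshold (mm),
    with rate coefficient k_mac (1/h). Returns a flux in mm/h."""
    return k_mac * max(0.0, storage - threshold)

# Macropores stay inactive below the threshold and drain quickly above it:
q_dry = macropore_drainage(storage=50.0, threshold=80.0, k_mac=0.3)   # 0.0
q_wet = macropore_drainage(storage=120.0, threshold=80.0, k_mac=0.3)  # 12.0
```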

  10. Scaling down

    Directory of Open Access Journals (Sweden)

    Ronald L Breiger

    2015-11-01

    While “scaling up” is a lively topic in network science and Big Data analysis today, my purpose in this essay is to articulate an alternative problem, that of “scaling down,” which I believe will also require increased attention in coming years. “Scaling down” is the problem of how macro-level features of Big Data affect, shape, and evoke lower-level features and processes. I identify four aspects of this problem: the extent to which findings from studies of Facebook and other Big-Data platforms apply to human behavior at the scale of church suppers and department politics where we spend much of our lives; the extent to which the mathematics of scaling might be consistent with behavioral principles, moving beyond a “universal” theory of networks to the study of variation within and between networks; and how a large social field, including its history and culture, shapes the typical representations, interactions, and strategies at local levels in a text or social network.

  11. Prelude to rational scale-up of penicillin production: a scale-down study.

    Science.gov (United States)

    Wang, Guan; Chu, Ju; Noorman, Henk; Xia, Jianye; Tang, Wenjun; Zhuang, Yingping; Zhang, Siliang

    2014-03-01

    Penicillin is one of the best known pharmaceuticals and is also an important member of the β-lactam antibiotics. Over the years, ambitious yields, titers, productivities, and low costs in the production of the β-lactam antibiotics have been stepwise realized through successive rounds of strain improvement and process optimization. Penicillium chrysogenum was proven to be an ideal cell factory for the production of penicillin, and successful approaches were exploited to elevate the production titer. However, the industrial production of penicillin faces the serious challenge that environmental gradients, which are caused by insufficient mixing and mass transfer limitations, exert a considerably negative impact on the ultimate productivity and yield. Scale-down studies regarding diverse environmental gradients have been carried out on bacteria, yeasts, and filamentous fungi as well as animal cells. Accordingly, a variety of scale-down devices combined with fast sampling and quenching protocols have been established to acquire true snapshots of the perturbed cellular conditions. The perturbed metabolome information stemming from scale-down studies has contributed to the comprehension of the production process and the identification of improvement approaches. However, little is known about the influence of the flow field and the mechanisms of intracellular metabolism. Consequently, it is still rather difficult to realize a fully rational scale-up. In the future, developing a computer framework to simulate the flow field of large-scale fermenters is highly recommended. Furthermore, a metabolically structured kinetic model directly related to the production of penicillin will be further coupled to the fluid flow dynamics. A mathematical model including the information from both computational fluid dynamics and chemical reaction dynamics will then be established for the prediction of detailed information over the entire period of the fermentation process and

  12. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    Science.gov (United States)

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench-scale bioreactors have been the system of choice. Due to the need to test different process conditions for multiple process parameters, process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system, viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a prerequisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques, which showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between the ambr and manufacturing scales, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study, and product quality results were generated. Upon comparison with DoE data from the bench-scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench-scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench-scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.
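    Matching volumetric sparge rates across scales means holding the gas flow per liquid volume (vvm) constant, which is part of what makes pCO2 profiles comparable between a 15 mL and a 15,000 L vessel. A minimal sketch; the 0.05 vvm value is hypothetical, not a figure from the study.

```python
def sparge_flow(vvm, working_volume_l):
    """Gas flow (L/min) needed for a given vvm (gas volumes per liquid
    volume per minute) and working volume (L)."""
    return vvm * working_volume_l

# The same hypothetical 0.05 vvm at ambr (15 mL) and manufacturing
# (15,000 L) scale gives very different absolute gas flows:
q_ambr = sparge_flow(0.05, 0.015)     # L/min at the 15 mL scale
q_plant = sparge_flow(0.05, 15000.0)  # L/min at the 15,000 L scale
```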

  13. Climatic and physiographic controls on catchment-scale nitrate loss at different spatial scales: insights from a top-down model development approach

    Science.gov (United States)

    Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe

    2017-04-01

    The dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in the impairment of water quality in groundwater and surface water, causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from the landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek the appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to identify the dominant process controls that contribute to the nitrate response at the scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW), located in Ontario. Such a multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of the models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can support decision-making associated with nutrient management at the regional scale.

  14. Bioclim Deliverable D8b: development of the physical/statistical down-scaling methodology and application to climate model Climber for BIOCLIM Work-package 3

    International Nuclear Information System (INIS)

    2003-01-01

    The overall aim of BIOCLIM is to assess the possible long-term impacts of climate change on the safety of radioactive waste repositories in deep formations. The main aim of this deliverable is to provide time series of climatic variables at the high resolution needed by performance assessment (PA) of radioactive waste repositories, on the basis of coarse output from the CLIMBER-GREMLINS climate model. The climatological variables studied here are long-term (monthly) mean temperature and precipitation, as these are the main variables of interest for performance assessment. CLIMBER-GREMLINS is an earth-system model of intermediate complexity (EMIC), designed for long climate simulations (glacial cycles). Thus, this model has a coarse resolution (about 50 degrees in longitude) and other limitations, which are sketched in this report. For the purpose of performance assessment, the climatological variables are required at scales pertinent to the conditions at the repository site. In this work, the final resolution is that of the best available global gridded present-day climatology, which is 1/6 degree in both longitude and latitude. To obtain climate-change information at this high resolution on the basis of the climate model outputs, a two-step down-scaling method is designed. First, physical considerations are used to define variables that are expected to have links with climatological values; second, a statistical model is used to find the links between these variables and the high-resolution climatology of temperature and precipitation. The method is thus termed 'physical/statistical': it involves physically based assumptions to compute predictors from model variables and then relies on statistics to find empirical links between these predictors and the climatology.
The simple connection of coarse model results to regional values cannot be done in a purely empirical way because the model does not provide enough information - it is both
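    The statistical half of such a physical/statistical method amounts to regressing the high-resolution climatology on physically motivated predictors and then applying the fitted link to the model's climate states. The sketch below uses ordinary least squares with a coarse-model temperature and station elevation as predictors; all data are synthetic (built from a fixed lapse rate), purely to show the structure, and are not BIOCLIM's.

```python
# Least-squares fit of local temperature on a coarse-model temperature
# and station elevation, then application of the fitted link to a new
# climate state - the basic structure of one statistical down-scaling step.
import numpy as np

# Synthetic training data: coarse-model T (degC), station elevation (m),
# and an "observed" high-resolution climatological T built from a fixed
# lapse rate of 6.5 degC/km plus an offset.
coarse_t = np.array([10.0, 12.0, 9.0, 11.0, 8.0])
elev = np.array([200.0, 50.0, 600.0, 150.0, 900.0])
obs_t = coarse_t - 0.0065 * elev + 1.0

# Fit obs_t ~ b0 + b1 * coarse_t + b2 * elev by ordinary least squares.
X = np.column_stack([np.ones_like(coarse_t), coarse_t, elev])
beta, *_ = np.linalg.lstsq(X, obs_t, rcond=None)

# Apply the fitted link to a changed climate state (coarse T = 14 degC)
# at a 300 m site:
t_local = beta[0] + beta[1] * 14.0 + beta[2] * 300.0
```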

  15. Bioclim Deliverable D6b: application of statistical down-scaling within the BIOCLIM hierarchical strategy: methods, data requirements and underlying assumptions

    International Nuclear Information System (INIS)

    2004-01-01

    The overall aim of BIOCLIM is to assess the possible long term impacts due to climate change on the safety of radioactive waste repositories in deep formations. The coarse spatial scale of the Earth-system Models of Intermediate Complexity (EMICs) used in BIOCLIM compared with the BIOCLIM study regions and the needs of performance assessment creates a need for down-scaling. Most of the developmental work on down-scaling methodologies undertaken by the international research community has focused on down-scaling from the general circulation model (GCM) scale (with a typical spatial resolution of 400 km by 400 km over Europe in the current generation of models) using dynamical down-scaling (i.e., regional climate models (RCMs), which typically have a spatial resolution of 50 km by 50 km for models whose domain covers the European region) or statistical methods (which can provide information at the point or station scale) in order to construct scenarios of anthropogenic climate change up to 2100. Dynamical down-scaling (with the MAR RCM) is used in BIOCLIM WP2 to down-scale from the GCM (i.e., IPSL_CM4_D) scale. In the original BIOCLIM description of work, it was proposed that UEA would apply statistical down-scaling to IPSL_CM4_D output in WP2 as part of the hierarchical strategy. Statistical down-scaling requires the identification of statistical relationships between the observed large-scale and regional/local climate, which are then applied to large-scale GCM output, on the assumption that these relationships remain valid in the future (the assumption of stationarity). Thus it was proposed that UEA would investigate the extent to which it is possible to apply relationships between the present-day large-scale and regional/local climate to the relatively extreme conditions of the BIOCLIM WP2 snapshot simulations. Potential statistical down-scaling methodologies were identified from previous work performed at UEA. Appropriate station data from the case

  16. Top-down models in biology: explanation and control of complex living systems above the molecular level.

    Science.gov (United States)

    Pezzulo, Giovanni; Levin, Michael

    2016-11-01

    It is widely assumed in developmental biology and bioengineering that optimal understanding and control of complex living systems follows from models of molecular events. The success of reductionism has overshadowed attempts at top-down models and control policies in biological systems. However, other fields, including physics, engineering and neuroscience, have successfully used the explanations and models at higher levels of organization, including least-action principles in physics and control-theoretic models in computational neuroscience. Exploiting the dynamic regulation of pattern formation in embryogenesis and regeneration requires new approaches to understand how cells cooperate towards large-scale anatomical goal states. Here, we argue that top-down models of pattern homeostasis serve as proof of principle for extending the current paradigm beyond emergence and molecule-level rules. We define top-down control in a biological context, discuss the examples of how cognitive neuroscience and physics exploit these strategies, and illustrate areas in which they may offer significant advantages as complements to the mainstream paradigm. By targeting system controls at multiple levels of organization and demystifying goal-directed (cybernetic) processes, top-down strategies represent a roadmap for using the deep insights of other fields for transformative advances in regenerative medicine and systems bioengineering. © 2016 The Author(s).

  17. A scaling study of the natural circulation flow of the ex-vessel core catcher cooling system of a 1400MW PWR for designing a scale-down test facility

    International Nuclear Information System (INIS)

    Rhee, Bo. W.; Ha, K. S.; Park, R. J.; Song, J. H.

    2012-01-01

    A scaling study on the steady-state natural circulation flow along the flow path of the ex-vessel core catcher cooling system of a 1400 MWe PWR is described. The scaling criteria for reproducing in the scaled-down test facility the same thermal-hydraulic characteristics of the natural circulation flow as in the prototype core catcher cooling system are derived, and the resulting natural circulation flow characteristics of the prototype and the scaled-down facility are analyzed and compared. The purpose of this study is to apply the similarity law to the prototype EU-APR1400 core catcher cooling system and to the model test facility of this prototype system, and to derive a relationship between the heating channel characteristics and the downcomer piping characteristics so as to determine the downcomer pipe size and the orifice size of the model test facility. As the geometry and the heating-wall heat flux of the heating channel of the model test facility will be the same as those of the prototype core catcher cooling system, except that the width of the heating channel is reduced, the axial distribution of the coolant quality (or void fraction) is expected to be similar between the prototype and the model facility. Using this fact, the downcomer piping design characteristics of the model facility can be determined from the relationship derived from the similarity law.

  18. Enzyme-Gelatin Electrochemical Biosensors: Scaling Down

    Directory of Open Access Journals (Sweden)

    Hendrik A. Heering

    2012-03-01

    In this article we investigate the possibility of scaling down enzyme-gelatin modified electrodes by spin coating the enzyme-gelatin layer. Special attention is given to the electrochemical behavior of the selected enzymes inside the gelatin matrix. A glassy carbon electrode was used as a substrate to immobilize, in the first instance, horse heart cytochrome c (HHC) in a gelatin matrix. Both a drop-dried and a spin-coated layer were prepared. On scaling down, a transition from diffusion-controlled reactions towards adsorption-controlled reactions is observed. Compared to a drop-dried electrode, a spin-coated electrode showed more stable electrochemical behavior. In addition to HHC, we also incorporated catalase in a spin-coated gelatin matrix immobilized on a glassy carbon electrode. By spin coating, highly uniform sub-micrometer layers of biocompatible matrices can be constructed. A full electrochemical study and characterization of the modified surfaces has been carried out. It was clear that in the case of catalase, glutaraldehyde addition was needed to prevent leaking of the catalase from the gelatin matrix.

  19. Atypical biological motion kinematics are represented by complementary lower-level and top-down processes during imitation learning.

    Science.gov (United States)

    Hayes, Spencer J; Dutoy, Chris A; Elliott, Digby; Gowen, Emma; Bennett, Simon J

    2016-01-01

    Learning a novel movement requires a new set of kinematics to be represented by the sensorimotor system. This is often accomplished through imitation learning, where lower-level sensorimotor processes are suggested to represent the biological motion kinematics associated with an observed movement. Top-down factors have the potential to influence this process based on the social context, attention and salience, and the goal of the movement. In order to further examine the potential interaction between lower-level and top-down processes in imitation learning, the aim of this study was to systematically control their mediating effects in an imitation of biological motion protocol. In this protocol, we used non-human agent models that displayed different novel atypical biological motion kinematics, as well as a control model that displayed constant velocity. Importantly, the three models had the same movement amplitude and movement time. Also, the motion kinematics were displayed in the presence, or absence, of end-state targets. Kinematic analyses showed that atypical biological motion kinematics were imitated, and that this performance differed from the constant velocity control condition. Although the imitation of atypical biological motion kinematics was not modulated by the end-state targets, movement time was more accurate in the absence, compared to the presence, of an end-state target. The fact that end-state targets modulated movement time accuracy, but not biological motion kinematics, indicates that imitation learning involves top-down attentional and lower-level sensorimotor systems, which operate as complementary processes mediated by the environmental context. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Representative Sinusoids for Hepatic Four-Scale Pharmacokinetics Simulations.

    Directory of Open Access Journals (Sweden)

    Lars Ole Schwen

    Full Text Available The mammalian liver plays a key role for metabolism and detoxification of xenobiotics in the body. The corresponding biochemical processes are typically subject to spatial variations at different length scales. Zonal enzyme expression along sinusoids leads to zonated metabolization already in the healthy state. Pathological states of the liver may involve liver cells affected in a zonated manner or heterogeneously across the whole organ. This spatial heterogeneity, however, cannot be described by most computational models which usually consider the liver as a homogeneous, well-stirred organ. The goal of this article is to present a methodology to extend whole-body pharmacokinetics models by a detailed liver model, combining different modeling approaches from the literature. This approach results in an integrated four-scale model, from single cells via sinusoids and the organ to the whole organism, capable of mechanistically representing metabolization inhomogeneity in livers at different spatial scales. Moreover, the model shows circulatory mixing effects due to a delayed recirculation through the surrounding organism. To show that this approach is generally applicable for different physiological processes, we show three applications as proofs of concept, covering a range of species, compounds, and diseased states: clearance of midazolam in steatotic human livers, clearance of caffeine in mouse livers regenerating from necrosis, and a parameter study on the impact of different cell entities on insulin uptake in mouse livers. The examples illustrate how variations only discernible at the local scale influence substance distribution in the plasma at the whole-body level. In particular, our results show that simultaneously considering variations at all relevant spatial scales may be necessary to understand their impact on observations at the organism scale.

  1. The impact of pH inhomogeneities on CHO cell physiology and fed-batch process performance - two-compartment scale-down modelling and intracellular pH excursion.

    Science.gov (United States)

    Brunner, Matthias; Braun, Philipp; Doppler, Philipp; Posch, Christoph; Behrens, Dirk; Herwig, Christoph; Fricke, Jens

    2017-07-01

    Due to high mixing times and base addition from the top of the vessel, pH inhomogeneities are most likely to occur during large-scale mammalian processes. The goal of this study was to set up a scale-down model of a 10-12 m³ stirred tank bioreactor and to investigate the effect of pH perturbations on CHO cell physiology and process performance. Short-term changes in extracellular pH are hypothesized to affect intracellular pH and thus cell physiology. Therefore, batch fermentations, including pH shifts to 9.0 and 7.8, in regular one-compartment systems were conducted. The short-term adaptation of the cells' intracellular pH showed an immediate increase in response to elevated extracellular pH. On this basis, a two-compartment system was established which is capable of simulating defined pH inhomogeneities. In contrast to state-of-the-art literature, the scale-down model includes parameters (e.g., the volume of the inhomogeneous zone) as they might occur during large-scale processes. pH inhomogeneity studies in the two-compartment system were performed with simulation of temporary pH zones of pH 9.0. The specific growth rate, especially during the exponential growth phase, was strongly affected, resulting in a decreased maximum viable cell density and final product titer. The gathered results indicate that even short-term exposure of cells to elevated pH values during large-scale processes can affect cell physiology and overall process performance. In particular, it could be shown for the first time that pH perturbations, which might occur during the early process phase, have to be considered in scale-down models of mammalian processes. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
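
    The two-compartment idea can be illustrated with a minimal tracer-mixing model: a small zone, standing in for the high-pH region near the base addition point, exchanges flow with the well-mixed main vessel. All volumes, flows, and the treatment of pH as a conserved tracer are simplifying assumptions for illustration, not the authors' actual set-up:

```python
def two_compartment_mixing(V_main, V_zone, Q, c_main0, c_zone0, t_end, dt=0.1):
    """Euler integration of tracer exchange between a large well-mixed
    vessel (volume V_main) and a small inhomogeneous zone (V_zone),
    coupled by an exchange flow Q. Treating pH as a conserved tracer
    is a deliberate simplification (pH is logarithmic in reality)."""
    n = int(t_end / dt)
    c_main, c_zone = c_main0, c_zone0
    traj = []
    for i in range(n):
        # exchange flow carries tracer in both directions
        dc_main = Q * (c_zone - c_main) / V_main
        dc_zone = Q * (c_main - c_zone) / V_zone
        c_main += dc_main * dt
        c_zone += dc_zone * dt
        traj.append((i * dt, c_main, c_zone))
    return traj

# illustrative values: 10 m3 main vessel, 0.5 m3 high-pH zone,
# 0.2 m3/min exchange flow, 60 min simulated
traj = two_compartment_mixing(10.0, 0.5, 0.2, 7.0, 9.0, t_end=60.0)
```

The zone time constant V_zone/Q (here 2.5 min) sets how long cells circulating through the zone are exposed to the elevated value before it mixes out.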

  2. Physically representative atomistic modeling of atomic-scale friction

    Science.gov (United States)

    Dong, Yalin

    Nanotribology is a research field that studies the friction, adhesion, wear and lubrication occurring between two sliding interfaces at the nanoscale. This study is motivated by the demand for miniaturized mechanical components in Micro Electro Mechanical Systems (MEMS), improved durability in magnetic storage systems, and other industrial applications. Overcoming tribological failure and finding ways to control friction at small scales have become keys to commercializing MEMS with sliding components as well as to stimulating the technological innovation associated with the development of MEMS. In addition to the industrial applications, such research is also scientifically fascinating because it opens a door to understanding macroscopic friction from the atomic level up, and therefore serves as a bridge between science and engineering. This thesis focuses on solid/solid atomic friction and its associated energy dissipation through theoretical analysis, atomistic simulation, transition state theory, and close collaboration with experimentalists. Reduced-order models have many advantages owing to their simplicity and capacity for simulating long-time events. We apply Prandtl-Tomlinson models and their extensions to interpret dry atomic-scale friction. We begin with the fundamental equations and build on them step by step, from the simple quasistatic one-spring, one-mass model for predicting transitions between friction regimes to the two-dimensional and multi-atom models for describing the effect of contact area. Theoretical analysis, numerical implementation, and predicted physical phenomena are all discussed. In the process, we demonstrate the significant potential for this approach to yield new fundamental understanding of atomic-scale friction. Atomistic modeling can never be overemphasized in the investigation of atomic friction, in which each single atom could play a significant role, but is hard to capture experimentally. In atomic friction, the

  3. Investigation of representing hysteresis in macroscopic models of two-phase flow in porous media using intermediate scale experimental data

    Science.gov (United States)

    Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; Gonzalez-Nicolas, Ana; Illangasekare, Tissa

    2017-01-01

    Incorporating hysteresis into models is important to accurately capture two-phase flow behavior when porous media systems undergo cycles of drainage and imbibition, as in the injection and post-injection redistribution of CO2 during geological CO2 storage (GCS). In the traditional model of two-phase flow, existing constitutive models that parameterize the hysteresis associated with these processes are generally based on empirical relationships. This manuscript presents the development and testing of mathematical hysteretic capillary pressure-saturation-relative permeability models with the objective of more accurately representing the redistribution of the fluids after injection. The constitutive models are developed by relating macroscopic variables to the basic physics of two-phase capillary displacements at the pore scale and to void space distribution properties. The modeling approach, with the developed constitutive models with and without hysteresis as input, is tested against intermediate-scale flow cell experiments to assess the ability of the models to represent movement and capillary trapping of immiscible fluids under macroscopically homogeneous and heterogeneous conditions. The hysteretic two-phase flow model predicted the overall plume migration and distribution during and after injection reasonably well and represented the post-injection behavior of the plume more accurately than the nonhysteretic models. Based on the results of this study, neglecting hysteresis in the constitutive models of the traditional two-phase flow theory can seriously overpredict or underpredict the injected fluid distribution during post-injection under both homogeneous and heterogeneous conditions, depending on the selected value of the residual saturation in the nonhysteretic models.

  4. Prediction of Hydraulic Performance of a Scaled-Down Model of SMART Reactor Coolant Pump

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Sun Guk; Park, Jin Seok; Yu, Je Yong; Lee, Won Jae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-08-15

    An analysis was conducted to predict the hydraulic performance of the reactor coolant pump (RCP) of SMART at off-design as well as design points. In order to reduce the analysis time, a single passage containing an impeller and a diffuser was considered as the computational domain. A stage scheme was used to perform a circumferential averaging of the flux on the impeller-diffuser interface. The pressure difference between the inlet and outlet of the pump was determined and was used to compute the head, efficiency, and brake horsepower (BHP) of a scaled-down model under conditions of steady-state incompressible flow. The predicted hydraulic performance curves of the RCP were similar to the typical characteristic curves of a conventional mixed-flow pump. The complex internal fluid flow of the pump, including internal recirculation loss due to reverse flow, was observed at a low flow rate.
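
    The head, hydraulic power, and efficiency relations used above follow directly from the computed pressure rise; a minimal sketch with illustrative values (not SMART RCP design data), where efficiency is computed from an assumed shaft power:

```python
RHO = 998.0  # water density, kg/m^3 (illustrative)
G = 9.81     # gravitational acceleration, m/s^2

def pump_performance(delta_p, Q, shaft_power):
    """Head [m], hydraulic power [W] and efficiency [-] from the pump
    pressure rise delta_p [Pa], volumetric flow Q [m^3/s] and shaft
    (brake) power [W]."""
    head = delta_p / (RHO * G)    # H = dp / (rho * g)
    p_hyd = Q * delta_p           # hydraulic (water) power
    eff = p_hyd / shaft_power     # hydraulic efficiency
    return head, p_hyd, eff

# illustrative operating point: 2 bar rise, 0.1 m3/s, 25 kW shaft power
head, p_hyd, eff = pump_performance(delta_p=2.0e5, Q=0.1, shaft_power=2.5e4)
```

Sweeping `Q` at fixed speed with such relations is how the characteristic head and efficiency curves are assembled from a set of CFD operating points.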

  5. How Are Feedbacks Represented in Land Models?

    Directory of Open Access Journals (Sweden)

    Yang Chen

    2016-09-01

    Full Text Available Land systems are characterised by many feedbacks that can result in complex system behaviour. We defined feedbacks as the two-way influences between the land use system and a related system (e.g., climate, soils and markets), both of which are encompassed by the land system. Land models that include feedbacks thus probably more accurately mimic how land systems respond to, e.g., policy or climate change. However, representing feedbacks in land models is a challenge. We reviewed articles incorporating feedbacks into land models and analysed each with predefined indicators. We found that (1) most modelled feedbacks couple land use systems with transport, soil and market systems, while only a few include feedbacks between land use and social systems or climate systems; (2) equation-based land use models that follow a top-down approach prevail; and (3) feedbacks’ effects on system behaviour remain relatively unexplored. We recommend that land system modellers (1) consider feedbacks between land use systems and social systems; (2) adopt bottom-up approaches suited to incorporating spatial heterogeneity and better representing land use decision-making; and (3) pay more attention to nonlinear system behaviour and its implications for land system management and policy.

  6. A low-jitter RF PLL frequency synthesizer with high-speed mixed-signal down-scaling circuits

    International Nuclear Information System (INIS)

    Tang Lu; Wang Zhigong; Xue Hong; He Xiaohu; Xu Yong; Sun Ling

    2010-01-01

    A low-jitter RF phase locked loop (PLL) frequency synthesizer with high-speed mixed-signal down-scaling circuits is proposed. Several techniques are proposed to reduce the design complexity and improve the performance of the mixed-signal down-scaling circuit in the PLL. An improved D-latch is proposed to increase the speed and the driving capability of the DMP in the down-scaling circuit. Through integrating the D-latch with 'OR' logic for dual-modulus operation, the delays associated with both the 'OR' and D-flip-flop (DFF) operations are reduced, and the complexity of the circuit is also decreased. The programmable frequency divider of the down-scaling circuit is realized in a new method based on deep submicron CMOS technology standard cells and a more accurate wire-load model. The charge pump in the PLL is also realized with a novel architecture to improve the current matching characteristic so as to reduce the jitter of the system. The proposed RF PLL frequency synthesizer is realized with a TSMC 0.18-μm CMOS process. The measured phase noise of the PLL frequency synthesizer output at 100 kHz offset from the center frequency is only -101.52 dBc/Hz. The circuit exhibits a low RMS jitter of 3.3 ps. The power consumption of the PLL frequency synthesizer is also as low as 36 mW at a 1.8 V power supply. (semiconductor integrated circuits)
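
    The programmable divider with a dual-modulus prescaler described here is typically a pulse-swallow topology. Assuming that standard architecture, which the abstract does not spell out (a P/(P+1) dual-modulus prescaler with program counter M and swallow counter A), the overall division ratio follows directly:

```python
def pulse_swallow_ratio(P, M, A):
    """Overall division ratio of a pulse-swallow frequency divider with
    a dual-modulus P/(P+1) prescaler, program counter M and swallow
    counter A (A < M): the prescaler divides by P+1 for A cycles and
    by P for the remaining M - A cycles."""
    assert 0 <= A < M
    return A * (P + 1) + (M - A) * P  # simplifies to M*P + A

# e.g. a divide-by-8/9 prescaler with M = 100, A = 37
N = pulse_swallow_ratio(8, 100, 37)  # -> 837
```

Stepping A from 0 to M-1 gives contiguous integer ratios from M*P upward, which is what makes the channel spacing of the synthesizer programmable in steps of the reference frequency.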

  7. Stored energy analysis in the scaled-down test facilities

    International Nuclear Information System (INIS)

    Deng, Chengcheng; Chang, Huajian; Qin, Benke; Wu, Qiao

    2016-01-01

    Highlights: • Three methods are developed to evaluate stored energy in the scaled-down test facilities. • The mechanism behind stored energy distortion in the test facilities is revealed. • The application of stored energy analysis is demonstrated for the ACME facility of China. - Abstract: In the scaled-down test facilities that simulate the accident transient process of the prototype nuclear power plant, the stored energy release in the metal structures has an important influence on the accuracy and effectiveness of the experimental data. Three methods of stored energy analysis are developed, and the mechanism behind stored energy distortion in the test facilities is revealed. Moreover, the application of stored energy analysis is demonstrated for the ACME test facility newly built in China. The results show that the similarity requirements of three methods analyzing the stored energy release decrease gradually. The physical mechanism of stored energy release process can be characterized by the dimensionless numbers including Stanton number, Fourier number and Biot number. Under the premise of satisfying the overall similarity of natural circulation, the stored energy release process in the scale-down test facilities cannot maintain exact similarity. The results of the application of stored energy analysis illustrate that both the transient release process and integral total stored energy of the reactor pressure vessel wall of CAP1400 power plant can be well reproduced in the ACME test facility.
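
    The three dimensionless groups named above can be computed directly from wall and flow properties; the values below are generic illustrative steel-wall numbers, not ACME or CAP1400 design data:

```python
def stanton(h, rho, cp, u):
    """Stanton number: wall convective heat transfer relative to the
    thermal capacity of the coolant flow."""
    return h / (rho * cp * u)

def fourier(alpha, t, L):
    """Fourier number: dimensionless time for conduction through a wall
    of characteristic thickness L, with thermal diffusivity alpha."""
    return alpha * t / L**2

def biot(h, L, k):
    """Biot number: surface convection relative to internal conduction;
    Bi > ~0.1 means internal temperature gradients matter."""
    return h * L / k

# illustrative values for a stainless-steel wall
h, k, rho, cp = 1000.0, 16.0, 7900.0, 500.0  # W/m2K, W/mK, kg/m3, J/kgK
L, u, t = 0.02, 1.0, 100.0                   # m, m/s, s
alpha = k / (rho * cp)                       # thermal diffusivity, m2/s
```

Because a scaled-down facility generally cannot match all three groups at once under a fixed natural-circulation similarity, some distortion of the stored energy release is unavoidable, which is the point the abstract makes.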

  8. Bioclim deliverable D8a: development of the rule-based down-scaling methodology for BIOCLIM Work-package 3

    International Nuclear Information System (INIS)

    2003-01-01

    The BIOCLIM project on modelling sequential Biosphere systems under Climate change for radioactive waste disposal is part of the EURATOM fifth European framework programme. The project was launched in October 2000 for a three-year period. The project aims at providing a scientific basis and practical methodology for assessing the possible long-term impacts on the safety of radioactive waste repositories in deep formations due to climate and environmental change. Five work packages (WP) have been identified to fulfill the project objectives. One of the tasks of BIOCLIM WP3 was to develop a rule-based approach for down-scaling from the MoBidiC model of intermediate complexity in order to provide consistent estimates of monthly temperature and precipitation for the specific regions of interest to BIOCLIM (Central Spain, Central England and Northeast France, together with Germany and the Czech Republic). A statistical down-scaling methodology has been developed by Philippe Marbaix of CEA/LSCE for use with the second climate model of intermediate complexity used in BIOCLIM - CLIMBER-GREMLINS. The rule-based methodology assigns climate states or classes to a point on the time continuum of a region according to a combination of simple threshold values which can be determined from the coarse-scale climate model. Once climate states or classes have been defined, monthly temperature and precipitation climatologies are constructed using analogue stations identified from a database of present-day climate observations. The most appropriate climate classification for BIOCLIM purposes is the Koeppen/Trewartha scheme. This scheme has the advantage of being empirical, but only requires monthly averages of temperature and precipitation as input variables. Section 2 of this deliverable (D8a) outlines how each of the eight methodological steps has been undertaken for each of the three main BIOCLIM study regions (Central England, Northeast France and Central Spain) using Mo
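
    A rule-based classification of this kind reduces to threshold tests on monthly climatologies. The sketch below uses invented, heavily simplified thresholds purely to illustrate the mechanism; it is not the Koeppen/Trewartha rule set used in BIOCLIM:

```python
def climate_class(monthly_T, monthly_P):
    """Toy rule-based classification from 12 monthly mean temperatures
    [deg C] and 12 monthly precipitation totals [mm]. The thresholds
    are simplified illustrations of a Koeppen/Trewartha-style scheme,
    not the BIOCLIM rules."""
    t_max, t_min = max(monthly_T), min(monthly_T)
    p_total = sum(monthly_P)
    if t_max < 10:          # no warm month
        return "polar"
    if p_total < 250:       # crude aridity cutoff
        return "dry"
    if t_min > 18:          # no cool month
        return "tropical"
    if t_min > -3:          # mild winters
        return "temperate"
    return "continental"

# a mid-latitude profile with mild winters
label = climate_class([2, 3, 5, 8, 12, 16, 20, 19, 15, 10, 6, 3], [60] * 12)
```

Once each model grid point and time slice is assigned a class this way, the analogue-station step attaches observed monthly climatologies to it, as described above.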

  9. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial-scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial-scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...

  10. Similitude and scaling of large structural elements: Case study

    Directory of Open Access Journals (Sweden)

    M. Shehadeh

    2015-06-01

    Full Text Available Scaled-down models are widely used for experimental investigations of large structures due to the limited capacities of testing facilities and the expense of experimentation. The modeling accuracy depends upon the model material properties, fabrication accuracy and loading techniques. In the present work the Buckingham π theorem is used to develop the relations (i.e., geometry, loading and properties) between the model and a large structural element such as those present in huge existing petroleum oil drilling rigs. The model is to be designed, loaded and treated according to a set of similitude requirements that relate the model to the large structural element. Three independent scale factors, which represent the three fundamental dimensions of mass, length and time, need to be selected for designing the scaled-down model. Numerical prediction of the stress distribution within the model and its elastic deformation under steady loading is to be made. The results are compared with those obtained from numerical computations on the full-scale structure. The effect of the scaled-down model's size and material on the accuracy of the modeling technique is thoroughly examined.
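
    Once the three independent scale factors for mass, length and time are fixed, every derived scale factor follows from dimensional analysis: a quantity of dimensions M^a L^b T^c scales by the corresponding product of powers. A minimal sketch of this bookkeeping, worked for an illustrative 1:10 same-material model:

```python
def derived_scale(mass_s, length_s, time_s, a, b, c):
    """Scale factor for a quantity of dimensions M^a L^b T^c, given the
    three independent scale factors for mass, length and time."""
    return (mass_s ** a) * (length_s ** b) * (time_s ** c)

lam = 0.1             # 1:10 geometric (length) scale
mass_s = lam ** 3     # same material => density scale 1 => mass ~ length^3
time_s = lam          # chosen so that the stress scale comes out as 1

force_s = derived_scale(mass_s, lam, time_s, 1, 1, -2)    # force: M L T^-2
stress_s = derived_scale(mass_s, lam, time_s, 1, -1, -2)  # stress: M L^-1 T^-2
```

With these choices the model experiences the same stresses as the prototype while applied loads shrink by the square of the length scale, which is one common similitude strategy for same-material structural models.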

  11. Comparison of two down-scaling methods for climate study and climate change on the mountain areas in France

    International Nuclear Information System (INIS)

    Piazza, Marie; Page, Christian; Sanchez-Gomez, Emilia; Terray, Laurent; Deque, Michel

    2013-01-01

    Mountain regions are highly vulnerable to climate change and are likely to be among the areas most impacted by global warming. But climate projections for the end of the 21st century are developed with general circulation models of climate, which do not have sufficient horizontal resolution to accurately evaluate the impacts of warming on these regions. Several techniques are therefore used to perform a spatial down-scaling (to the order of 10 km). There are two categories of down-scaling methods: dynamical methods, which require significant computational resources for the production of high-resolution regional climate simulations, and statistical methods, which require few resources but need an observation dataset over a long period and of good quality. In this study, climate projections of the global atmospheric model ARPEGE over France are down-scaled with a dynamical method, performed with the ALADIN-Climate regional model, and a statistical method, performed with the DSClim software developed at CERFACS. The two down-scaling methods are presented and their results on the climate of the French mountains are evaluated for the current climate. Both methods give similar results for average snowfall. However, extreme events of total precipitation (droughts, intense precipitation events) are largely underestimated by the statistical method. Then, the results of both methods are compared for two future climate projections, according to the A1B greenhouse gas emissions scenario of the IPCC. The two methods agree on fewer frost days, a significant decrease in the amounts of solid precipitation and an average increase in the percentage of dry days of more than 10%. The results obtained for Corsica are more heterogeneous, but they are questionable because the reduced spatial domain is probably not very relevant for statistical sampling. (authors)

  12. Bioclim Deliverable D6a: regional climatic characteristics for the European sites at specific times: the dynamical down-scaling

    International Nuclear Information System (INIS)

    2003-01-01

    The overall aim of BIOCLIM is to assess the possible long-term impacts due to climate change on the safety of radioactive waste repositories in deep formations. This aim is addressed through the following specific objectives: - Development of practical and innovative strategies for representing sequential climatic changes to the geosphere-biosphere system for existing sites over central Europe, addressing the timescale of one million years, which is relevant to the geological disposal of radioactive waste. - Exploration and evaluation of the potential effects of climate change on the nature of the biosphere systems used to assess the environmental impact. - Dissemination of information on the new methodologies and the results obtained from the project among the international waste management community for use in performance assessments of potential or planned radioactive waste repositories. The BIOCLIM project is designed to advance the state-of-the-art of biosphere modelling for use in Performance Assessments. Therefore, two strategies are developed for representing sequential climatic changes to geosphere-biosphere systems. The hierarchical strategy successively uses a hierarchy of climate models. These models vary from simple 2-D models, which simulate interactions between a few aspects of the Earth system at a rough surface resolution, through General Circulation Model (GCM) and vegetation model, which simulate in great detail the dynamics and physics of the atmosphere, ocean and biosphere, to regional models, which focus on the European regions and sites of interest. Moreover, rule-based and statistical down-scaling procedures are also considered. Comparisons are provided in terms of climate and vegetation cover at the selected times and for the study regions. 
The integrated strategy consists of using integrated climate models, representing all the physical mechanisms important for long-term continuous climate variations, to simulate the climate evolution over

  13. Stored energy analysis in scale-down test facility

    International Nuclear Information System (INIS)

    Deng Chengcheng; Qin Benke; Fang Fangfang; Chang Huajian; Ye Zishen

    2013-01-01

    In the integral test facilities that simulate the accident transient process of the prototype nuclear power plant, the stored energy in the metal components has a direct influence on the simulation range and the test results of the facilities. Based on heat transfer theory, three methods for analyzing the stored energy were developed, and a thorough study of the stored energy problem in scaled-down test facilities was carried out. The lumped parameter method and the power integration method were applied to analyze the transient process of energy release and to evaluate the average total energy stored in the reactor pressure vessel of the ACME (advanced core-cooling mechanism experiment) facility, which is now being built in China. The results show that the similarity requirements of these three methods for analyzing the stored energy in the test facilities decrease progressively. Under the condition of satisfying the integral similarity of natural circulation, the stored energy release process in scaled-down test facilities cannot maintain exact similarity. The stored energy in the reactor pressure vessel wall of ACME, which is released quickly during the early stage of rapid depressurization of the system, will not have a major impact on the long-term behavior of the system. And the scaling distortion of the integral average total stored energy is acceptable. (authors)

  14. Computational fluid dynamics analysis of the initial stages of a VHTR air-ingress accident using a scaled-down model

    Energy Technology Data Exchange (ETDEWEB)

    Ham, Tae K., E-mail: taekyu8@gmail.com [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Arcilesi, David J., E-mail: arcilesi.1@osu.edu [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Kim, In H., E-mail: ihkim0730@gmail.com [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Sun, Xiaodong, E-mail: sun.200@osu.edu [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Christensen, Richard N., E-mail: rchristensen@uidaho.edu [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Oh, Chang H. [Idaho National Laboratory, Idaho Falls, ID 83402 (United States); Kim, Eung S., E-mail: kes7741@snu.ac.kr [Idaho National Laboratory, Idaho Falls, ID 83402 (United States)

    2016-04-15

    Highlights: • Uncertainty quantification and benchmark study are performed to validate an ANSYS FLUENT computer model for a depressurization process in a high-temperature gas-cooled reactor. • An ANSYS FLUENT computer model of a 1/8th scaled-down geometry of a VHTR hot exit plenum is presented, which is similar to the experimental test facility that has been constructed at The Ohio State University. • Using the computer model of the scaled-down geometry, the effects of the depressurization process and flow oscillations on the subsequent density-driven stratified flow phenomenology are examined computationally. • The effects of the scaled-down hot exit plenum internal structure temperature on the density-driven stratified flow phenomenology are investigated numerically. - Abstract: An air-ingress accident is considered to be one of the design basis accidents of a very high-temperature gas-cooled reactor (VHTR). The air-ingress accident is initiated, in its worst-case scenario, by a complete break of the hot duct in what is referred to as a double-ended guillotine break. This leads to an initial loss of the primary helium coolant via depressurization. Following the depressurization process, the air–helium mixture in the reactor cavity could enter the reactor core via the hot duct and hot exit plenum. In the event that air ingresses into the reactor vessel, the high-temperature graphite structures in the reactor core and hot plenum will chemically react with the air, which could lead to damage of in-core graphite structures and fuel, release of carbon monoxide and carbon dioxide, core heat up, failure of the structural integrity of the system, and eventually the release of radionuclides to the environment. Studies in the available literature focus on the phenomena of the air ingress accident that occur after the termination of the depressurization, such as density-driven stratified flow, molecular diffusion, and natural circulation. However, a recent study

  15. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    Science.gov (United States)

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher-level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate if strong cross-scale relationships predominate. Here, we propose an 'across-scale approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.
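
    The bottom-up, cellular-automaton style of modelling mentioned above can be illustrated with a deliberately minimal toy model; the rules and probabilities are invented for illustration and are not those of the cited grassland study:

```python
import random

def step(grid, p_colonize=0.3, p_die=0.05, rng=random.Random(0)):
    """One synchronous update of a toy 1-D cellular automaton on a ring:
    an empty cell (0) is colonized with probability p_colonize per
    vegetated neighbor (1); a vegetated cell dies with probability
    p_die. Population-level spread emerges from these local rules."""
    n = len(grid)
    new = grid[:]
    for i in range(n):
        left, right = grid[(i - 1) % n], grid[(i + 1) % n]
        if grid[i] == 0:
            if rng.random() < p_colonize * (left + right):
                new[i] = 1
        elif rng.random() < p_die:
            new[i] = 0
    return new

grid = [0] * 50
grid[25] = 1  # a single vegetated patch
for _ in range(100):
    grid = step(grid)
```

Running the loop shows the higher-level pattern (patch spread and patchiness) arising from individual-level rules, which is exactly the focal-level/resulting-property distinction the abstract describes.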

  16. An Improved Scale-Adaptive Simulation Model for Massively Separated Flows

    Directory of Open Access Journals (Sweden)

    Yue Liu

    2018-01-01

    Full Text Available A new hybrid modelling method termed improved scale-adaptive simulation (ISAS) is proposed by introducing the von Karman operator into the dissipation term of the turbulence scale equation; its derivation and constant calibration are presented, and the typical circular cylinder flow at Re = 3900 is selected for validation. As expected, the proposed ISAS approach with the concept of scale adaptivity appears more efficient than the original SAS method in obtaining a convergent resolution, and meanwhile comparable with DES in visually capturing the fine-scale unsteadiness. Furthermore, the grid sensitivity issue of DES is encouragingly remedied, benefiting from the locally adjusted limiter. The ISAS simulation turns out to attractively represent the development of the shear layers and the flow profiles of the recirculation region, and thus the focused statistical quantities, such as the recirculation length and drag coefficient, are closer to the available measurements than the DES and SAS outputs. In general, the new modelling method, combining the features of the DES and SAS concepts, is capable of simulating turbulent structures down to the grid limit in a simple and effective way, which is practically valuable for engineering flows.

  17. Multivariate Spatio-Temporal Clustering: A Framework for Integrating Disparate Data to Understand Network Representativeness and Scaling Up Sparse Ecosystem Measurements

    Science.gov (United States)

    Hoffman, F. M.; Kumar, J.; Maddalena, D. M.; Langford, Z.; Hargrove, W. W.

    2014-12-01

    Disparate in situ and remote sensing time series data are being collected to understand the structure and function of ecosystems and how they may be affected by climate change. However, resource and logistical constraints limit the frequency and extent of observations, particularly in the harsh environments of the Arctic and the tropics, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent variability at desired scales. These regions host large areas of potentially vulnerable ecosystems that are poorly represented in Earth system models (ESMs), motivating two new field campaigns, called Next Generation Ecosystem Experiments (NGEE) for the Arctic and Tropics, funded by the U.S. Department of Energy. Multivariate Spatio-Temporal Clustering (MSTC) provides a quantitative methodology for stratifying sampling domains, informing site selection, and determining the representativeness of measurement sites and networks. We applied MSTC to down-scaled general circulation model results and data for the State of Alaska at a 4 km² resolution to define maps of ecoregions for the present (2000-2009) and future (2090-2099), showing how combinations of 37 bioclimatic characteristics are distributed and how they may shift in the future. Optimal representative sampling locations were identified on present and future ecoregion maps, and representativeness maps for candidate sampling locations were produced. We also applied MSTC to remotely sensed LiDAR measurements and multi-spectral imagery from the WorldView-2 satellite at a resolution of about 5 m² within the Barrow Environmental Observatory (BEO) in Alaska. At this resolution, polygonal ground features—such as centers, edges, rims, and troughs—can be distinguished. Using these remote sensing data, we up-scaled vegetation distribution data collected on these polygonal ground features to a large area of the BEO to provide distributions of plant functional types that can
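
    MSTC as described combines many bioclimatic variables; the core idea of stratifying a domain into ecoregions and then picking the most representative site per region can be sketched with a plain k-means clustering. This is a deliberate simplification — the actual MSTC method and its 37 variables are not reproduced here, and the function names are illustrative.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means; returns (centroids, labels). Each point is a tuple of
    standardized 'bioclimatic' variables; each cluster is an 'ecoregion'."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist2(p, centroids[c]))
        # update step: centroid = mean of members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centroids, labels

def most_representative(points, centroids, labels):
    """For each cluster, the member closest to the centroid -- an analogue of
    choosing the sampling site that best represents its ecoregion."""
    best = {}
    for p, l in zip(points, labels):
        if l not in best or dist2(p, centroids[l]) < dist2(best[l], centroids[l]):
            best[l] = p
    return best
```

    A representativeness map is then just the distance from every grid cell to its nearest chosen site in this feature space.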

  18. Representative elements: A step to large-scale fracture system simulation

    International Nuclear Information System (INIS)

    Clemo, T.M.

    1987-01-01

    Large-scale simulation of flow and transport in fractured media requires the development of a technique to represent the effect of a large number of fractures. Representative elements are used as a tool to model a subset of a fracture system as a single distributed entity. Representative elements are part of a modeling concept called dual permeability. Dual permeability modeling combines discrete fracture simulation of the most important fractures with distributed modeling of the less important fractures of the fracture system. This study investigates the use of stochastic analysis to determine the properties of representative elements. Given an assumption of fully developed laminar flow, the net fracture conductivities, and hence flow velocities, can be determined from descriptive statistics of fracture spacing, orientation, aperture, and extent. The distribution of physical characteristics about their means leads to a distribution of the associated conductivities. The variance of hydraulic conductivity induces dispersion into the transport process. Simple fracture systems are treated to demonstrate the usefulness of stochastic analysis. Explicit equations for the conductivity of an element are developed and the dispersion characteristics are shown. Explicit formulation of the hydraulic conductivity and transport dispersion reveals the dependence of these important characteristics on the parameters used to describe the fracture system. Understanding these dependencies will help to focus efforts to identify the characteristics of fracture systems. Simulations of stochastically generated fracture sets do not provide this explicit functional dependence on the fracture system parameters. 12 refs., 6 figs
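
    Under the laminar-flow assumption above, a per-fracture transmissivity follows the parallel-plate "cubic law", and the spread of apertures induces a distribution of element conductivities. A hedged Monte Carlo sketch (the lognormal aperture parameters and water properties are illustrative assumptions, not values from the study):

```python
import math
import random
import statistics

RHO_G_OVER_MU = 1000 * 9.81 / 1.0e-3   # water: rho*g/mu  [1/(m*s)]

def element_conductivity(apertures_m, spacing_m):
    """Equivalent hydraulic conductivity (m/s) of a set of parallel fractures
    crossing a block, from the parallel-plate ('cubic law') transmissivity
    T_f = rho*g*b^3 / (12*mu), summed and divided by total thickness."""
    total_T = sum(RHO_G_OVER_MU * b ** 3 / 12 for b in apertures_m)
    return total_T / (len(apertures_m) * spacing_m)

def monte_carlo(n_real=2000, n_frac=10, spacing=0.5,
                log_mean=math.log(1e-4), log_sd=0.5, seed=1):
    """Sample lognormal apertures and return (mean, stdev) of element K."""
    rng = random.Random(seed)
    ks = []
    for _ in range(n_real):
        b = [rng.lognormvariate(log_mean, log_sd) for _ in range(n_frac)]
        ks.append(element_conductivity(b, spacing))
    return statistics.mean(ks), statistics.stdev(ks)
```

    The standard deviation returned here is exactly the conductivity variance that, per the abstract, feeds dispersion into the transport process.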

  19. Benchmarking energy scenarios for China: perspectives from top-down, economic and bottom-up, technical modelling

    DEFF Research Database (Denmark)

    This study uses a soft-linking methodology to harmonise two complex global top-down and bottom-up models with a regional China focus. The baseline follows the GDP and demographic trends of the Shared Socio-economic Pathways (SSP2) scenario, down-scaled for China, while the carbon tax scenario fol......-specific modelling results further. These new sub-regional China features can now be used for a more detailed analysis of China's regional developments in a global context....

  20. Analysis and design of Type B package tie-down systems

    International Nuclear Information System (INIS)

    Phalippou, C.; Tombini, C.; Tanguy, L.

    1993-01-01

    In order to analyse the incidence of tie-down conditions as a cause of road accidents and to advise carriers on methods of calculating the risk, the French Atomic Energy Commission (CEA), within the framework of a research contract financed by the European Community, conducted a survey of road accidents in which Type B packages were involved. After analysing the survey results, the CEA conducted reduced-scale tests on representative models to establish design rules for tie-down systems. These rules have been the subject of various publications and have ultimately resulted in the production of a software aid for the design and monitoring of tie-down systems. This document describes the various stages of this work and the way in which the ARRIMAGE software is arranged. (J.P.N.)

  1. Estimating surface water concentrations of “down-the-drain” chemicals in China using a global model

    International Nuclear Information System (INIS)

    Whelan, M.J.; Hodges, J.E.N.; Williams, R.J.; Keller, V.D.J.; Price, O.R.; Li, M.

    2012-01-01

    Predictions of surface water exposure to “down-the-drain” chemicals are presented which employ grid-based spatially-referenced data on average monthly runoff, population density, country-specific per capita domestic water and substance use rates and sewage treatment provision. Water and chemical load are routed through the landscape using flow directions derived from digital elevation data, accounting for in-stream chemical losses using simple first order kinetics. Although the spatial and temporal resolution of the model are relatively coarse, the model still has advantages over spatially inexplicit “unit-world” approaches, which apply arbitrary dilution factors, in terms of predicting the location of exposure hotspots and the statistical distribution of concentrations. The latter can be employed in probabilistic risk assessments. Here the model was applied to predict surface water exposure to “down-the-drain” chemicals in China for different levels of sewage treatment provision. Predicted spatial patterns of concentration were consistent with observed water quality classes for China. - Highlights: ► A global-scale model of “down-the-drain” chemical concentrations is presented. ► The model was used to predict spatial patterns of exposure in China. ► Predictions were consistent with observed water quality classes. ► The model can identify hotspots and statistical distributions of concentrations. - A global-scale model was used to predict spatial patterns of “down-the-drain” chemical concentrations in China. Predictions were consistent with observed water quality classes, demonstrating the potential value of the model.
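
    The model's core arithmetic — dilution of a per-capita load into river flow with first-order in-stream loss — can be sketched as follows. The population, usage, and rate-constant numbers used for illustration are invented, not values from the study.

```python
import math

def instream_concentration(load_g_per_day, flow_m3_per_day,
                           k_per_day=0.3, travel_time_days=0.0):
    """Steady-state concentration (g/m3 == mg/L) of a 'down-the-drain'
    chemical: the emitted load diluted into river flow, with first-order
    in-stream loss  C = (L/Q) * exp(-k*t)."""
    return (load_g_per_day / flow_m3_per_day) * math.exp(-k_per_day * travel_time_days)

def downstream_profile(population, per_capita_g_day, removal_in_stp,
                       flow, k, times):
    """Concentration at increasing travel times below a discharge point,
    after partial removal in sewage treatment."""
    load = population * per_capita_g_day * (1 - removal_in_stp)
    return [instream_concentration(load, flow, k, t) for t in times]
```

    Applied cell by cell along flow directions from a digital elevation model, this is enough to locate exposure hotspots and build the statistical distribution of concentrations used in probabilistic risk assessment.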

  2. Bioprocess scale-up/down as integrative enabling technology: from fluid mechanics to systems biology and beyond.

    Science.gov (United States)

    Delvigne, Frank; Takors, Ralf; Mudde, Rob; van Gulik, Walter; Noorman, Henk

    2017-09-01

    Efficient optimization of microbial processes is a critical issue for achieving a number of sustainable development goals, considering the impact of microbial biotechnology on the agrofood, environmental, biopharmaceutical and chemical industries. Many of these applications require scale-up after proof of concept. However, the behaviour of microbial systems remains at least partially unpredictable when shifting from laboratory-scale to industrial conditions. Robust microbial systems are thus much needed in this context, as is a better understanding of the interactions between fluid mechanics and cell physiology. For that purpose, a full scale-up/down computational framework is already available. This framework links computational fluid dynamics (CFD), metabolic flux analysis and agent-based modelling (ABM) for a better understanding of cell lifelines in a heterogeneous environment. Ultimately, it can be used for the design of scale-down simulators and/or metabolically engineered cells able to cope with the environmental fluctuations typically found in large-scale bioreactors. However, the framework still needs some refinements, such as a better integration of gas-liquid flows in CFD and accounting for intrinsic biological noise in ABM. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  3. Top-down or bottom-up modelling. An application to CO2 abatement

    International Nuclear Information System (INIS)

    Laroui, F.; Van Leeuwen, M.J.

    1995-06-01

    In four articles a comparison is made of bottom-up, or engineers', models and top-down models, which comprise macro-econometric models, computable general equilibrium models, and models in the system dynamics tradition. The first article outlines the history of economic modelling. The second article describes a multi-sector macro-economic computable general equilibrium model for the Netherlands, which can be used to study the long-term effects of fiscal policy measures on economic and environmental indicators, in particular on the level of CO2 emissions. The third article describes the structure of the electricity supply industry in the UK and how it can be represented as a bottom-up sub-model within a more general E3 sectoral model of the UK economy. The last article offers a mainly methodological discussion of integrating top-down and bottom-up models that can be used to assess the impacts of CO2 abatement policies on economic activity

  4. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre-resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, together with the millimetre-resolution patient anatomy, it is possible to obtain a millimetre-resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre-resolution dielectric geometry. In this study we investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient was down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for these low-resolution dielectric geometries were used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. For both a simple phantom and the complex partial patient geometry, down-scaling using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximated the corresponding high-resolution SAR distribution (correlations of 97% and 96%, and absolute averaged differences of 6% and 14%, respectively). (author)
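
    The down-scaling techniques compared above differ in how each coarse cell summarizes its fine cells. Two of them can be sketched for a 2-D grid; this is a simplification, since real dielectric down-scaling acts on 3-D conductivity/permittivity maps and the anisotropic variant averages differently per axis.

```python
from collections import Counter

def blocks(grid, f):
    """Yield the f-by-f blocks of a square 2-D grid (size a multiple of f),
    row-major order."""
    n = len(grid)
    for i in range(0, n, f):
        for j in range(0, n, f):
            yield [grid[i + a][j + b] for a in range(f) for b in range(f)]

def winner_takes_all(grid, f):
    """Each coarse cell takes the most frequent fine-cell value
    (appropriate for tissue labels)."""
    n = len(grid) // f
    vals = [Counter(b).most_common(1)[0][0] for b in blocks(grid, f)]
    return [vals[r * n:(r + 1) * n] for r in range(n)]

def volumetric_average(grid, f):
    """Each coarse cell takes the mean of its fine-cell values
    (appropriate for dielectric properties)."""
    n = len(grid) // f
    vals = [sum(b) / len(b) for b in blocks(grid, f)]
    return [vals[r * n:(r + 1) * n] for r in range(n)]
```

    The contrast is visible on a block containing one outlier: winner-takes-all discards it entirely, while volumetric averaging smears it into the coarse value — which is why the two methods yield different low-resolution field solutions.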

  5. The COMFORT-behavior scale is useful to assess pain and distress in 0- to 3-year-old children with Down syndrome.

    Science.gov (United States)

    Valkenburg, Abraham J; Boerlage, Anneke A; Ista, Erwin; Duivenvoorden, Hugo J; Tibboel, Dick; van Dijk, Monique

    2011-09-01

    Many pediatric intensive care units use the COMFORT-Behavior scale (COMFORT-B) to assess pain in 0- to 3-year-old children. The objective of this study was to determine whether this scale is also valid for the assessment of pain in 0- to 3-year-old children with Down syndrome. These children often undergo cardiac or intestinal surgery early in life and therefore admission to a pediatric intensive care unit. Seventy-six patients with Down syndrome were included and 466 without Down syndrome. Pain was regularly assessed with the COMFORT-B scale and the pain Numeric Rating Scale (NRS). For either group, confirmatory factor analyses revealed a 1-factor model. Internal consistency between COMFORT-B items was good (Cronbach's α=0.84-0.87). Cutoff values for the COMFORT-B set at 17 or higher discriminated between pain (NRS pain of 4 or higher) and no pain (NRS pain below 4) in both groups. We concluded that the COMFORT-B scale is also valid for 0- to 3-year-old children with Down syndrome. This makes it even more useful in the pediatric intensive care unit setting, doing away with the need to apply another instrument for those children younger than 3. Copyright © 2011 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  6. Frontal impact response of a virtual low percentile six years old human thorax developed by automatic down-scaling

    Directory of Open Access Journals (Sweden)

    Špička J.

    2015-06-01

    Full Text Available Traffic accidents cause some of the most severe injuries across the whole population spectrum. The numbers of deaths and seriously injured citizens prove that traffic accidents and their consequences are still a serious problem to be solved. The paper contributes to the field of vehicle safety technology with a virtual approach. Exploitation of a previously developed scaling algorithm enables the creation of a specific anthropometric model from a validated reference model. The aim of the paper is to demonstrate the biofidelity, in frontal impact, of a low-percentile six-year-old virtual human model developed by automatic down-scaling. For this automatically generated six-year-old anthropometric model, the Kroell impact test is simulated and the results are compared to the experimental data. The chosen approach shows good correspondence of the scaled model's performance to the experimental corridors.
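
    The simplest form of such automatic down-scaling is geometric similitude: pick a stature ratio λ and scale segment lengths by λ and masses by λ³ (constant tissue density assumed). The published scaling algorithm is more sophisticated; this sketch only illustrates the principle, and the reference numbers are invented.

```python
# reference segment data (illustrative numbers, not from the paper): (length m, mass kg)
REF = {"thorax": (0.30, 17.0), "head": (0.20, 4.5)}

def scale_model(ref_segments, stature_ratio):
    """Geometric down-scaling of a reference anthropometric model:
    lengths scale with lambda, masses with lambda**3 (constant density)."""
    lam = stature_ratio
    return {name: (length * lam, mass * lam ** 3)
            for name, (length, mass) in ref_segments.items()}

if __name__ == "__main__":
    # e.g. an adult reference of 1.75 m stature scaled toward a 1.17 m child
    child = scale_model(REF, 1.17 / 1.75)
    print({k: (round(l, 3), round(m, 2)) for k, (l, m) in child.items()})
```

    Pure geometric scaling is known to misrepresent children (their proportions and tissue properties differ from a shrunken adult), which is precisely why validation against child impact corridors, as in the Kroell test above, is needed.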

  7. Statistical properties of fluctuations of time series representing appearances of words in nationwide blog data and their applications: An example of modeling fluctuation scalings of nonstationary time series.

    Science.gov (United States)

    Watanabe, Hayafumi; Sano, Yukie; Takayasu, Hideki; Takayasu, Misako

    2016-11-01

    To elucidate the nontrivial empirical statistical properties of fluctuations of a typical nonsteady time series representing the appearance of words in blogs, we investigated approximately 3×10⁹ Japanese blog articles over a period of six years and analyzed some corresponding mathematical models. First, we introduce a solvable nonsteady extension of the random diffusion model, which can be deduced by modeling the behavior of heterogeneous random bloggers. Next, we deduce theoretical expressions for both the temporal and ensemble fluctuation scalings of this model, and demonstrate that these expressions can reproduce all empirical scalings over eight orders of magnitude. Furthermore, we show that the model can reproduce other statistical properties of time series representing the appearance of words in blogs, such as the functional forms of the probability density and correlations in the total number of blogs. As an application, we quantify the abnormality of special nationwide events by measuring the fluctuation scalings of 1771 basic adjectives.
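
    Ensemble fluctuation scaling of the kind analysed here relates the standard deviation of a word's count to its mean, σ ∝ μ^α; for pure Poisson (independent-writer) noise α = 1/2, and deviations from 1/2 signal correlated or heterogeneous behaviour. A toy sketch that recovers the Poisson exponent from simulated counts — this is not the authors' nonsteady random diffusion model, just the baseline it extends:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for modest lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fluctuation_scaling_exponent(means, n_samples=3000, seed=3):
    """Fit sigma = c * mu^alpha over an ensemble of Poisson 'word-count'
    series via least squares in log-log space; for Poisson noise alpha -> 1/2."""
    rng = random.Random(seed)
    xs, ys = [], []
    for mu in means:
        counts = [poisson(mu, rng) for _ in range(n_samples)]
        m = sum(counts) / n_samples
        var = sum((c - m) ** 2 for c in counts) / (n_samples - 1)
        xs.append(math.log(m))
        ys.append(math.log(math.sqrt(var)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

    Measuring how far the fitted exponent departs from this baseline is the same kind of calculation the paper uses to quantify the abnormality of nationwide events.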

  8. Modeling industrial centrifugation of mammalian cell culture using a capillary based scale-down system.

    Science.gov (United States)

    Westoby, Matthew; Rogers, Jameson K; Haverstock, Ryan; Romero, Jonathan; Pieracci, John

    2011-05-01

    Continuous-flow centrifugation is widely utilized as the primary clarification step in the recovery of biopharmaceuticals from cell culture. However, it is a challenging operation to develop and characterize due to the lack of easy-to-use, small-scale systems that can model industrial processes. As a result, pilot-scale continuous centrifugation, which demands significant resources, is typically employed to model large-scale systems. In an effort to reduce resource requirements and create a system that is easy to construct and use, a capillary shear device, capable of producing energy dissipation rates equivalent to those present in the feed zones of industrial disk stack centrifuges, was developed and evaluated. When coupled to a bench-top batch centrifuge, the capillary device reduced centrate turbidity prediction error from 37% to 4% compared with using a bench-top centrifuge alone. Laboratory-scale parameters analogous to those routinely varied during industrial-scale continuous centrifugation were identified and evaluated for their utility in emulating disk stack centrifuge performance. The resulting relationships enable bench-scale process modeling of continuous disk stack centrifuges using an easily constructed, scalable, capillary shear device coupled to a typical bench-top centrifuge. Copyright © 2010 Wiley Periodicals, Inc.
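
    A capillary device of this kind is usually characterized by its mean energy dissipation rate, ε = ΔP·Q/(ρV), which for laminar Hagen-Poiseuille flow reduces to 32μū²/(ρd²). A sketch of matching that rate to a target value, e.g. one estimated by CFD for a centrifuge feed zone. The formulas are standard fluid mechanics, not taken from this paper; the numbers are illustrative, and real devices may operate outside the laminar regime, where the ΔP·Q/(ρV) energy balance still holds but this closed form does not.

```python
import math

def capillary_dissipation(mean_velocity, diameter, mu=1e-3, rho=1000.0):
    """Mean energy dissipation rate (W/kg) for laminar Hagen-Poiseuille flow:
    eps = dP*Q/(rho*V) = 32*mu*u^2 / (rho*d^2)."""
    return 32 * mu * mean_velocity ** 2 / (rho * diameter ** 2)

def velocity_for_target(eps_target, diameter, mu=1e-3, rho=1000.0):
    """Invert for the mean velocity that reproduces a target dissipation rate."""
    return math.sqrt(eps_target * rho * diameter ** 2 / (32 * mu))
```

    Running the cell suspension through the capillary at the matched velocity, then spinning it in the bench-top centrifuge, is what lets the small system emulate the shear-then-settle sequence of the industrial machine.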

  9. New scale-down methodology from commercial to lab scale to optimize plant-derived soft gel capsule formulations on a commercial scale.

    Science.gov (United States)

    Oishi, Sana; Kimura, Shin-Ichiro; Noguchi, Shuji; Kondo, Mio; Kondo, Yosuke; Shimokawa, Yoshiyuki; Iwao, Yasunori; Itai, Shigeru

    2018-01-15

    A new scale-down methodology from commercial rotary die scale to laboratory scale was developed to optimize a plant-derived soft gel capsule formulation and eventually manufacture superior soft gel capsules on a commercial scale, in order to reduce the time and cost for formulation development. Animal-derived and plant-derived soft gel film sheets were prepared using an applicator on a laboratory scale and their physicochemical properties, such as tensile strength, Young's modulus, and adhesive strength, were evaluated. The tensile strength of the animal-derived and plant-derived soft gel film sheets was 11.7 MPa and 4.41 MPa, respectively. The Young's modulus of the animal-derived and plant-derived soft gel film sheets was 169 MPa and 17.8 MPa, respectively, and both sheets showed a similar adhesion strength of approximately 4.5-10 MPa. Using a D-optimal mixture design, plant-derived soft gel film sheets were prepared and optimized by varying their composition, including variations in the mass of κ-carrageenan, ι-carrageenan, oxidized starch and heat-treated starch. The physicochemical properties of the sheets were evaluated to determine the optimal formulation. Finally, plant-derived soft gel capsules were manufactured using the rotary die method and the prepared soft gel capsules showed equivalent or superior physical properties compared with pre-existing soft gel capsules. Therefore, we successfully developed a new scale-down methodology to optimize the formulation of plant-derived soft gel capsules on a commercial scale. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Scaling and design analyses of a scaled-down, high-temperature test facility for experimental investigation of the initial stages of a VHTR air-ingress accident

    International Nuclear Information System (INIS)

    Arcilesi, David J.; Ham, Tae Kyu; Kim, In Hun; Sun, Xiaodong; Christensen, Richard N.; Oh, Chang H.

    2015-01-01

    Highlights: • A 1/8th geometric-scale test facility that models the VHTR hot plenum is proposed. • Geometric scaling analysis is introduced for VHTR to analyze air-ingress accident. • Design calculations are performed to show that accident phenomenology is preserved. • Some analyses include time scale, hydraulic similarity and power scaling analysis. • Test facility has been constructed and shake-down tests are currently being carried out. - Abstract: A critical event in the safety analysis of the very high-temperature gas-cooled reactor (VHTR) is an air-ingress accident. This accident is initiated, in its worst case scenario, by a double-ended guillotine break of the coaxial cross vessel, which leads to a rapid reactor vessel depressurization. In a VHTR, the reactor vessel is located within a reactor cavity that is filled with air during normal operating conditions. Following the vessel depressurization, the dominant mode of ingress of an air–helium mixture into the reactor vessel will either be molecular diffusion or density-driven stratified flow. The mode of ingress is hypothesized to depend largely on the break conditions of the cross vessel. Since the time scales of these two ingress phenomena differ by orders of magnitude, it is imperative to understand under which conditions each of these mechanisms will dominate in the air ingress process. Computer models have been developed to analyze this type of accident scenario. There are, however, limited experimental data available to understand the phenomenology of the air-ingress accident and to validate these models. Therefore, there is a need to design and construct a scaled-down experimental test facility to simulate the air-ingress accident scenarios and to collect experimental data. The current paper focuses on the analyses performed for the design and operation of a 1/8th geometric scale (by height and diameter), high-temperature test facility. A geometric scaling analysis for the VHTR, a time
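
    The competition between the two ingress mechanisms comes down to their time scales: molecular diffusion scales as L²/D, while a density-driven stratified (lock-exchange) flow advances at roughly half the buoyancy velocity √(g′H). A sketch with illustrative numbers for air entering helium — the diffusivity, densities, and geometry below are assumptions for illustration, not facility values.

```python
import math

def diffusion_time(length_m, diff_coeff_m2_s):
    """Molecular diffusion time scale  tau = L^2 / D."""
    return length_m ** 2 / diff_coeff_m2_s

def gravity_current_time(length_m, height_m, rho_heavy, rho_light, g=9.81):
    """Density-driven (lock-exchange) time scale: front speed approximately
    0.5*sqrt(g' * H), with reduced gravity g' = g*(rho_h - rho_l)/rho_h."""
    g_red = g * (rho_heavy - rho_light) / rho_heavy
    return length_m / (0.5 * math.sqrt(g_red * height_m))

if __name__ == "__main__":
    # air (~1.2 kg/m3) displacing helium (~0.16 kg/m3) over ~2 m:
    print(diffusion_time(2.0, 7e-5))            # tens of thousands of seconds
    print(gravity_current_time(2.0, 1.0, 1.2, 0.16))  # of order one second
```

    The orders-of-magnitude gap between the two estimates is exactly why the break conditions, which select the mechanism, dominate the accident phenomenology and why the test facility must preserve both scales.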

  11. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    Science.gov (United States)

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

    Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows identification of homogeneous groups who correspond to severity levels of food insecurity (FI) and, by extension, discriminant cutoffs able to accurately distinguish these groups. Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample. Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 (n = 116,543 households). Latent class factor analysis (LCFA) models with 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted on the aggregate data and by macroregions. Finally, model-based classifications (latent classes and groupings identified thereafter) were contrasted with the traditionally used classification. Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). For households with children and/or adolescents, cutoffs between raw scores of 1 and 2 (1/2), 5 and 6 (5/6), and 10 and 11 (10/11) emerged consistently in all analyses of the aggregate data. Conclusions: Nationwide findings corroborate previous local evidence that households with an overall score of 1 are more akin to those scoring negative on all items. These results may contribute to guide experts' and policymakers' decisions on the most appropriate EBIA cutoffs. © 2017 American Society for Nutrition.

  12. Thermal stratification in a scaled-down suppression pool of the Fukushima Daiichi nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Byeongnam, E-mail: jo@vis.t.u-tokyo.ac.jp [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan); Erkan, Nejdet [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan); Takahashi, Shinji [Department of Nuclear Engineering and Management, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Song, Daehun [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan); Hyundai and Kia Corporate R&D Division, Hyundai Motors, 772-1, Jangduk-dong, Hwaseong-Si, Gyeonggi-Do 445-706 (Korea, Republic of); Sagawa, Wataru; Okamoto, Koji [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan)

    2016-08-15

    Highlights: • Thermal stratification was reproduced in a scaled-down suppression pool of the Fukushima Daiichi nuclear power plants. • Horizontal temperature profiles were uniform in the toroidal suppression pool. • Subcooling-steam flow rate map of thermal stratification was obtained. • Steam bubble-induced flow model in suppression pool was suggested. • Bubble frequency strongly depends on the steam flow rate. - Abstract: Thermal stratification in the suppression pool of the Fukushima Daiichi nuclear power plants was experimentally investigated in sub-atmospheric pressure conditions using a 1/20 scale torus shaped setup. The thermal stratification was reproduced in the scaled-down suppression pool and the effect of the steam flow rate on different thermal stratification behaviors was examined for a wide range of steam flow rates. A sparger-type steam injection pipe that emulated Fukushima Daiichi Unit 3 (F1U3) was used. The steam was injected horizontally through 132 holes. The development (formation and disappearance) of thermal stratification was significantly affected by the steam flow rate. Interestingly, the thermal stratification in the suppression pool vanished when subcooling became lower than approximately 5 °C. This occurred because steam bubbles are not well condensed at low subcooling temperatures; therefore, those bubbles generate significant upward momentum, leading to mixing of the water in the suppression pool.

  13. 1/3-scale model testing program

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.

    1989-01-01

    This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley Demonstration Project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration, where the axis of the cask was oriented at a 10 degree angle with the horizontal. Slap-down occurs for shallow-angle drops, where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured deceleration, post-test deformation measurements, and the general structural response of the system
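
    Scale-model results like these are mapped back to the full-size cask through replica-scaling similitude: with the same material and the same impact velocity, accelerations scale with the geometric ratio λ = L_model/L_prototype, durations with 1/λ, and forces with 1/λ². A sketch of that mapping — the numeric inputs in the test are invented, not data from this program:

```python
def prototype_from_model(scale, model_accel_g, model_duration_s, model_force_N):
    """Replica-scaling similitude for impact tests (same material, same
    impact velocity): with lambda = L_model/L_prototype, prototype
    accelerations are lambda times the model's, durations 1/lambda times,
    and forces 1/lambda**2 times (stresses are identical)."""
    lam = scale
    return {"accel_g": model_accel_g * lam,
            "duration_s": model_duration_s / lam,
            "force_N": model_force_N / lam ** 2}
```

    So a one-third scale model decelerating at 300 g over 10 ms corresponds to a prototype decelerating at 100 g over 30 ms, which is why sub-scale drops are an economical stand-in for full-scale certification tests.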

  14. The Association between Audit Business Scale Advantage and Audit Quality of Asset Write-downs

    Directory of Open Access Journals (Sweden)

    Ziye Zhao

    2008-06-01

    We contribute to the literature with the following findings. First, auditors' business scale is positively related to the return relevance of write-downs. Second, auditors with ABSA not only enhance the relevance between impairments and economic variables but also weaken the relation between impairments and managerial variables; however, the results appear in only a few of the firm-specific variables. Third, results are mixed when we test the ABSA effect on the price-relevance and persistence dimensions. Fourth, the ABSA effect is stronger when the complexity of asset write-downs requires inside information to comprehend the nature of the action. Adding to the main findings, we also found that the ABSA effect became weaker when we proxied ABSA with raw data on companies' business scale instead of the top five auditors by business scale. Taken together, our results show that the ABSA effect does exist in the auditing of asset write-downs, although the evidence is weak. Our results also indicate rational auditor choice based on quality of service in China's audit market. We identify some unique factors in stakeholders' cooperative structuring actions in China's audit market as potential explanations for this market rationality.

  15. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects......pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary......
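
    The kernel density estimation step can be sketched directly: pool the boundary times indicated by all participants and smooth them with a Gaussian kernel, where the bandwidth sets the segmentation scale (small bandwidths resolve phrase-level boundaries, large ones section-level). This is a generic sketch; the function name and example times are not from the study.

```python
import math

def boundary_density(annotations_s, bandwidth_s, times_s):
    """Gaussian kernel density over pooled segmentation annotations; peaks
    mark time points that many listeners hear as boundaries. Larger
    bandwidths give coarser, higher-level segmentations."""
    norm = 1.0 / (len(annotations_s) * bandwidth_s * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((t - a) / bandwidth_s) ** 2)
                       for a in annotations_s)
            for t in times_s]
```

    Evaluating the same annotation pool at several bandwidths yields the multi-scale segmentation models described above, with one density curve per scale.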

  16. The top-down reflooding model in the Cathare code

    International Nuclear Information System (INIS)

    Bartak, J.; Bestion, D.; Haapalehto, T.

    1993-01-01

    A top-down reflooding model was developed for the French best-estimate thermalhydraulic code CATHARE. The paper presents the current state of development of this model. Based on a literature survey and on compatibility considerations with respect to the existing CATHARE bottom-reflooding package, a falling-film top-down reflooding model was developed and implemented into CATHARE version 1.3E. Following a brief review of previous work, the paper describes the most important features of the model. The model was validated against the WINFRITH single-tube top-down reflooding experiment and the REWET-II simultaneous bottom and top-down reflooding experiment in rod bundle geometry. The results demonstrate the ability of the new package to describe falling-film rewetting phenomena and the main parametric trends, both in a simple analytical experimental setup and in a much more complex rod bundle reflooding experiment. (authors). 9 figs., 28 refs

  17. Development of in situ product removal strategies in biocatalysis applying scaled-down unit operations.

    Science.gov (United States)

    Heintz, Søren; Börner, Tim; Ringborg, Rolf H; Rehn, Gustav; Grey, Carl; Nordblad, Mathias; Krühne, Ulrich; Gernaey, Krist V; Adlercreutz, Patrick; Woodley, John M

    2017-03-01

    An experimental platform based on scaled-down unit operations combined in a plug-and-play manner enables easy and highly flexible testing of advanced biocatalytic process options such as in situ product removal (ISPR) process strategies. In such a platform, it is possible to compartmentalize different process steps while operating them as a combined system, giving the possibility to test and characterize the performance of novel process concepts and biocatalysts with minimal influence of inhibitory products. Here the capabilities of performing process development by applying scaled-down unit operations are highlighted through a case study investigating the asymmetric synthesis of 1-methyl-3-phenylpropylamine (MPPA) using ω-transaminase, an enzyme in the sub-family of amino transferases (ATAs). An on-line HPLC system was applied to avoid manual sample handling and to semi-automatically characterize ω-transaminases in a scaled-down packed-bed reactor (PBR) module, showing MPPA to be a strong inhibitor. To overcome the inhibition, a two-step liquid-liquid extraction (LLE) ISPR concept was tested using scaled-down unit operations combined in a plug-and-play manner. Through the tested ISPR concept, it was possible to continuously feed the main substrate benzylacetone (BA) and extract the main product MPPA throughout the reaction, thereby overcoming the challenges of low substrate solubility and product inhibition. The tested ISPR concept achieved a product concentration of 26.5 g MPPA·L^-1, a purity of up to 70% g MPPA·g tot^-1, and a recovery in the range of 80% mol·mol^-1 of MPPA in 20 h, with the possibility to increase the concentration, purity, and recovery further. Biotechnol. Bioeng. 2017;114:600-609. © 2016 Wiley Periodicals, Inc.

  18. Thermophysical properties of lignocellulose: a cell-scale study down to 41 K.

    Science.gov (United States)

    Cheng, Zhe; Xu, Zaoli; Zhang, Lei; Wang, Xinwei

    2014-01-01

    Thermal energy transport is of great importance in lignocellulose pyrolysis for biofuels. The thermophysical properties of lignocellulose significantly affect the overall properties of bio-composites and the related thermal transport. In this work, cell-scale lignocellulose (mono-layer plant cells) is prepared to characterize its thermal properties from room temperature down to ∼40 K. The thermal conductivities of cell-scale lignocellulose along different directions show slight anisotropy due to the anisotropic cell structure. It is found that with decreasing temperature, the volumetric specific heat of the lignocellulose decreases more slowly than that of microcrystalline cellulose, and its value is always higher than that of microcrystalline cellulose. The thermal conductivity of lignocellulose decreases with temperature from 243 K to 317 K due to increasing phonon-phonon scattering. From 41 K to 243 K, the thermal conductivity rises with temperature, and its change mainly follows the change in heat capacity.

  19. Thermophysical properties of lignocellulose: a cell-scale study down to 41 K.

    Directory of Open Access Journals (Sweden)

    Zhe Cheng

    Full Text Available Thermal energy transport is of great importance in lignocellulose pyrolysis for biofuels. The thermophysical properties of lignocellulose significantly affect the overall properties of bio-composites and the related thermal transport. In this work, cell-scale lignocellulose (mono-layer plant cells) is prepared to characterize its thermal properties from room temperature down to ∼40 K. The thermal conductivities of cell-scale lignocellulose along different directions show slight anisotropy due to the anisotropic cell structure. It is found that with decreasing temperature, the volumetric specific heat of the lignocellulose decreases more slowly than that of microcrystalline cellulose, and its value is always higher than that of microcrystalline cellulose. The thermal conductivity of lignocellulose decreases with temperature from 243 K to 317 K due to increasing phonon-phonon scattering. From 41 K to 243 K, the thermal conductivity rises with temperature, and its change mainly follows the change in heat capacity.

  20. The Demand Side in Economic Models of Energy Markets: The Challenge of Representing Consumer Behavior

    Energy Technology Data Exchange (ETDEWEB)

    Krysiak, Frank C., E-mail: frank.krysiak@unibas.ch; Weigt, Hannes [Department of Business and Economics, University of Basel, Basel (Switzerland)

    2015-05-19

    Energy models play an increasing role in the ongoing energy transition processes either as tools for forecasting potential developments or for assessments of policy and market design options. In recent years, these models have increased in scope and scale and provide a reasonable representation of the energy supply side, technological aspects and general macroeconomic interactions. However, the representation of the demand side and consumer behavior has remained rather simplistic. The objective of this paper is twofold. First, we review existing large-scale energy model approaches, namely bottom-up and top-down models, with respect to their demand-side representation. Second, we identify gaps in existing approaches and draft potential pathways to account for a more detailed demand-side and behavior representation in energy modeling.

  1. The Demand Side in Economic Models of Energy Markets: The Challenge of Representing Consumer Behavior

    Directory of Open Access Journals (Sweden)

    Frank eKrysiak

    2015-05-01

    Full Text Available Energy models play an increasing role in the ongoing energy transition processes either as tools for forecasting potential developments or for assessments of policy and market design options. In recent years these models have increased in scope and scale and provide a reasonable representation of the energy supply side, technological aspects and general macroeconomic interactions. However, the representation of the demand side and consumer behavior has remained rather simplistic. The objective of this paper is twofold. First, we review existing large scale energy model approaches, namely bottom-up and top-down models, with respect to their demand side representation. Second, we identify gaps in existing approaches and draft potential pathways to account for a more detailed demand side and behavior representation in energy modeling.

  2. The Demand Side in Economic Models of Energy Markets: The Challenge of Representing Consumer Behavior

    International Nuclear Information System (INIS)

    Krysiak, Frank C.; Weigt, Hannes

    2015-01-01

    Energy models play an increasing role in the ongoing energy transition processes either as tools for forecasting potential developments or for assessments of policy and market design options. In recent years, these models have increased in scope and scale and provide a reasonable representation of the energy supply side, technological aspects and general macroeconomic interactions. However, the representation of the demand side and consumer behavior has remained rather simplistic. The objective of this paper is twofold. First, we review existing large-scale energy model approaches, namely bottom-up and top-down models, with respect to their demand-side representation. Second, we identify gaps in existing approaches and draft potential pathways to account for a more detailed demand-side and behavior representation in energy modeling.

  3. Green fluorescent protein (GFP) leakage from microbial biosensors provides useful information for the evaluation of the scale-down effect

    DEFF Research Database (Denmark)

    Delvigne, Frank; Brognaux, Alison; Francis, Frédéric

    2011-01-01

    Mixing deficiencies can potentially be detected by the use of a dedicated whole-cell microbial biosensor. In this work, a csiE promoter induced under carbon-limited conditions was used in the construction of such a biosensor. The csiE biosensor exhibited an interesting response after up- and down-shifts of the dilution rate in chemostat mode. Glucose limitation was accompanied by green fluorescent protein (GFP) leakage to the extracellular medium. In order to test the responsiveness of microbial biosensors to substrate fluctuations at large scale, a scale-down reactor (SDR) experiment was performed. The glucose fluctuations were characterized at the single-cell level and tend to decrease the induction of GFP. Simulations run on the basis of a stochastic hydrodynamic model have shown the variability and the frequencies at which biosensors are exposed to glucose gradients in the SDR. GFP leakage was observed to a great ...
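The stochastic exposure of circulating cells to a glucose-rich feed zone in an SDR can be illustrated with a minimal two-state Markov sketch. This is not the authors' stochastic hydrodynamic model; the transition probabilities, step count, and seed are arbitrary assumptions chosen only to show how exposure fraction and exposure frequency arise from such a simulation.

```python
import random

def simulate_exposure(p_enter=0.05, p_leave=0.5, steps=10000, seed=1):
    """Two-state Markov sketch of a cell circulating between the bulk
    (glucose-limited) and the feed zone (glucose-rich) of a scale-down
    reactor. Returns (fraction of time in feed zone, number of entries)."""
    rng = random.Random(seed)
    in_feed = False
    time_in_feed = 0
    entries = 0
    for _ in range(steps):
        if in_feed:
            if rng.random() < p_leave:
                in_feed = False
        else:
            if rng.random() < p_enter:
                in_feed = True
                entries += 1
        if in_feed:
            time_in_feed += 1
    return time_in_feed / steps, entries

frac, entries = simulate_exposure()
# Stationary fraction is roughly p_enter / (p_enter + p_leave), about 0.09 here
```

The entry count plays the role of the exposure frequency that the record says drives biosensor induction.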

  4. Microfabricated modular scale-down device for regenerative medicine process development.

    Directory of Open Access Journals (Sweden)

    Marcel Reichen

    Full Text Available The capacity of milli- and microlitre bioreactors to accelerate process development has been successfully demonstrated in traditional biotechnology. However, for regenerative medicine, present smaller-scale culture methods cannot cope with the wide range of processing variables that need to be evaluated. Existing microfabricated culture devices, which could test different culture variables with a minimum amount of resources (e.g. expensive culture medium), are typically not designed with process development in mind. We present a novel, autoclavable, microfabricated scale-down device designed for regenerative medicine process development. The microfabricated device contains a re-sealable culture chamber that facilitates use of standard culture protocols, creating a link with traditional small-scale culture devices for validation and scale-up studies. Further, the modular design can easily accommodate investigation of different culture substrate/extra-cellular matrix combinations. Inactivated mouse embryonic fibroblast (iMEF) and human embryonic stem cell (hESC) colonies were successfully seeded on gelatine-coated tissue culture polystyrene (TC-PS) using standard static seeding protocols. The microfluidic chip included in the device offers precise and accurate control over the culture medium flow rate and the resulting shear stresses in the device. Cells were cultured for two days with media perfused at 300 µl·h^-1, resulting in a modelled shear stress of 1.1×10^-4 Pa. Following perfusion, hESC colonies stained positively for different pluripotency markers and retained an undifferentiated morphology. An image processing algorithm was developed which permits quantification of co-cultured colony-forming cells from phase contrast microscope images. hESC colony sizes were quantified against the background of the feeder cells (iMEF) in less than 45 seconds for high-resolution images, which will permit real-time monitoring of culture progress in future ...
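The modelled shear stress quoted in this record is of the magnitude given by fully developed laminar flow between parallel plates, τ = 6µQ/(wh²). A minimal sketch follows; the chamber width and height are hypothetical values chosen to reproduce the reported order of magnitude, not the paper's actual geometry.

```python
def wall_shear_stress(q_m3_s, width_m, height_m, mu_pa_s=1e-3):
    """Wall shear stress for fully developed laminar flow between parallel
    plates (chamber width >> height): tau = 6 * mu * Q / (w * h^2)."""
    return 6.0 * mu_pa_s * q_m3_s / (width_m * height_m ** 2)

# 300 microlitre/h perfusion, water-like viscosity (1 mPa*s)
q = 300e-9 / 3600.0                                  # m^3/s
tau = wall_shear_stress(q, width_m=10e-3, height_m=0.67e-3)
# tau comes out of order 1e-4 Pa, the magnitude reported in the abstract
```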

  5. Direct Down-scale Experiments of Concentration Column Designs for SHINE Process

    Energy Technology Data Exchange (ETDEWEB)

    Youker, Amanda J. [Argonne National Lab. (ANL), Argonne, IL (United States); Stepinski, Dominique C. [Argonne National Lab. (ANL), Argonne, IL (United States); Vandegrift, George F. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-05-01

    Argonne is assisting SHINE Medical Technologies in their efforts to become a domestic Mo-99 producer. The SHINE accelerator-driven process uses a uranyl-sulfate target solution for the production of fission-product Mo-99. Argonne has developed a molybdenum recovery and purification process for this target solution. The process includes an initial Mo recovery column followed by a concentration column to reduce the product volume from 15-25 L to < 1 L prior to entry into the LEU Modified Cintichem (LMC) process for purification [1]. This report discusses direct down-scale experiments of the plant-scale concentration column design, in which the effects of loading velocity and temperature were investigated.
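Direct down-scaling of a chromatography column is commonly done at constant superficial (linear) velocity, keeping bed height and scaling volumetric flow with the cross-sectional area. A minimal sketch with hypothetical column dimensions (the report's actual column sizes are not given here):

```python
import math

def scale_down_flow(plant_flow_ml_min, plant_diameter_cm, model_diameter_cm):
    """Down-scaling at constant superficial velocity: volumetric flow
    scales with column cross-sectional area; bed height is kept."""
    return plant_flow_ml_min * (model_diameter_cm / plant_diameter_cm) ** 2

def linear_velocity_cm_min(flow_ml_min, diameter_cm):
    """Superficial velocity = volumetric flow / cross-sectional area."""
    area = math.pi * (diameter_cm / 2.0) ** 2
    return flow_ml_min / area

# Hypothetical plant column: 5 cm diameter at 100 mL/min
plant_v = linear_velocity_cm_min(100.0, 5.0)
model_flow = scale_down_flow(100.0, 5.0, 1.0)    # 1 cm lab column
model_v = linear_velocity_cm_min(model_flow, 1.0)
# Same linear velocity, 25x smaller volumetric flow
```

Keeping the linear velocity constant is what makes loading-velocity effects measured at lab scale transferable to the plant-scale design.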

  6. Reconciling Basin-Scale Top-Down and Bottom-Up Methane Emission Measurements for Onshore Oil and Gas Development: Cooperative Research and Development Final Report, CRADA Number CRD-14-572

    Energy Technology Data Exchange (ETDEWEB)

    Heath, Garvin A. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-12-04

    The overall objective of the Research Partnership to Secure Energy for America (RPSEA)-funded research project is to develop independent estimates of methane emissions using top-down and bottom-up measurement approaches and then to compare the estimates, including consideration of uncertainty. Such approaches will be applied at two scales: basin and facility. At facility scale, multiple methods will be used to measure methane emissions of the whole facility (controlled dual tracer and single tracer releases, aircraft-based mass balance and Gaussian back-trajectory), which are considered top-down approaches. The bottom-up approach will sum emissions from identified point sources measured using appropriate source-level measurement techniques (e.g., high-flow meters). At basin scale, the top-down estimate will come from boundary layer airborne measurements upwind and downwind of the basin, using a regional mass balance model plus approaches to separate atmospheric methane emissions attributed to the oil and gas sector. The bottom-up estimate will result from statistical modeling (also known as scaling up) of measurements made at selected facilities, with gaps filled through measurements and other estimates based on other studies. The relative comparison of the bottom-up and top-down estimates made at both scales will help improve understanding of the accuracy of the tested measurement and modeling approaches. The subject of this CRADA is NREL's contribution to the overall project. This project resulted from winning a competitive solicitation no. RPSEA RFP2012UN001, proposal no. 12122-95, which is the basis for the overall project. This Joint Work Statement (JWS) details the contributions of NREL and Colorado School of Mines (CSM) in performance of the CRADA effort.
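The top-down basin estimate from boundary-layer aircraft measurements rests on a mass-balance integral of the downwind-minus-upwind concentration enhancement over a flight curtain. A simplified numerical sketch with illustrative values (grid, enhancement, wind, and molar density are assumptions, not project data):

```python
import numpy as np

def basin_emission_rate(c_down_ppb, c_up_ppb, wind_m_s, dz_m, dy_m,
                        air_density_mol_m3=40.0):
    """Top-down mass-balance sketch: emission = sum over the downwind
    flight-curtain cells of (downwind - upwind mole fraction) * wind
    speed * molar air density * cell area. Returns mol CH4 per second."""
    dc = (np.asarray(c_down_ppb) - np.asarray(c_up_ppb)) * 1e-9  # mole fraction
    flux = dc * wind_m_s * air_density_mol_m3                    # mol/(m^2 s)
    return float(flux.sum() * dz_m * dy_m)

# Hypothetical 10 x 20 curtain of 100 m x 1000 m cells, a uniform
# 30 ppb enhancement over background, and 5 m/s wind
c_up = np.full((10, 20), 1900.0)   # background, ppb
c_down = c_up + 30.0
emission = basin_emission_rate(c_down, c_up, wind_m_s=5.0,
                               dz_m=100.0, dy_m=1000.0)
```

Attributing the resulting total to the oil and gas sector (rather than other methane sources) is the separate step the project description mentions.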

  7. Cavitation on a scaled-down model of a Francis turbine guide vane: high-speed imaging and PIV measurements

    Science.gov (United States)

    Pervunin, K. S.; Timoshevskiy, M. V.; Churkin, S. A.; Kravtsova, A. Yu; Markovich, D. M.; Hanjalić, K.

    2015-12-01

    Cavitation on two symmetric foils, a NACA0015 hydrofoil and a scaled-down model of a Francis turbine guide vane (GV), was investigated by high-speed visualization and PIV. At small attack angles the differences between the profiles of the mean and fluctuating velocities for the two hydrofoils were shown to be insignificant. However, at the higher angle of incidence, flow separation from the GV surface was discovered for quasi-steady regimes, including the cavitation-free and cavitation-inception cases. The flow separation leads to the appearance of a second maximum in the velocity-fluctuation distributions downstream, far from the GV surface. When the transition to unsteady regimes occurred, the velocity distributions became quite similar for both foils. Additionally, for the GV an unsteady regime characterized by asymmetric spanwise variations of the sheet-cavity length, along with alternating periodic detachments of clouds between the sidewalls of the test channel, was visualized for the first time. This asymmetric behaviour is very likely governed by the cross instability recently described by Decaix and Goncalvès [8]. Moreover, it was concluded that the existence of the cross instability is independent of the test body shape and its aspect ratio.

  8. Large-scale Modeling of Nitrous Oxide Production: Issues of Representing Spatial Heterogeneity

    Science.gov (United States)

    Morris, C. K.; Knighton, J.

    2017-12-01

    Nitrous oxide is produced by the biological processes of nitrification and denitrification in terrestrial environments and contributes to the greenhouse effect that warms Earth's climate. Large-scale modeling can be used to determine how global rates of nitrous oxide production and consumption will shift under future climates. However, accurate modeling of nitrification and denitrification is made difficult by highly parameterized, nonlinear equations. Here we show that the representation of spatial heterogeneity in inputs, specifically soil moisture, causes inaccuracies in estimating the average nitrous oxide production in soils. We demonstrate that when soil moisture is averaged over a spatially heterogeneous surface, net nitrous oxide production is underpredicted. We apply this general result in a test of a widely used global land surface model, the Community Land Model v4.5 (CLM). The challenges presented by nonlinear controls on nitrous oxide are highlighted here to provide a wider context for the problem of extraordinary denitrification losses in CLM. We hope that these findings will inform future researchers on the possibilities for model improvement of the global nitrogen cycle.
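The under-prediction from averaging soil moisture is an instance of Jensen's inequality: for a convex response, the response of the mean input is less than the mean of the responses. A minimal numerical illustration with a hypothetical convex response function (not the CLM4.5 parameterization):

```python
import numpy as np

def n2o_response(soil_moisture):
    """Illustrative convex response of N2O production to soil moisture:
    production ramps up sharply as pores approach saturation and
    denitrification dominates. Units are arbitrary."""
    return np.maximum(0.0, soil_moisture - 0.6) ** 2

# Spatially heterogeneous moisture across a model grid cell
moisture = np.array([0.3, 0.5, 0.7, 0.9])

mean_of_f = n2o_response(moisture).mean()   # resolve heterogeneity, then average
f_of_mean = n2o_response(moisture.mean())   # average first (coarse-grid behaviour)

# For a convex response, averaging the input first under-predicts production
```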

  9. Seventh meeting of the Global Alliance to Eliminate Lymphatic Filariasis: reaching the vision by scaling up, scaling down, and reaching out

    Science.gov (United States)

    2014-01-01

    This report summarizes the 7th meeting of the Global Alliance to Eliminate Lymphatic Filariasis (GAELF), Washington DC, November 18–19, 2012. The theme, “A Future Free of Lymphatic Filariasis: Reaching the Vision by Scaling Up, Scaling Down and Reaching Out”, emphasized new strategies and partnerships necessary to reach the 2020 goal of elimination of lymphatic filariasis (LF) as a public-health problem. PMID:24450283

  10. Climatic forecast: down-scaling and extremes

    International Nuclear Information System (INIS)

    Deque, M.; Li, L.

    2007-01-01

    There is a strong demand for specifying the future climate at local scale and for characterizing extreme events. New methods, allowing finer-scale information to be extracted from climate model output, are currently being developed, and the French laboratories involved in the Escrime project are actively participating. (authors)

  11. Representing biophysical landscape interactions in soil models by bridging disciplines and scales.

    Science.gov (United States)

    van der Ploeg, M. J.; Carranza, C.; Teixeira da Silva, R.; te Brake, B.; Baartman, J.; Robinson, D.

    2017-12-01

    The combination of climate change, population growth and soil threats including carbon loss, biodiversity decline and erosion, increasingly confront the global community (Schwilch et al., 2016). One major challenge in studying processes involved in soil threats, landscape resilience, ecosystem stability, sustainable land management and resulting economic consequences, is that it is an interdisciplinary field (Pelletier et al., 2012). Less stringent scientific disciplinary boundaries are therefore important (Liu et al., 2007), because as a result of disciplinary focus, ambiguity may arise on the understanding of landscape interactions. This is especially true in the interaction between a landscape's physical and biological processes (van der Ploeg et al. 2012). Biophysical landscape interactions are those biotic and abiotic processes in a landscape that have an influence on the developments within and evolution of a landscape. An important aspect in biophysical landscape interactions is the differences in scale related to the various processes that play a role in these systems. Moreover, the interplay between the physical landscape and the occurring vegetation, which often co-evolve, and the resulting heterogeneity and emerging patterns are the reason why it is so challenging to establish a theoretical basis to describe biophysical processes in landscapes (e.g. te Brake et al. 2013, Robinson et al. 2016). Another complicating factor is the response of vegetation to changing environmental conditions, including a possible, and often unknown, time-lag (e.g. Metzger et al., 2009). An integrative description for modelling biophysical interactions has been a long standing goal in soil science (Vereecken et al., 2016). We need the development of soil models that are more focused on networks, connectivity and feedbacks incorporating the most important aspects of our detailed mechanistic modelling (Paola & Leeder, 2011). Additionally, remote sensing measurement techniques

  12. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show this, we adapted to the visual domain an auditory paradigm developed by Sussman, Ritter, and Vaughan (1998, NeuroReport, 9, 4167-4170) and Sussman and Gumenyuk (2005, NeuroReport, 16, 1519-1523), presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparison of the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the human visual sensory system, revealed that the visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition, supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. © 2010 Elsevier B.V. All rights reserved.
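The two stimulus conditions can be sketched as follows; the block and stimulus counts are arbitrary. The point of the design is that the deviant probability is matched across conditions while only the sequential regularity differs.

```python
import random

def fixed_sequence(n_blocks):
    """SSSSD repeated: the deviant is always fifth, creating the
    large-scale sequential regularity."""
    return "SSSSD" * n_blocks

def randomized_sequence(n_stimuli, p_deviant=0.2, seed=0):
    """Deviants at the same 20% rate but at unpredictable positions."""
    rng = random.Random(seed)
    return "".join("D" if rng.random() < p_deviant else "S"
                   for _ in range(n_stimuli))

fixed = fixed_sequence(40)        # 200 stimuli, exactly 20% deviants
rand = randomized_sequence(200)   # 200 stimuli, ~20% deviants

# Both conditions match in deviant probability; only the regularity
# differs, which is what reduces the visual MMN in the fixed condition.
```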

  13. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current state and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved gradually, level by level, downwards. While the relationships of connectors and mapping constraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support the practical operation of virtual human top-down rapid modeling. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric and texture modeling.

  14. PSI-BOIL, a building block towards the multi-scale modeling of flow boiling phenomena

    International Nuclear Information System (INIS)

    Niceno, Bojan; Andreani, Michele; Prasser, Horst-Michael

    2008-01-01

    Full text of publication follows: In this work we report the current status of the Swiss project Multi-scale Modeling Analysis (MSMA), jointly financed by PSI and Swissnuclear. The project aims at addressing the multi-scale (down to nano-scale) modelling of convective boiling phenomena and the development of physically-based closure laws for the physical scales appropriate to the problem considered, to be used within Computational Fluid Dynamics (CFD) codes. The final goal is to construct a new computational tool, called the Parallel Simulator of Boiling phenomena (PSI-BOIL), for the direct simulation of processes all the way down to the small scales of interest, and an improved CFD code for the mechanistic prediction of two-phase flow and heat transfer in the fuel rod bundle of a nuclear reactor. An improved understanding of the physics of boiling will be gained from the theoretical work as well as from novel small- and medium-scale experiments targeted to assist the development of closure laws. PSI-BOIL is a computer program designed for efficient simulation of turbulent fluid flow and heat transfer phenomena in simple geometries. Turbulence is simulated directly (DNS), and solver efficiency plays a vital role in a successful simulation. With high performance as one of the main prerequisites, PSI-BOIL is tailored to be as efficient a tool as possible, relying on well-established numerical techniques and sacrificing all features which are not essential for the success of this project and which might slow down the solution procedure. The governing equations are discretized in space with an orthogonal staggered finite-volume method. Time discretization is performed with the projection method, the most obvious and most widely used choice for DNS. Systems of linearized equations, stemming from the discretization of the governing equations, are solved with the Additive Correction Multigrid (ACM) method. Two distinguishing features of PSI-BOIL are the possibility to ...

  15. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine the combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.
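For a geometrically scaled acoustic model, characteristic frequencies shift inversely with the scale factor (assuming similar jet and sound speeds), so spectra measured on a 5% model map down to full scale. A minimal sketch; the measured model frequency is hypothetical:

```python
def full_scale_frequency(model_freq_hz, scale_factor):
    """Acoustic similitude sketch: for a geometrically scaled model,
    characteristic frequencies scale with the geometric scale factor,
    so a 5% model's spectrum sits ~20x higher than full scale."""
    return model_freq_hz * scale_factor

# Hypothetical 2 kHz spectral peak measured on the 5% SMAT model
f_model = 2000.0
f_full = full_scale_frequency(f_model, 0.05)   # maps to ~100 Hz at full scale
```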

  16. Micro-Scale Thermoacoustics

    Science.gov (United States)

    Offner, Avshalom; Ramon, Guy Z.

    2016-11-01

    Thermoacoustic phenomena - the conversion of heat to acoustic oscillations - may be harnessed for the construction of reliable, practically maintenance-free engines and heat pumps. Specifically, miniaturization of thermoacoustic devices holds great promise for cooling of micro-electronic components. However, as device size is pushed down to the micrometer scale, it is expected that non-negligible slip effects will exist at the solid-fluid interface. Accordingly, new theoretical models for thermoacoustic engines and heat pumps were derived, accounting for a slip boundary condition. These models are essential for the design of micro-scale thermoacoustic devices that will operate at ultrasonic frequencies. Stability curves for engines - representing the onset of self-sustained oscillations - were calculated with both no-slip and slip boundary conditions, revealing improved performance of engines with slip in the resonance frequency range applicable to micro-scale devices. Curves of the maximum achievable temperature difference for thermoacoustic heat pumps were calculated, revealing the negative effect of slip on the ability to pump heat up a temperature gradient. The authors acknowledge the support from the Nancy and Stephen Grand Technion Energy Program (GTEP).

  17. Coalescing colony model: Mean-field, scaling, and geometry

    Science.gov (United States)

    Carra, Giulia; Mallick, Kirone; Barthelemy, Marc

    2017-12-01

    We analyze the coalescing model where a `primary' colony grows and randomly emits secondary colonies that spread and eventually coalesce with it. This model describes population proliferation in theoretical ecology and tumor growth, and is also of great interest for modeling urban sprawl. Assuming the primary colony to be always circular with radius r(t) and the emission rate proportional to r(t)^θ, where θ > 0, we derive the mean-field equations governing the dynamics of the primary colony, calculate the scaling exponents versus θ, and compare our results with numerical simulations. We then critically test the validity of the circular approximation for the colony shape and show that it is sound for a constant emission rate (θ = 0). However, when the emission rate is proportional to the perimeter, the circular approximation breaks down and the roughness of the primary colony cannot be discarded, thus modifying the scaling exponents.
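The emission process with rate proportional to r(t)^θ can be sketched as an inhomogeneous Poisson process. The linear growth law, parameters, and time discretization below are illustrative assumptions; the paper's mean-field derivation is not reproduced here.

```python
import random

def emission_times(theta, growth_rate=1.0, t_max=10.0, dt=1e-3, seed=2):
    """Inhomogeneous Poisson sketch of secondary-colony emission with
    rate proportional to r(t)**theta, taking r(t) = growth_rate * t
    (illustrative). Returns the emission times."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while t < t_max:
        r = growth_rate * t
        if rng.random() < (r ** theta) * dt:   # emission probability in [t, t+dt)
            times.append(t)
        t += dt
    return times

# A larger theta concentrates emissions at late times, when the colony is big
early = emission_times(theta=0.0)   # constant rate
late = emission_times(theta=2.0)    # rate grows as r(t)^2
```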

  18. Prenatal treatment prevents learning deficit in Down syndrome model.

    Science.gov (United States)

    Incerti, Maddalena; Horowitz, Kari; Roberson, Robin; Abebe, Daniel; Toso, Laura; Caballero, Madeline; Spong, Catherine Y

    2012-01-01

    Down syndrome is the most common genetic cause of mental retardation. Active fragments of neurotrophic factors release by astrocyte under the stimulation of vasoactive intestinal peptide, NAPVSIPQ (NAP) and SALLRSIPA (SAL) respectively, have shown therapeutic potential for developmental delay and learning deficits. Previous work demonstrated that NAP+SAL prevent developmental delay and glial deficit in Ts65Dn that is a well-characterized mouse model for Down syndrome. The objective of this study is to evaluate if prenatal treatment with these peptides prevents the learning deficit in the Ts65Dn mice. Pregnant Ts65Dn female and control pregnant females were randomly treated (intraperitoneal injection) on pregnancy days 8 through 12 with saline (placebo) or peptides (NAP 20 µg +SAL 20 µg) daily. Learning was assessed in the offspring (8-10 months) using the Morris Watermaze, which measures the latency to find the hidden platform (decrease in latency denotes learning). The investigators were blinded to the prenatal treatment and genotype. Pups were genotyped as trisomic (Down syndrome) or euploid (control) after completion of all tests. two-way ANOVA followed by Neuman-Keuls test for multiple comparisons, PDown syndrome-placebo; n = 11) did not demonstrate learning over the five day period. DS mice that were prenatally exposed to peptides (Down syndrome-peptides; n = 10) learned significantly better than Down syndrome-placebo (ptreatment with the neuroprotective peptides (NAP+SAL) prevented learning deficits in a Down syndrome model. These findings highlight a possibility for the prevention of sequelae in Down syndrome and suggest a potential pregnancy intervention that may improve outcome.

  19. Top-down or bottom-up? Assessing crevassing directions on surging glaciers and developments for physically testing glacier crevassing models.

    Science.gov (United States)

    Rea, B.; Evans, D. J. A.; Benn, D. I.; Brennan, A. J.

    2012-04-01

    preserved. An alternative approach is provided by geotechnical centrifuge modelling. By testing scaled models in an enhanced 'gravity' field, real-world (prototype) stress conditions can be reproduced, which is crucial for problems governed by self-weight stresses, of which glacier crevassing is one. Scaling relationships have been established for the stress intensity factors K_I, which are key to determining crevasse penetration, such that K_Ip = √N K_Im (p = prototype, m = model, N = centrifuge g-level). The operating specifications of the University of Dundee geotechnical centrifuge (100g) will allow the testing of scaled models equivalent to prototype glaciers of 50 m thickness, in order to provide a physical test of the LEFM top-down crevassing model.
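The quoted centrifuge scaling relationships can be applied directly: stress intensity factors scale with √N and lengths scale with N, where N is the g-level. The model measurement below is hypothetical:

```python
import math

def prototype_stress_intensity(k_model, n_gravities):
    """Geotechnical centrifuge scaling for crevassing:
    K_I(prototype) = sqrt(N) * K_I(model), with N the g-level."""
    return math.sqrt(n_gravities) * k_model

# Hypothetical model measurement at 100 g
k_model = 0.05                                            # MPa*m^0.5, illustrative
k_proto = prototype_stress_intensity(k_model, 100.0)      # scales by sqrt(100) = 10

# Lengths scale with N: a 0.5 m model ice body at 100 g represents
# the 50 m prototype glacier thickness quoted in the abstract
model_thickness_m = 0.5
prototype_thickness_m = 100.0 * model_thickness_m
```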

  20. Innovation diffusion equations on correlated scale-free networks

    Energy Technology Data Exchange (ETDEWEB)

    Bertotti, M.L., E-mail: marialetizia.bertotti@unibz.it [Free University of Bozen–Bolzano, Faculty of Science and Technology, Bolzano (Italy); Brunner, J., E-mail: johannes.brunner@tis.bz.it [TIS Innovation Park, Bolzano (Italy); Modanese, G., E-mail: giovanni.modanese@unibz.it [Free University of Bozen–Bolzano, Faculty of Science and Technology, Bolzano (Italy)

    2016-07-29

    Highlights: • The Bass diffusion model can be formulated on scale-free networks. • In the trickle-down version, the hubs adopt earlier and act as monitors. • We improve the equations in order to describe trickle-up diffusion. • Innovation is generated at the network periphery, and hubs can act as stiflers. • We compare diffusion times, in dependence on the scale-free exponent. - Abstract: We introduce a heterogeneous network structure into the Bass diffusion model, in order to study the diffusion times of innovation or information in networks with a scale-free structure, typical of regions where diffusion is sensitive to geographic and logistic influences (like for instance Alpine regions). We consider both the diffusion peak times of the total population and of the link classes. In the familiar trickle-down processes the adoption curve of the hubs is found to anticipate the total adoption in a predictable way. In a major departure from the standard model, we model a trickle-up process by introducing heterogeneous publicity coefficients (which can also be negative for the hubs, thus turning them into stiflers) and a stochastic term which represents the erratic generation of innovation at the periphery of the network. The results confirm the robustness of the Bass model and expand considerably its range of applicability.
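As a rough illustration of the model class described above (not the authors' exact equations, which include degree correlations and heterogeneous publicity coefficients), a mean-field Bass model over the degree classes of an uncorrelated scale-free network can be sketched as:

```python
import numpy as np

def bass_on_network(degrees, Pk, p, q, T=15.0, dt=0.01):
    """Mean-field Bass dynamics on an uncorrelated random network.
    Each degree class k has an adoption fraction F_k driven by publicity p
    and imitation q through neighbours:
        dF_k/dt = (1 - F_k) * (p + q * k * theta),
        theta   = sum_k k P(k) F_k / <k>   (prob. a random edge hits an adopter).
    """
    k = np.asarray(degrees, float)
    Pk = np.asarray(Pk, float) / np.sum(Pk)
    p = np.broadcast_to(np.asarray(p, float), k.shape).copy()  # allows per-class p
    kmean = np.sum(k * Pk)
    F = np.zeros_like(k)
    for _ in range(int(T / dt)):
        theta = np.sum(k * Pk * F) / kmean
        F = F + dt * (1.0 - F) * (p + q * k * theta)
    return F

# Scale-free-like degree distribution P(k) ~ k^-3 over k = 1..50:
ks = np.arange(1, 51)
F = bass_on_network(ks, ks ** -3.0, p=0.03, q=0.4)
# Hubs (large k) sit at higher adoption fractions at any fixed time,
# i.e. they adopt earlier, consistent with the trickle-down picture.
```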

  1. Innovation diffusion equations on correlated scale-free networks

    International Nuclear Information System (INIS)

    Bertotti, M.L.; Brunner, J.; Modanese, G.

    2016-01-01

    Highlights: • The Bass diffusion model can be formulated on scale-free networks. • In the trickle-down version, the hubs adopt earlier and act as monitors. • We improve the equations in order to describe trickle-up diffusion. • Innovation is generated at the network periphery, and hubs can act as stiflers. • We compare diffusion times, in dependence on the scale-free exponent. - Abstract: We introduce a heterogeneous network structure into the Bass diffusion model, in order to study the diffusion times of innovation or information in networks with a scale-free structure, typical of regions where diffusion is sensitive to geographic and logistic influences (like for instance Alpine regions). We consider both the diffusion peak times of the total population and of the link classes. In the familiar trickle-down processes the adoption curve of the hubs is found to anticipate the total adoption in a predictable way. In a major departure from the standard model, we model a trickle-up process by introducing heterogeneous publicity coefficients (which can also be negative for the hubs, thus turning them into stiflers) and a stochastic term which represents the erratic generation of innovation at the periphery of the network. The results confirm the robustness of the Bass model and expand considerably its range of applicability.

  2. Entanglement entropy in top-down models

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Peter A.R.; Taylor, Marika [Mathematical Sciences and STAG Research Centre, University of Southampton,Highfield, Southampton, SO17 1BJ (United Kingdom)

    2016-08-26

    We explore holographic entanglement entropy in ten-dimensional supergravity solutions. It has been proposed that entanglement entropy can be computed in such top-down models using minimal surfaces which asymptotically wrap the compact part of the geometry. We show explicitly in a wide range of examples that the holographic entanglement entropy thus computed agrees with the entanglement entropy computed using the Ryu-Takayanagi formula from the lower-dimensional Einstein metric obtained from reduction over the compact space. Our examples include not only consistent truncations but also cases in which no consistent truncation exists and Kaluza-Klein holography is used to identify the lower-dimensional Einstein metric. We then give a general proof, based on the Lewkowycz-Maldacena approach, of the top-down entanglement entropy formula.

  3. Entanglement entropy in top-down models

    International Nuclear Information System (INIS)

    Jones, Peter A.R.; Taylor, Marika

    2016-01-01

    We explore holographic entanglement entropy in ten-dimensional supergravity solutions. It has been proposed that entanglement entropy can be computed in such top-down models using minimal surfaces which asymptotically wrap the compact part of the geometry. We show explicitly in a wide range of examples that the holographic entanglement entropy thus computed agrees with the entanglement entropy computed using the Ryu-Takayanagi formula from the lower-dimensional Einstein metric obtained from reduction over the compact space. Our examples include not only consistent truncations but also cases in which no consistent truncation exists and Kaluza-Klein holography is used to identify the lower-dimensional Einstein metric. We then give a general proof, based on the Lewkowycz-Maldacena approach, of the top-down entanglement entropy formula.

  4. The Behavioral and Psychological Symptoms of Dementia in Down Syndrome (BPSD-DS) Scale: Comprehensive Assessment of Psychopathology in Down Syndrome

    Science.gov (United States)

    Dekker, Alain D.; Sacco, Silvia; Carfi, Angelo; Benejam, Bessy; Vermeiren, Yannick; Beugelsdijk, Gonny; Schippers, Mieke; Hassefras, Lyanne; Eleveld, José; Grefelman, Sharina; Fopma, Roelie; Bomer-Veenboer, Monique; Boti, Mariángeles; Oosterling, G. Danielle E.; Scholten, Esther; Tollenaere, Marleen; Checkley, Laura; Strydom, André; Van Goethem, Gert; Onder, Graziano; Blesa, Rafael; zu Eulenburg, Christine; Coppus, Antonia M.W.; Rebillat, Anne-Sophie; Fortea, Juan; De Deyn, Peter P.

    2018-01-01

    People with Down syndrome (DS) are prone to develop Alzheimer’s disease (AD). Behavioral and psychological symptoms of dementia (BPSD) are core features, but have not been comprehensively evaluated in DS. In a European multidisciplinary study, the novel Behavioral and Psychological Symptoms of Dementia in Down Syndrome (BPSD-DS) scale was developed to identify frequency and severity of behavioral changes taking account of life-long characteristic behavior. 83 behavioral items in 12 clinically defined sections were evaluated. The central aim was to identify items that change in relation to the dementia status, and thus may differentiate between diagnostic groups. Structured interviews were conducted with informants of persons with DS without dementia (DS, n = 149), with questionable dementia (DS+Q, n = 65), and with diagnosed dementia (DS+AD, n = 67). First exploratory data suggest promising interrater, test-retest, and internal consistency reliability measures. Concerning item relevance, group comparisons revealed pronounced increases in frequency and severity in items of anxiety, sleep disturbances, agitation & stereotypical behavior, aggression, apathy, depressive symptoms, and eating/drinking behavior. The proportion of individuals presenting an increase was highest in DS+AD, intermediate in DS+Q, and lowest in DS. Interestingly, among DS+Q individuals, a substantial proportion already presented increased anxiety, sleep disturbances, apathy, and depressive symptoms, suggesting that these changes occur early in the course of AD. Future efforts should optimize the scale based on current results and clinical experiences, and further study applicability, reliability, and validity. Future application of the scale in daily care may aid caregivers to understand changes, and contribute to timely interventions and adaptation of caregiving. PMID:29689719

  5. The Behavioral and Psychological Symptoms of Dementia in Down Syndrome (BPSD-DS) Scale: Comprehensive Assessment of Psychopathology in Down Syndrome.

    Science.gov (United States)

    Dekker, Alain D; Sacco, Silvia; Carfi, Angelo; Benejam, Bessy; Vermeiren, Yannick; Beugelsdijk, Gonny; Schippers, Mieke; Hassefras, Lyanne; Eleveld, José; Grefelman, Sharina; Fopma, Roelie; Bomer-Veenboer, Monique; Boti, Mariángeles; Oosterling, G Danielle E; Scholten, Esther; Tollenaere, Marleen; Checkley, Laura; Strydom, André; Van Goethem, Gert; Onder, Graziano; Blesa, Rafael; Zu Eulenburg, Christine; Coppus, Antonia M W; Rebillat, Anne-Sophie; Fortea, Juan; De Deyn, Peter P

    2018-01-01

    People with Down syndrome (DS) are prone to develop Alzheimer's disease (AD). Behavioral and psychological symptoms of dementia (BPSD) are core features, but have not been comprehensively evaluated in DS. In a European multidisciplinary study, the novel Behavioral and Psychological Symptoms of Dementia in Down Syndrome (BPSD-DS) scale was developed to identify frequency and severity of behavioral changes taking account of life-long characteristic behavior. 83 behavioral items in 12 clinically defined sections were evaluated. The central aim was to identify items that change in relation to the dementia status, and thus may differentiate between diagnostic groups. Structured interviews were conducted with informants of persons with DS without dementia (DS, n = 149), with questionable dementia (DS+Q, n = 65), and with diagnosed dementia (DS+AD, n = 67). First exploratory data suggest promising interrater, test-retest, and internal consistency reliability measures. Concerning item relevance, group comparisons revealed pronounced increases in frequency and severity in items of anxiety, sleep disturbances, agitation & stereotypical behavior, aggression, apathy, depressive symptoms, and eating/drinking behavior. The proportion of individuals presenting an increase was highest in DS+AD, intermediate in DS+Q, and lowest in DS. Interestingly, among DS+Q individuals, a substantial proportion already presented increased anxiety, sleep disturbances, apathy, and depressive symptoms, suggesting that these changes occur early in the course of AD. Future efforts should optimize the scale based on current results and clinical experiences, and further study applicability, reliability, and validity. Future application of the scale in daily care may aid caregivers to understand changes, and contribute to timely interventions and adaptation of caregiving.

  6. Matrix models, Argyres-Douglas singularities and double scaling limits

    International Nuclear Information System (INIS)

    Bertoldi, Gaetano

    2003-01-01

    We construct an N = 1 theory with gauge group U(nN) and degree n+1 tree level superpotential whose matrix model spectral curve develops an Argyres-Douglas singularity. The calculation of the tension of domain walls in the U(nN) theory shows that the standard large-N expansion breaks down at the Argyres-Douglas points, with tension that scales as a fractional power of N. Nevertheless, it is possible to define appropriate double scaling limits which are conjectured to yield the tension of 2-branes in the resulting N = 1 four dimensional non-critical string theories as proposed by Ferrari. (author)

  7. Bridging the Gap between the Nanometer-Scale Bottom-Up and Micrometer-Scale Top-Down Approaches for Site-Defined InP/InAs Nanowires.

    Science.gov (United States)

    Zhang, Guoqiang; Rainville, Christophe; Salmon, Adrian; Takiguchi, Masato; Tateno, Kouta; Gotoh, Hideki

    2015-11-24

    This work presents a method that bridges the gap between the nanometer-scale bottom-up and micrometer-scale top-down approaches for site-defined nanostructures, which has long been a significant challenge for applications that require low-cost and high-throughput manufacturing processes. We realized the bridging by controlling the seed indium nanoparticle position through a self-assembly process. Site-defined InP nanowires were then grown from the indium-nanoparticle array in the vapor-liquid-solid mode through a "seed and grow" process. The nanometer-scale indium particles do not always occupy the same locations within the micrometer-scale open window of an InP exposed substrate due to the scale difference. We developed a technique for aligning the nanometer-scale indium particles on the same side of the micrometer-scale window by structuring the surface of a misoriented InP (111)B substrate. Finally, we demonstrated that the developed method can be used to grow a uniform InP/InAs axial-heterostructure nanowire array. The ability to form a heterostructure nanowire array with this method makes it possible to tune the emission wavelength over a wide range by employing the quantum confinement effect and thus expand the application of this technology to optoelectronic devices. Successfully pairing a controllable bottom-up growth technique with a top-down substrate preparation technique greatly improves the potential for the mass-production and widespread adoption of this technology.

  8. Multi-scale Modeling of Arctic Clouds

    Science.gov (United States)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect in the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations to explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  9. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin

    2016-06-01

    In this paper, we develop a two-scale reduced model for simulating the Darcy flow in two-dimensional porous media with conductive fractures. We apply the approach motivated by the embedded fracture model (EFM) to simulate the flow on the coarse scale, and the effect of fractures on each coarse-scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system is resolved on an unstructured grid which represents the fractures accurately, while in the EFM used on the coarse scale, the flux interaction between fractures and matrix is dealt with as a source term, and the matrix-fracture system can be resolved on a structured grid. The Raviart-Thomas mixed finite element methods are used for the solution of the coupled flows in the matrix and the fractures on both fine and coarse scales. Numerical results are presented to demonstrate the efficiency of the proposed model for simulation of flow in fractured porous media.

  10. Representing uncertainty on model analysis plots

    Directory of Open Access Journals (Sweden)

    Trevor I. Smith

    2016-09-01

    Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model. Unfortunately, Bao’s original presentation of the model plot did not include a way to represent uncertainty in these measurements. I present details of a method to add error bars to model plots by expanding the work of Sommer and Lindell. I also provide a template for generating model plots with error bars.
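As an illustration of the kind of quantity being plotted, the sketch below computes one class's model-plot point with plain binomial standard errors. This is a hypothetical simplification: Smith's actual error-bar construction extends Sommer and Lindell's analysis and need not reduce to this formula.

```python
import math

def model_plot_point(correct, model_consistent, n):
    """One class's coordinates on a model analysis plot:
    x = fraction choosing the correct answer,
    y = fraction choosing a model-consistent answer,
    each with a simple binomial standard error sqrt(f(1-f)/n)."""
    x = correct / n
    y = model_consistent / n

    def se(f):
        return math.sqrt(f * (1.0 - f) / n)

    return (x, se(x)), (y, se(y))

# Hypothetical class of 100 students: 60 correct, 85 model-consistent answers.
(x, dx), (y, dy) = model_plot_point(correct=60, model_consistent=85, n=100)
```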

  11. Experimental and in-silico investigation of population heterogeneity in continuous Saccharomyces cerevisiae scale-down fermentation in a novel two-compartment setup

    DEFF Research Database (Denmark)

    Heins, Anna-Lena; Lencastre Fernandes, Rita; Gernaey, Krist

    2015-01-01

    interconnected stirred tank reactors was used in combination with mathematical modeling, to mimic large-scale continuous cultivations. One reactor represents the feeding zone with high glucose concentration and low oxygen, whereas the other one represents the remaining reactor volume. An earlier developed...... population balance model coupled to an unstructured model was used to describe the development of bulk concentrations and cell size distributions at varying dilution rate, glucose feed concentration as well as recirculation times between the two compartments. The concentration profiles of biomass and glucose...

  12. Optimizing Electric Vehicle Coordination Over a Heterogeneous Mesh Network in a Scaled-Down Smart Grid Testbed

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Lévesque, Martin; Maier, Martin

    2015-01-01

    High penetration of renewable energy sources and electric vehicles (EVs) creates power imbalance and congestion in the existing power network, and hence causes significant problems in control and operation. Despite huge efforts invested by electric utilities, governments, and researchers..., smart grid (SG) is still at the developmental stage to address those issues. In this regard, a smart grid testbed (SGT) is desirable to develop, analyze, and demonstrate various novel SG solutions, namely demand response, real-time pricing, and congestion management. In this paper, a novel SGT... is developed in a laboratory by scaling a 250 kVA, 0.4 kV real low-voltage distribution feeder down to 1 kVA, 0.22 kV. Information and communication technology is integrated into the scaled-down network to establish real-time monitoring and control. The novelty of the developed testbed is demonstrated...
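The abstract does not spell out its scaling law, but the standard per-unit approach to such a feeder scale-down (250 kVA, 0.4 kV down to 1 kVA, 0.22 kV) can be sketched as follows: keeping per-unit impedances equal fixes the ratio by which lab impedances must differ from the real feeder's.

```python
def impedance_base(V_ll, S):
    """Impedance base Z_base = V^2 / S (line-to-line voltage in volts,
    apparent power in VA)."""
    return V_ll ** 2 / S

# Real feeder: 250 kVA at 0.4 kV.  Lab testbed: 1 kVA at 0.22 kV.
Z_full = impedance_base(400.0, 250e3)   # ohms at full scale
Z_lab = impedance_base(220.0, 1e3)      # ohms at lab scale
# To preserve per-unit behaviour, each physical impedance in the lab
# network must be the full-scale value multiplied by this factor:
scale = Z_lab / Z_full
```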

  13. Representing dispositions

    Directory of Open Access Journals (Sweden)

    Röhl Johannes

    2011-08-01

    Dispositions and tendencies feature significantly in the biomedical domain and therefore in representations of knowledge of that domain. They are not only important for specific applications like an infectious disease ontology, but also as part of a general strategy for modelling knowledge about molecular interactions. But the task of representing dispositions in some formal ontological systems is fraught with several problems, which are partly due to the fact that Description Logics can only deal well with binary relations. The paper will discuss some of the results of the philosophical debate about dispositions, in order to see whether the formal relations needed to represent dispositions can be broken down to binary relations. Finally, we will discuss problems arising from the possibility of the absence of realizations, of multi-track or multi-trigger dispositions and offer suggestions on how to deal with them.

  14. Regional and urban down scaling of global climate scenarios for health impact assessments

    Energy Technology Data Exchange (ETDEWEB)

    San Jose, R.; Perez, J. L.; Perez, L.; Gonzalez, R. M.; Pecci, J.; Garzon, A.; Palacios, M.

    2015-07-01

    In this contribution we have used global climate RCP IPCC scenarios to produce climate and air pollution maps at regional scale (25 km resolution) and at urban scale (200 m resolution) over Europe and five European cities, in order to investigate the impact on meteorological variables and pollutant concentrations. We have used the well-known mesoscale meteorological model WRF-Chem (NOAA, US). We used 2011 as the control (past) year and two RCP scenarios from the CCSM global climate model, with radiative forcings of 4.5 W/m2 and 8.5 W/m2, for the years 2030, 2050, and 2100. After running the WRF-Chem model, using the boundary conditions provided by the RCP scenarios with the 2011 emissions, we performed a detailed down scaling process using the CALMET diagnostic model to obtain full 200 m spatial resolution maps of five European cities (London, Antwerp, Madrid, Milan, and Helsinki). We show the results and the health impacts for future RCP IPCC climate scenarios in comparison with the 2011 control-year information for climate and health indicators. Finally, we have also investigated the impact of aerosol effects on the mean short-wave radiation. Two simulations with the WRF-Chem model were performed over Europe for 2010: a baseline simulation without any feedback effects, and a second simulation including the direct effects on the solar radiation reaching the surface as well as the indirect aerosol effect, with potential impacts on increasing or decreasing precipitation rates. Aerosol effects produce an increase of incoming radiation over the Atlantic Ocean (up to 70%) because the prescribed aerosol concentrations in WRF-Chem without feedbacks are substantially higher than the aerosol concentrations produced when the feedback effects are activated. The decrease in solar radiation over the Sahara area (10%) is found to occur because the prescribed aerosol concentration in the no-feedback simulation is lower than when the feedback effects are activated. (Author)

  15. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin; Sun, Shuyu

    2016-01-01

    scale, and the effect of fractures on each coarse scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system are resolved

  16. Landscape-scale soil moisture heterogeneity and its influence on surface fluxes at the Jornada LTER site: Evaluating a new model parameterization for subgrid-scale soil moisture variability

    Science.gov (United States)

    Baker, I. T.; Prihodko, L.; Vivoni, E. R.; Denning, A. S.

    2017-12-01

    Arid and semiarid regions represent a large fraction of global land, with attendant importance of surface energy and trace gas flux to global totals. These regions are characterized by strong seasonality, especially in precipitation, that defines the level of ecosystem stress. Individual plants have been observed to respond non-linearly to increasing soil moisture stress, where plant function is generally maintained as soils dry down to a threshold at which rapid closure of stomates occurs. Incorporating this nonlinear mechanism into landscape-scale models can result in unrealistic binary "on-off" behavior that is especially problematic in arid landscapes. Subsequently, models have "relaxed" their simulation of soil moisture stress on evapotranspiration (ET). Unfortunately, these relaxations are not physically based, but are imposed upon model physics as a means to force a more realistic response. Previously, we have introduced a new method to represent soil moisture regulation of ET, whereby the landscape is partitioned into "BINS" of soil moisture wetness, each associated with a fractional area of the landscape or grid cell. A physically- and observationally-based nonlinear soil moisture stress function is applied, but when convolved with the relative area distribution represented by wetness BINS the system has the emergent property of "smoothing" the landscape-scale response without the need for non-physical impositions on model physics. In this research we confront BINS simulations of Bowen ratio, soil moisture variability and trace gas flux with soil moisture and eddy covariance observations taken at the Jornada LTER dryland site in southern New Mexico. We calculate the mean annual wetting cycle and associated variability about the mean state and evaluate model performance against this variability and time series of land surface fluxes from the highly instrumented Tromble Weir watershed. The BINS simulations capture the relatively rapid reaction to wetting
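A minimal sketch of the BINS idea, with an assumed piecewise-linear stress function and threshold (the study's actual stress function is observationally based, and the threshold value here is a placeholder): the sharp per-bin response, area-weighted over the wetness distribution, yields a smooth grid-cell response.

```python
import numpy as np

def stress_factor(w, w_crit=0.3):
    """Plant-level soil moisture stress on ET: unstressed (1.0) above a
    threshold, rapid (here linear) shutdown below it.  Threshold is
    illustrative, not from the study."""
    return np.clip(np.asarray(w, float) / w_crit, 0.0, 1.0)

def landscape_et_factor(bin_wetness, bin_area_frac):
    """Area-weighted stress over wetness BINS: convolving the sharp per-bin
    response with the sub-grid wetness distribution smooths the grid-cell
    ET response without ad hoc changes to the stress function itself."""
    a = np.asarray(bin_area_frac, float)
    return float(np.sum(stress_factor(bin_wetness) * a / a.sum()))

# Five equal-area bins whose wetness spans the threshold:
f = landscape_et_factor([0.1, 0.2, 0.3, 0.4, 0.5], [0.2] * 5)
```

Even though every individual bin responds with an abrupt threshold, the grid-cell factor `f` varies smoothly as the wetness distribution shifts.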

  17. Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.

    Science.gov (United States)

    Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N

    2016-01-01

    A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.

  18. Modeling the effects of perceptual load: saliency, competitive interactions, and top-down biases.

    Directory of Open Access Journals (Sweden)

    Kleanthis eNeokleous

    2016-01-01

    A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the Perceptual Load Theory as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.

  19. Scaling down of a clinical three-dimensional perfusion multicompartment hollow fiber liver bioreactor developed for extracorporeal liver support to an analytical scale device useful for hepatic pharmacological in vitro studies.

    Science.gov (United States)

    Zeilinger, Katrin; Schreiter, Thomas; Darnell, Malin; Söderdahl, Therese; Lübberstedt, Marc; Dillner, Birgitta; Knobeloch, Daniel; Nüssler, Andreas K; Gerlach, Jörg C; Andersson, Tommy B

    2011-05-01

    Within the scope of developing an in vitro culture model for pharmacological research on human liver functions, a three-dimensional multicompartment hollow fiber bioreactor proven to function as a clinical extracorporeal liver support system was scaled down in two steps from 800 mL to 8 mL and 2 mL bioreactors. Primary human liver cells cultured over 14 days in 800, 8, or 2 mL bioreactors exhibited comparable time-course profiles for most of the metabolic parameters in the different bioreactor size variants. Major drug-metabolizing cytochrome P450 activities analyzed in the 2 mL bioreactor were preserved over up to 23 days. Immunohistochemical studies revealed tissue-like structures of parenchymal and nonparenchymal cells in the miniaturized bioreactor, indicating physiological reorganization of the cells. Moreover, the canalicular transporters multidrug-resistance-associated protein 2, multidrug-resistance protein 1 (P-glycoprotein), and breast cancer resistance protein showed a similar distribution pattern to that found in human liver tissue. In conclusion, the down-scaled multicompartment hollow fiber technology allows stable maintenance of primary human liver cells and provides an innovative tool for pharmacological and kinetic studies of hepatic functions with small cell numbers.

  20. Sodium-cutting: a new top-down approach to cut open nanostructures on nonplanar surfaces on a large scale.

    Science.gov (United States)

    Chen, Wei; Deng, Da

    2014-11-11

    We report a new, low-cost and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by successfully cutting open ∼100% of carbon nanospheres into nanobowls on a large scale, starting from Sn@C nanospheres, for the first time.

  1. Validity of thermally-driven small-scale ventilated filling box models

    Science.gov (United States)

    Partridge, Jamie L.; Linden, P. F.

    2013-11-01

    The majority of previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing natural ventilation of a building in a small-scale model is examined here, with comparisons between previous theoretical work and new, heat-based experiments.
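The dynamic-similarity argument can be made concrete by comparing reduced gravities for the two kinds of plume. The expansion coefficient below is a textbook value for water near 20 °C, and the densities are illustrative, not figures from the paper.

```python
def reduced_gravity_saline(delta_rho, rho=1000.0, g=9.81):
    """g' for a salinity-driven plume: g * (density excess) / (reference density)."""
    return g * delta_rho / rho

def reduced_gravity_thermal(delta_T, beta=2.1e-4, g=9.81):
    """g' for a thermal plume in water: g * beta * delta_T, with beta the
    thermal expansion coefficient (~2.1e-4 per K near 20 C)."""
    return g * beta * delta_T

# A 1% salinity-driven density difference (10 kg/m^3)...
gp_salt = reduced_gravity_saline(10.0)
# ...would require this temperature difference to match with heat alone:
delta_T_equiv = gp_salt / (9.81 * 2.1e-4)
```

The large equivalent temperature difference (tens of kelvin) is one reason saline plumes dominate small-scale ventilation modelling, and why the efficacy of heat-based experiments deserves the scrutiny this record describes.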

  2. Modeling of flow conditions in down draft gasifiers using thin film models

    DEFF Research Database (Denmark)

    Jensen, Torben Kvist; Gøbel, Benny; Henriksen, Ulrik Birk

    2003-01-01

    In order to examine how an inhomogeneous char bed affects the gas flow through the bed, a dynamic model has been developed to describe the flow distribution in a down draft gasifier. The gas flow distribution through the bed was determined using a thin film model approach. The temperatures...
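A minimal sketch of the flow-distribution idea (not the paper's thin-film formulation): with a common pressure drop across parallel paths through the bed and linear (Darcy-like) resistances, flow partitions in proportion to conductance, so low-resistance channels in an inhomogeneous bed carry a disproportionate share of the gas.

```python
import numpy as np

def flow_split(resistances, Q_total=1.0):
    """Distribute a fixed total gas flow across parallel paths sharing one
    pressure drop, with linear resistance per path: Q_i proportional to 1/R_i."""
    g = 1.0 / np.asarray(resistances, float)   # conductances
    return Q_total * g / g.sum()

# Two uniform zones plus one channelled (quarter-resistance) zone:
Q = flow_split([1.0, 1.0, 0.25])
# The channelled zone carries 2/3 of the total flow.
```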

  3. Harnessing Big Data to Represent 30-meter Spatial Heterogeneity in Earth System Models

    Science.gov (United States)

    Chaney, N.; Shevliakova, E.; Malyshev, S.; Van Huijgevoort, M.; Milly, C.; Sulman, B. N.

    2016-12-01

    Terrestrial land surface processes play a critical role in the Earth system; they have a profound impact on the global climate, food and energy production, freshwater resources, and biodiversity. One of the most fascinating yet challenging aspects of characterizing terrestrial ecosystems is their field-scale (˜30 m) spatial heterogeneity. It has been observed repeatedly that the water, energy, and biogeochemical cycles at multiple temporal and spatial scales have deep ties to an ecosystem's spatial structure. Current Earth system models largely disregard this important relationship leading to an inadequate representation of ecosystem dynamics. In this presentation, we will show how existing global environmental datasets can be harnessed to explicitly represent field-scale spatial heterogeneity in Earth system models. For each macroscale grid cell, these environmental data are clustered according to their field-scale soil and topographic attributes to define unique sub-grid tiles. The state-of-the-art Geophysical Fluid Dynamics Laboratory (GFDL) land model is then used to simulate these tiles and their spatial interactions via the exchange of water, energy, and nutrients along explicit topographic gradients. Using historical simulations over the contiguous United States, we will show how a robust representation of field-scale spatial heterogeneity impacts modeled ecosystem dynamics including the water, energy, and biogeochemical cycles as well as vegetation composition and distribution.
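The tiling step described here amounts to clustering fine-scale pixels by their attributes. Below is a minimal k-means sketch of that step; the GFDL workflow's actual clustering algorithm and attribute set may differ, and the first-k-points initialization is adequate only for this illustration.

```python
import numpy as np

def cluster_tiles(features, k, iters=20):
    """Group fine-scale (~30 m) pixels of one macroscale grid cell into k
    sub-grid tiles by k-means on their soil/topographic attributes."""
    X = np.asarray(features, float)
    centers = X[:k].copy()                      # seed with the first k pixels
    for _ in range(iters):
        # Squared distance from every pixel to every center:
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):                      # move centers to cluster means
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Pixels described by hypothetical (slope, soil depth) pairs,
# forming two clear terrain types:
labels = cluster_tiles([[0.0, 0.1], [5.0, 4.9], [0.1, 0.0], [4.9, 5.0]], k=2)
```

Each resulting label is a tile: the land model then simulates one column per tile instead of one per pixel, preserving field-scale heterogeneity at a fraction of the cost.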

  4. Low-energy consequences of superstring-inspired models with intermediate-mass scales

    International Nuclear Information System (INIS)

    Gabbiani, F.

    1987-01-01

    The phenomenological consequences of implementing intermediate-mass scales in E6 superstring-inspired models are discussed. Starting from a suitable Calabi-Yau compactification with b_{1,1} > 1, one obtains, after Hosotani breaking, the rank r=5 gauge group SU(3)_C x SU(2)_L x U(1)_Y x U(1)_E, which is broken at an intermediate-mass scale down to the standard-model group. The analysis of both the intermediate and the electroweak breaking is performed in the two cases Λ_c = M_X and Λ_c ≠ M_X, where Λ_c is the scale at which the hidden-sector gauginos condense. The minimization of the low-energy effective potential and the renormalization group analysis are carried out quantitatively, yielding a viable set of mass spectra and confirming the reliability of the intermediate-breaking scheme

  5. Use of genome-scale microbial models for metabolic engineering

    DEFF Research Database (Denmark)

    Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens

    2004-01-01

    Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role for phenotypic analysis, but can also be used for the design of optimal metabolic network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction.

  6. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  7. Site-scale groundwater flow modelling of Aberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States)]; Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)]

    1998-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the
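The Monte Carlo propagation described above can be illustrated with a drastically simplified sketch: sample a hydraulic conductivity per realisation, convert it to an advective velocity via Darcy's law, and summarise the travel-time statistics. The path length, gradient, and lognormal parameters are invented; the real HYDRASTAR computation uses full 3-D stochastic continuum fields:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical Monte Carlo sketch of the workflow: one conductivity sample
# per realisation, a simplified advective travel time, and its statistics.
n_real = 1000
path_length = 500.0          # m, canister to discharge point (assumed)
gradient = 0.003             # regional hydraulic gradient (assumed)
porosity = 1e-4              # flow porosity (assumed)
# ln K ~ N(mu, sigma^2); values are illustrative, not site data
K = rng.lognormal(mean=np.log(1e-8), sigma=1.5, size=n_real)   # m/s
darcy = K * gradient                     # Darcy velocity, m/s
v_adv = darcy / porosity                 # advective velocity, m/s
t_years = path_length / v_adv / (3600 * 24 * 365)
print(f"median travel time: {np.median(t_years):.1f} years")
```

The spread of `t_years` across realisations is the kind of variability the variant cases in the study probe.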

  8. Site-scale groundwater flow modelling of Aberg

    International Nuclear Information System (INIS)

    Walker, D.; Gylling, B.

    1998-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the

  9. Down-scaling wind energy resource from mesoscale to local scale by nesting and data assimilation with a CFD model

    International Nuclear Information System (INIS)

    Duraisamy Jothiprakasam, Venkatesh

    2014-01-01

    The development of wind energy generation requires precise and well-established methods for wind resource assessment, which is the initial step in every wind farm project. During the last two decades linear flow models were widely used in the wind industry for wind resource assessment and micro-siting. But the linear models' inaccuracies in predicting wind speeds in very complex terrain are well known and led to the use of CFD, which is capable of modeling the complex flow in detail around specific geographic features. Mesoscale models (NWP) are able to predict the wind regime at resolutions of several kilometers, but are not well suited to resolve the wind speed and turbulence induced by topographic features on the scale of a few hundred meters. CFD has proven successful in capturing flow details at smaller scales, but needs an accurate specification of the inlet conditions. Coupling NWP and CFD models is thus a better modeling approach for wind energy applications. A one-year field measurement campaign carried out in complex terrain in southern France during 2007-2008 provides a well-documented data set for both input and validation data. The proposed new methodology aims to address two problems: the high spatial variation of the topography on the domain lateral boundaries, and the prediction errors of the mesoscale model. It is applied in this work using the open source CFD code Code-Saturne, coupled with the mesoscale forecast model of Meteo-France (ALADIN). The improvement is obtained by combining the mesoscale data as inlet condition and field measurement data assimilation into the CFD model. The Newtonian relaxation (nudging) data assimilation technique is used to incorporate the measurement data into the CFD simulations. The methodology to reconstruct long term averages uses a clustering process to group similar meteorological conditions and to reduce the number of CFD simulations needed to reproduce 1 year of atmospheric flow over the site. The assimilation
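The Newtonian relaxation (nudging) technique mentioned above has a simple core: the model state is relaxed toward observations with a chosen time scale. A minimal one-variable sketch, with all values invented:

```python
# Minimal sketch of Newtonian relaxation (nudging): at each time step the
# model state is pulled toward an observation with relaxation time scale tau.
def nudge(u_model, u_obs, dt, tau):
    """One nudging update: du/dt gains the term (u_obs - u_model) / tau."""
    return u_model + dt * (u_obs - u_model) / tau

u = 2.0                 # modelled wind speed, m/s (illustrative)
u_obs = 5.0             # measured wind speed, m/s (illustrative)
dt, tau = 10.0, 600.0   # time step and relaxation time scale, s
for _ in range(360):    # one hour of integration
    u = nudge(u, u_obs, dt, tau)
print(round(u, 2))      # relaxes toward the observed value
```

A small `tau` forces the model tightly onto the observations; a large `tau` lets the model dynamics dominate, which is the tuning trade-off in practice.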

  10. Multi-scale modeling of inter-granular fracture in UO2

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Tonks, Michael R. [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Biner, S. Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States)]

    2015-03-01

    A hierarchical multi-scale approach is pursued in this work to investigate the influence of porosity, pore and grain size on the intergranular brittle fracture in UO2. In this approach, molecular dynamics simulations are performed to obtain the fracture properties for different grain boundary types. A phase-field model is then utilized to perform intergranular fracture simulations of representative microstructures with different porosities, pore and grain sizes. In these simulations the grain boundary fracture properties obtained from molecular dynamics simulations are used. The responses from the phase-field fracture simulations are then fitted with a stress-based brittle fracture model usable at the engineering scale. This approach encapsulates three different length and time scales, and allows the development of microstructurally informed engineering scale model from properties evaluated at the atomistic scale.
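The final step above, fitting a stress-based brittle fracture model to the phase-field results, can be illustrated with a generic weakest-link (Weibull) form; the functional form and parameters below are stand-ins, not the report's fitted model:

```python
import numpy as np

# Generic stress-based brittle fracture criterion: failure probability grows
# with stress following a Weibull weakest-link law. sigma0 and m would be
# fitted to the lower-scale (MD -> phase-field) results; values here are
# invented for illustration.
def failure_probability(stress_MPa, sigma0=120.0, m=8.0):
    """Weibull form: P_f = 1 - exp(-(sigma / sigma0)^m)."""
    return 1.0 - np.exp(-(np.asarray(stress_MPa) / sigma0) ** m)

stresses = np.array([60.0, 100.0, 120.0, 150.0])
pf = failure_probability(stresses)
print(pf.round(3))
```

At the characteristic stress `sigma0` the failure probability is 1 - 1/e, and the modulus `m` controls how sharply it rises, which is what an engineering-scale fit would calibrate against the microstructure simulations.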

  11. Applying Hillslope Hydrology to Bridge between Ecosystem and Grid-Scale Processes within an Earth System Model

    Science.gov (United States)

    Subin, Z. M.; Sulman, B. N.; Malyshev, S.; Shevliakova, E.

    2013-12-01

    Soil moisture is a crucial control on surface energy fluxes, vegetation properties, and soil carbon cycling. Its interactions with ecosystem processes are highly nonlinear across a large range, as both drought stress and anoxia can impede vegetation and microbial growth. Earth System Models (ESMs) generally only represent an average soil-moisture state in grid cells at scales of 50-200 km, and as a result are not able to adequately represent the effects of subgrid heterogeneity in soil moisture, especially in regions with large wetland areas. We addressed this deficiency by developing the first ESM-coupled subgrid hillslope-hydrological model, TiHy (Tiled-hillslope Hydrology), embedded within the Geophysical Fluid Dynamics Laboratory (GFDL) land model. In each grid cell, one or more representative hillslope geometries are discretized into land model tiles along an upland-to-lowland gradient. These geometries represent ~1 km hillslope-scale hydrological features and allow for flexible representation of hillslope profile and plan shapes, in addition to variation of subsurface properties among or within hillslopes. Each tile (which may represent ~100 m along the hillslope) has its own surface fluxes, vegetation state, and vertically-resolved state variables for soil physics and biogeochemistry. Resolution of water state in deep layers (~200 m) down to bedrock allows for physical integration of groundwater transport with unsaturated overlying dynamics. Multiple tiles can also co-exist at the same vertical position along the hillslope, allowing the simulation of ecosystem heterogeneity due to disturbance. The hydrological model is coupled to the vertically-resolved Carbon, Organisms, Respiration, and Protection in the Soil Environment (CORPSE) model, which captures non-linearity resulting from interactions between vertically-heterogeneous soil carbon and water profiles. We present comparisons of simulated water table depth to observations. We examine sensitivities to

  12. Sterol synthesis and cell size distribution under oscillatory growth conditions in Saccharomyces cerevisiae scale-down cultivations.

    Science.gov (United States)

    Marbà-Ardébol, Anna-Maria; Bockisch, Anika; Neubauer, Peter; Junne, Stefan

    2018-02-01

    Physiological responses of yeast to oscillatory environments as they appear in the liquid phase in large-scale bioreactors have been the subject of past studies. So far, however, the impact on the sterol content and intracellular regulation remains to be investigated. Since oxygen is a cofactor in several reaction steps within sterol metabolism, changes in oxygen availability, as occur in production-scale aerated bioreactors, might have an influence on the regulation and incorporation of free sterols into the cell lipid layer. Therefore, sterol and fatty acid synthesis in two- and three-compartment scale-down Saccharomyces cerevisiae cultivations were studied and compared with typical values obtained in homogeneous lab-scale cultivations. While cells were exposed to oscillating substrate and oxygen availability in the scale-down cultivations, growth was reduced and accumulation of carboxylic acids was increased. Sterol synthesis to ergosterol was elevated at the same time. The higher fluxes led to increased concentrations of esterified sterols. The cells thus seem to utilize the increased availability of precursors to fill their sterol reservoirs; however, this seems to be limited in the three-compartment reactor cultivation due to a prolonged exposure to oxygen limitation. In addition, a larger heterogeneity within the single-cell size distribution was observed under oscillatory growth conditions with three-dimensional holographic microscopy. Hence the impact of gradients is also observable at the morphological level. The consideration of such a single-cell-based analysis provides useful information about the homogeneity of responses among the population. Copyright © 2017 John Wiley & Sons, Ltd.

  13. A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation

    Science.gov (United States)

    Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.

    2016-12-01

    Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
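The core of a partial equilibrium model like the MME is a market-clearing condition. A one-commodity toy version, with invented linear supply and demand curves, finds the price at which excess demand vanishes:

```python
# Toy sketch of the partial-equilibrium idea: find the price at which supply
# equals demand for a single food commodity. Curves are invented for
# illustration; the MME aggregates many such players' optimization problems.
def supply(p):
    return 2.0 * p            # tons offered at price p

def demand(p):
    return 90.0 - 1.0 * p     # tons requested at price p

# Bisection on excess demand: equilibrium where demand(p) - supply(p) = 0
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if demand(mid) - supply(mid) > 0:
        lo = mid              # excess demand: price must rise
    else:
        hi = mid              # excess supply: price must fall
p_eq = 0.5 * (lo + hi)
print(round(p_eq, 2), round(supply(p_eq), 1))
```

A full model replaces the two curves with the optimality conditions of many producers, consumers, and shippers, and clears several linked markets at once.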

  14. Developing an Informant Questionnaire for Cognitive Abilities in Down Syndrome: The Cognitive Scale for Down Syndrome (CS-DS).

    Directory of Open Access Journals (Sweden)

    Carla M Startin

    Full Text Available Down syndrome (DS) is the most common genetic cause of intellectual disability (ID). Abilities relating to executive function, memory and language are particularly affected in DS, although there is a large variability across individuals. People with DS also show an increased risk of developing dementia. While assessment batteries have been developed for adults with DS to assess cognitive abilities, these batteries may not be suitable for those with more severe IDs, dementia, or visual / hearing difficulties. Here we report the development of an informant rated questionnaire, the Cognitive Scale for Down Syndrome (CS-DS), which focuses on everyday abilities relating to executive function, memory and language, and is suitable for assessing these abilities in all adults with DS regardless of cognitive ability. Complete questionnaires were collected about 128 individuals with DS. After final question selection we found high internal consistency scores across the total questionnaire and within the executive function, memory and language domains. CS-DS scores showed a wide range, with minimal floor and ceiling effects. We found high interrater (n = 55) and test-retest (n = 36) intraclass correlations. CS-DS scores were significantly lower in those aged 41+ with significant cognitive decline compared to those without decline. Across all adults without cognitive decline, CS-DS scores correlated significantly to measures of general abilities. Exploratory factor analysis suggested five factors within the scale, relating to memory, self-regulation / inhibition, self-direction / initiation, communication, and focussing attention. The CS-DS therefore shows good interrater and test-retest reliability, and appears to be a valid and suitable informant rating tool for assessing everyday cognitive abilities in a wide range of individuals with DS. Such a questionnaire may be a useful outcome measure for intervention studies to assess improvements to cognition, in
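The internal consistency reported for the CS-DS is conventionally quantified with Cronbach's alpha. A generic sketch on simulated informant item scores (not the published CS-DS data; all numbers invented):

```python
import numpy as np

# Cronbach's alpha for a questionnaire: alpha = k/(k-1) * (1 - sum of item
# variances / variance of the total score), computed on a respondents x items
# score matrix.
def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(0, 1, 128)                            # latent trait
items = ability[:, None] + rng.normal(0, 0.5, (128, 10))   # 10 correlated items
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

Because all ten simulated items share the same latent trait, alpha comes out high; uncorrelated items would drive it toward zero.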

  15. Hydrodynamics of a natural circulation loop in a scaled-down steam drum-riser-downcomer assembly

    Energy Technology Data Exchange (ETDEWEB)

    Basu, Dipankar N., E-mail: dnbasu@iitg.ernet.in; Patil, N.D.; Bhattacharyya, Souvik; Das, P.K.

    2013-12-15

    Highlights: • Experimental investigation of loop hydrodynamics in a scaled-down simulated AHWR. • Identification of flow regimes and transition analyzing conductance probe signal. • Downcomer flow maximizes with fully developed churn flow and lowest for bubbly flow. • Highest downcomer flow rate is achieved with identical air supply to both risers. • Interaction of varying flow patterns reduces downcomer flow for unequal operation. - Abstract: Complex interactions of different phases, widely varying frictional characteristics of different flow regimes and the involvement of multiple scales of transport make the modelling of a two-phase natural circulation loop (NCL) exceedingly difficult. The knowledge base about the dependency of downcomer flow rate on riser-side flow patterns, particularly for systems with multiple parallel channels, is barely developed, necessitating detailed experimentation. The present study focuses on developing a scaled-down test facility relevant to the Advanced Heavy Water Reactor conceived in the atomic energy programme of India to study the hydrodynamics of the NCL using air and water as test fluids. An experimental facility with two risers, one downcomer and a phase-separating drum was fabricated. Conductivity probes and photographic techniques are used to characterize the two-phase flow. Normalized voltage signals obtained from the amplified output of conductivity probes and their subsequent analysis through probability distribution functions reveal the presence of different two-phase flow patterns in the riser tubes. With the increase in air supply per riser, the void fraction in the two-phase mixture increases and the flow patterns gradually transform from bubbly to fully developed annular through slug, churn and dispersed annular flow regimes. Downcomer flow rate increases rapidly with air supply until a maximum and then starts decreasing due to enhanced frictional forces. However, the maximum value of downcomer water
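The probability-distribution analysis of the normalized probe signals can be sketched as a simple histogram-based classifier; the signal models and thresholds below are invented for illustration, not the study's actual criteria:

```python
import numpy as np

# Illustrative regime identification from a normalised conductance signal
# (0 = liquid at the probe, 1 = gas): a single peak near 0 suggests bubbly
# flow, a single peak near 1 suggests annular flow, and two peaks suggest an
# intermittent (slug/churn) regime. Thresholds are invented.
def classify_regime(signal, bins=20):
    hist, _ = np.histogram(signal, bins=bins, range=(0, 1), density=True)
    low = hist[: bins // 3].sum()      # probability mass near liquid
    high = hist[-bins // 3:].sum()     # probability mass near gas
    if low > 3 * high:
        return "bubbly"
    if high > 3 * low:
        return "annular"
    return "slug/churn"

rng = np.random.default_rng(0)
bubbly = np.clip(rng.normal(0.10, 0.05, 5000), 0, 1)
slug = np.clip(np.concatenate([rng.normal(0.10, 0.05, 2500),
                               rng.normal(0.90, 0.05, 2500)]), 0, 1)
print(classify_regime(bubbly), classify_regime(slug))
```

In practice the shape of the full probability distribution function, not just two tail sums, is what distinguishes the transitional regimes.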

  16. Modeling Lactococcus lactis using a genome-scale flux model

    Directory of Open Access Journals (Sweden)

    Nielsen Jens

    2005-06-01

    Full Text Available Abstract Background Genome-scale flux models are useful tools to represent and analyze microbial metabolism. In this work we reconstructed the metabolic network of the lactic acid bacterium Lactococcus lactis and developed a genome-scale flux model able to simulate and analyze network capabilities and whole-cell function under aerobic and anaerobic continuous cultures. Flux balance analysis (FBA) and minimization of metabolic adjustment (MOMA) were used as modeling frameworks. Results The metabolic network was reconstructed using the annotated genome sequence from L. lactis ssp. lactis IL1403 together with physiological and biochemical information. The established network comprised a total of 621 reactions and 509 metabolites, representing the overall metabolism of L. lactis. Experimental data reported in the literature were used to fit the model to phenotypic observations. Regulatory constraints had to be included to simulate certain metabolic features, such as the shift from homo to heterolactic fermentation. A minimal medium for in silico growth was identified, indicating the requirement of four amino acids in addition to a sugar. Remarkably, de novo biosynthesis of four other amino acids was observed even when all amino acids were supplied, which is in good agreement with experimental observations. Additionally, enhanced metabolic engineering strategies for improved diacetyl producing strains were designed. Conclusion The L. lactis metabolic network can now be used for a better understanding of lactococcal metabolic capabilities and potential, for the design of enhanced metabolic engineering strategies and for integration with other types of 'omic' data, to assist in finding new information on cellular organization and function.
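Flux balance analysis as used above reduces to a linear program: maximise a biomass objective subject to steady-state mass balances S v = 0 and flux bounds. A three-reaction toy network (not the 621-reaction L. lactis model) illustrates the setup:

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA problem. Metabolites: Glc_in, ATP. Reactions: v0 glucose uptake,
# v1 glycolysis (2 ATP per glucose), v2 biomass (consumes 1 ATP).
# All stoichiometry and bounds are invented for illustration.
#        v0   v1   v2
S = np.array([
    [1.0, -1.0,  0.0],   # Glc_in: produced by uptake, consumed by glycolysis
    [0.0,  2.0, -1.0],   # ATP: 2 per glucose, consumed by the biomass reaction
])
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 mmol/gDW/h
c = np.array([0.0, 0.0, -1.0])            # linprog minimises, so negate biomass
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
v = res.x
print(f"optimal biomass flux: {v[2]:.1f}")
```

The steady-state constraint forces v0 = v1 and v2 = 2 v1, so the uptake cap of 10 yields a biomass flux of 20; genome-scale models solve the same kind of LP with hundreds of reactions.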

  17. Site-scale groundwater flow modelling of Beberg

    International Nuclear Information System (INIS)

    Gylling, B.; Walker, D.; Hartley, L.

    1999-08-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant head boundary conditions from a modified version of the deterministic regional scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10^-4, and a flow-wetted surface of a_r = 1.0 m^2/(m^3 rock) suggest the following statistics for the Base Case: The median travel time is 56 years. The median canister flux is 1.2 x 10^-3 m/year. The median F-ratio is 5.6 x 10^5 year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. Variability within realisations indicates that the change in hydraulic gradient

  18. Top-Down Enterprise Application Integration with Reference Models

    Directory of Open Access Journals (Sweden)

    Willem-Jan van den Heuvel

    2000-11-01

    Full Text Available For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference model's specification. The actual linking of reference and legacy models is done with a methodology for connecting (new) business objects with (old) legacy systems.

  19. TeV scale leptoquarks as a signature of standard-like superstring models

    International Nuclear Information System (INIS)

    Halyo, E.

    1993-12-01

    We show that there can be TeV scale scalar and fermionic leptoquarks with very weak Yukawa couplings in a generic standard-like superstring model. Leptoquark-(down-like) quark mixing, though present, is not large enough to violate the unitarity bounds on the CKM matrix. The constraints on the leptoquark masses and couplings from flavor changing neutral currents are easily satisfied, whereas those from baryon number violation may cause problems. The leptoquarks of the model are compared to the ones in the E6 Calabi-Yau models. (author) 14 refs

  20. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5<u'_i u'_i>, where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes

  1. One-fifth-scale and full-scale fuel element rocking tests

    International Nuclear Information System (INIS)

    Nau, P.V.; Olsen, B.E.

    1978-06-01

    Using 1/5-scale and 1/1-scale (prototype H451) fuel elements, one, two, or three stacked elements on a clamped base element were rocked from an initial release position. Relative displacement, rock-down loads, and dowel pin shear forces were measured. A scaled comparison between 1/5-scale and 1/1-scale results was made to evaluate the model scaling laws, and an error analysis was performed to assess the accuracy and usefulness of the test data
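The scaled comparison between 1/5-scale and full-scale results rests on similitude relations. Under elementary assumptions (same material, geometric similarity, stress similarity), forces scale as the length ratio squared and stiffness-driven frequencies as its inverse; the measurement values below are invented and this is not the report's error analysis:

```python
# Elementary similitude sketch for a geometrically scaled model of the same
# material: stress similarity gives force ~ stress * area ~ lambda^2, and
# structural frequencies scale as 1/length. All numbers are illustrative.
lam = 1 / 5   # model-to-prototype length ratio

def to_prototype(model_force_N, model_freq_Hz, lam):
    """Scale model-test measurements up to prototype (1/1) estimates."""
    proto_force = model_force_N / lam**2   # force scales with lambda^2
    proto_freq = model_freq_Hz * lam       # frequency scales with 1/length
    return proto_force, proto_freq

force, freq = to_prototype(120.0, 50.0, lam)
print(round(force), round(freq))   # 3000 N, 10 Hz
```

Real replica-model tests must also check which effects (gravity, damping, contact at the dowel pins) do not follow these simple laws, which is exactly what a model-versus-prototype comparison exposes.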

  2. A model for allometric scaling of mammalian metabolism with ambient heat loss

    KAUST Repository

    Kwak, Ho Sang

    2016-02-02

    Background Allometric scaling, which describes how a biological trait or process depends on body size, is a long-standing subject in biological science. However, no previous study has considered heat loss to the ambient environment together with an insulation layer representing mammalian skin and fur in deriving the scaling law of metabolism. Methods A simple heat transfer model is proposed to analyze the allometry of mammalian metabolism. The present model extends existing studies by incorporating various external heat transfer parameters and additional insulation layers. The model equations were solved numerically and by an analytic heat balance approach. Results A general observation is that the present heat transfer model predicted the 2/3 surface scaling law, which is primarily attributed to the dependence of the surface area on the body mass. External heat transfer effects introduced deviations in the scaling law, mainly due to natural convection heat transfer, which becomes more prominent at smaller mass. These deviations resulted in a slight modification of the scaling exponent to a value smaller than 2/3. Conclusion The finding that additional radiative heat loss and the consideration of an outer insulation fur layer attenuate these deviations and render the scaling law closer to 2/3 provides in silico evidence for a functional impact of heat transfer mode on the allometric scaling law in mammalian metabolism.
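The surface heat-balance argument behind the 2/3 law can be reproduced numerically: if metabolic heat production must balance surface loss h A ΔT and area scales as M^(2/3) for geometrically similar bodies, the fitted exponent is exactly 2/3. The coefficients below are illustrative, not fitted mammalian data or the paper's model:

```python
import numpy as np

# Surface scaling sketch: metabolic rate B balances surface heat loss,
# B = h * A * dT, with A ~ M^(2/3) for geometrically similar (spherical) bodies.
h, dT = 10.0, 10.0     # W/m^2/K heat transfer coeff., core-ambient difference K
rho = 1000.0           # kg/m^3, assumed body density
masses = np.logspace(-2, 3, 50)                          # 10 g to 1000 kg
radius = (3 * masses / (4 * np.pi * rho)) ** (1 / 3)     # sphere of mass M
area = 4 * np.pi * radius**2
B = h * area * dT                                        # required rate, W
# Fit the exponent b in B ~ M^b on log-log axes
b = np.polyfit(np.log(masses), np.log(B), 1)[0]
print(round(b, 3))   # 0.667
```

The paper's point is precisely that extra physics (natural convection, radiation, a fur layer) perturbs this clean exponent, which the sketch omits.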

  3. Representing Uncertainty on Model Analysis Plots

    Science.gov (United States)

    Smith, Trevor I.

    2016-01-01

    Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model.…

  4. Using a down-scaled bioclimate envelope model to determine long-term temporal connectivity of Garry oak (Quercus garryana) habitat in western North America: implications for protected area planning.

    Science.gov (United States)

    Pellatt, Marlow G; Goring, Simon J; Bodtker, Karin M; Cannon, Alex J

    2012-04-01

    Under the Canadian Species at Risk Act (SARA), Garry oak (Quercus garryana) ecosystems are listed as "at-risk" and act as an umbrella for over one hundred species that are endangered to some degree. Understanding Garry oak responses to future climate scenarios at scales relevant to protected area managers is essential to effectively manage existing protected area networks and to guide the selection of temporally connected migration corridors, additional protected areas, and to maintain Garry oak populations over the next century. We present Garry oak distribution scenarios using two random forest models calibrated with down-scaled bioclimatic data for British Columbia, Washington, and Oregon based on 1961-1990 climate normals. The suitability models are calibrated using either both precipitation and temperature variables or using only temperature variables. We compare suitability predictions from four General Circulation Models (GCMs) and present CGCM2 model results under two emissions scenarios. For each GCM and emissions scenario we apply the two Garry oak suitability models and use the suitability models to determine the extent and temporal connectivity of climatically suitable Garry oak habitat within protected areas from 2010 to 2099. The suitability models indicate that while 164 km² of the total protected area network in the region (47,990 km²) contains recorded Garry oak presence, 1635 and 1680 km² of climatically suitable Garry oak habitat is currently under some form of protection. Of this suitable protected area, only between 6.6 and 7.3% will be "temporally connected" between 2010 and 2099 based on the CGCM2 model. These results highlight the need for public and private protected area organizations to work cooperatively in the development of corridors to maintain temporal connectivity in climatically suitable areas for the future of Garry oak ecosystems.

  5. Representing Reservoir Stratification in Land Surface and Earth System Models

    Science.gov (United States)

    Yigzaw, W.; Li, H. Y.; Leung, L. R.; Hejazi, M. I.; Voisin, N.; Payn, R. A.; Demissie, Y.

    2017-12-01

    A one-dimensional reservoir stratification model has been developed as part of the Model for Scale Adaptive River Transport (MOSART), which is the river transport model used in the Accelerated Climate Modeling for Energy (ACME) and Community Earth System Model (CESM). Reservoirs play an important role in modulating the dynamic water, energy and biogeochemical cycles in the riverine system through nutrient sequestration and stratification. However, most earth system models include lake models that assume a simplified geometry featuring a constant depth and a constant surface area. As reservoir geometry has important effects on thermal stratification, we developed a new algorithm for deriving generic, stratified area-elevation-storage relationships that are applicable at regional and global scales using data from the Global Reservoir and Dam database (GRanD). This new reservoir geometry dataset is then used to support the development of a reservoir stratification module within MOSART. The mixing of layers (energy and mass) in the reservoir is driven by eddy diffusion, vertical advection, and reservoir inflow and outflow. Upstream inflow into a reservoir is treated as an additional source/sink of energy, while downstream outflow represents a sink. Hourly atmospheric forcing from the North American Land Data Assimilation System (NLDAS) Phase II and simulated daily runoff by the ACME land component are used as inputs for the model over the contiguous United States for simulations from 2001 to 2010. The model is validated using selected observed temperature profile data in a number of reservoirs that are subject to various levels of regulation. The reservoir stratification module completes the representation of riverine mass and heat transfer in earth system models, which is a major step towards quantitative understanding of human influences on the terrestrial hydrological, ecological and biogeochemical cycles.
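An area-elevation-storage relationship of the kind described above follows from integrating the area-elevation curve, S(z) = ∫A(z)dz. A minimal sketch, using an assumed power-law area profile rather than GRanD data:

```python
import numpy as np

# Hedged sketch: given a reservoir's area-elevation curve A(z),
# cumulative storage is the integral of A over elevation (trapezoidal
# rule here). The power-law profile below is an illustrative assumption.
z = np.linspace(0.0, 50.0, 101)      # elevation above reservoir bottom, m
A = 2.0e6 * (z / 50.0) ** 1.5        # surface area at each elevation, m^2

# cumulative storage (m^3) at each elevation level
S = np.concatenate(([0.0], np.cumsum(0.5 * (A[1:] + A[:-1]) * np.diff(z))))

print(f"full-pool storage: {S[-1]:.3e} m^3")
```

The resulting S(z) table is exactly the kind of layer-by-layer geometry a stratification module needs to convert fluxes between layers into level changes.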

  6. The social perception of people with Down syndrome: the EPSD-1 scale

    Directory of Open Access Journals (Sweden)

    Jesús Molina Saorín

    2012-12-01

    Full Text Available In this research, financed by several institutions, we designed an instrument for measuring the perception that university students of Physical Education hold of people with Down syndrome. We named this instrument the scale of Social Perception of people with Down syndrome (EPSD-1); its design captures important psychosocial variables while also documenting its properties for application in other contexts. A factor analysis yielded ten major factors covering topics such as the social exclusion of people with Down syndrome, their autonomy and independence, their affective-sexual relationships, their social and educational acceptance, their integration, family attitudes, teacher training, and social protectionism towards these people. The initial sample comprised 1,796 participants, and the results indicate that the instrument is reliable and valid for application. The scale is highly useful in the Social Sciences, revealing a latent relationship with the initial training that universities offer their students. Using a quantitative methodology, the results show that its psychometric properties are highly satisfactory, which is why we suggest new longitudinal and cross-sectional studies applying it to different populations, in order to deepen understanding of the social perception of people with Down syndrome and to provide data on recent trends and developments.

  7. Top-Down Influences on Local Networks: Basic Theory with Experimental Implications

    Directory of Open Access Journals (Sweden)

    Ramesh Srinivasan

    2013-04-01

    Full Text Available The response of a population of sensory neurons to an external stimulus depends not only on the receptive field properties of the neurons, but also on the level of arousal and the attention or goal-oriented cognitive biases that guide information processing. These top-down effects on the sensory neurons bias the output of the neurons and affect behavioral outcomes such as stimulus detection, discrimination, and response time. In any physiological study, neural dynamics are observed in a specific brain state; the background state partly determines neuronal excitability. Experimental studies in humans and animal models have also demonstrated that slow oscillations (typically in the alpha or theta bands) modulate the fast oscillations (gamma band) associated with local networks of neurons. Cross-frequency interaction is of interest as a mechanism for top-down or bottom-up interactions between systems at different spatial scales. We develop a generic model of top-down influences on local networks appropriate for comparison with EEG. EEG provides excellent temporal resolution to investigate neuronal oscillations but is space-averaged on the cm scale. Thus, appropriate EEG models are developed in terms of population synaptic activity. We used the Wilson-Cowan population model to investigate fast (gamma band) oscillations generated by a local network of excitatory and inhibitory neurons. We modified the Wilson-Cowan equations to make them more physiologically realistic by explicitly incorporating background state variables into the model. We found that the population response is strongly influenced by the background state. We apply the model to reproduce the modulation of gamma rhythms by theta rhythms as has been observed in animal models and in human ECoG and EEG studies. The concept of a dynamic background state presented here using the Wilson-Cowan model can be readily applied to incorporate top-down modulation in more detailed models of specific sensory
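The Wilson-Cowan population model referred to above can be sketched in a few lines. This is a minimal illustration of how a slowly varying (theta-band) background drive modulates fast excitatory-inhibitory dynamics; the coupling constants, time constant, and drive are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def S(x):
    return 1.0 / (1.0 + np.exp(-x))          # population response function

# Wilson-Cowan coupling constants (illustrative assumptions)
c_ee, c_ei, c_ie, c_ii = 16.0, 12.0, 15.0, 3.0
tau, dt, T = 0.010, 0.0001, 1.0              # 10 ms time constant, 1 s run
t = np.arange(0.0, T, dt)
P = 1.25 + 0.5 * np.sin(2 * np.pi * 6.0 * t) # 6 Hz "background state" drive

E = np.zeros_like(t)                         # excitatory population activity
I = np.zeros_like(t)                         # inhibitory population activity
for k in range(len(t) - 1):                  # forward Euler integration
    dE = (-E[k] + S(c_ee * E[k] - c_ei * I[k] + P[k])) / tau
    dI = (-I[k] + S(c_ie * E[k] - c_ii * I[k])) / tau
    E[k + 1] = E[k] + dt * dE
    I[k + 1] = I[k] + dt * dI

print(f"E range: {E.min():.2f} to {E.max():.2f}")
```

Because the drive P(t) enters the sigmoid, the fast population activity is pushed up and down at the theta rhythm, which is the qualitative cross-frequency mechanism the abstract describes.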

  8. Towards an integrated model of floodplain hydrology representing feedbacks and anthropogenic effects

    Science.gov (United States)

    Andreadis, K.; Schumann, G.; Voisin, N.; O'Loughlin, F.; Tesfa, T. K.; Bates, P.

    2017-12-01

    The exchange of water between hillslopes, river channels and floodplain can be quite complex and the difficulty in capturing the mechanisms behind it is exacerbated by the impact of human activities such as irrigation and reservoir operations. Although there has been a vast body of work on modeling hydrological processes, most of the resulting models have been limited with regards to aspects of the coupled human-natural system. For example, hydrologic models that represent processes such as evapotranspiration, infiltration, interception and groundwater dynamics often neglect anthropogenic effects or do not adequately represent the inherently two-dimensional floodplain flow. We present an integrated modeling framework that is comprised of the Variable Infiltration Capacity (VIC) hydrology model, the LISFLOOD-FP hydrodynamic model, and the Water resources Management (WM) model. The VIC model solves the energy and water balance over a gridded domain and simulates a number of hydrologic features such as snow, frozen soils, lakes and wetlands, while also representing irrigation demand from cropland areas. LISFLOOD-FP solves an approximation of the Saint-Venant equations to efficiently simulate flow in river channels and the floodplain. The implementation of WM accommodates a variety of operating rules in reservoirs and withdrawals due to consumptive demands, allowing the successful simulation of regulated flow. The models are coupled so as to allow feedbacks between their corresponding processes, therefore providing the ability to test different hypotheses about the floodplain hydrology of large-scale basins. We test this integrated framework over the Zambezi River basin by simulating its hydrology from 2000-2010, and evaluate the results against remotely sensed observations. Finally, we examine the sensitivity of streamflow and water inundation to changes in reservoir operations, precipitation and temperature.
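The coupling loop described above can be sketched with toy stand-ins for the three components. Everything here is an illustrative assumption: the real VIC, WM, and LISFLOOD-FP models exchange gridded fields and solve full physics, not the scalar linear-reservoir placeholders used below.

```python
# Hedged sketch of a VIC -> WM -> LISFLOOD-FP coupling loop with toy physics.

def vic_step(precip, soil, k_runoff=0.3):
    """Toy water balance: a fraction of soil storage leaves as runoff."""
    soil += precip
    runoff = k_runoff * soil
    return runoff, soil - runoff

def wm_step(inflow, storage, target_release=2.0, capacity=100.0):
    """Toy operating rule: steady target release, spill above capacity."""
    storage += inflow
    release = min(storage, target_release + max(0.0, storage - capacity))
    return release, storage - release

def route_step(release, channel, k_route=0.5):
    """Toy linear-reservoir routing in place of 2-D floodplain flow."""
    channel += release
    outflow = k_route * channel
    return outflow, channel - outflow

soil, storage, channel = 10.0, 50.0, 5.0
flows = []
for p in [3.0, 8.0, 0.0, 12.0, 1.0]:         # illustrative precipitation series
    runoff, soil = vic_step(p, soil)
    release, storage = wm_step(runoff, storage)
    outflow, channel = route_step(release, channel)
    flows.append(round(outflow, 2))

print(flows)
```

The sensitivity experiments mentioned in the abstract correspond to perturbing the inputs (precipitation, temperature) or the `wm_step` rule and comparing the resulting `flows`.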

  9. Site-scale groundwater flow modelling of Beberg

    Energy Technology Data Exchange (ETDEWEB)

    Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden); Walker, D. [Duke Engineering and Services (United States); Hartley, L. [AEA Technology, Harwell (United Kingdom)

    1999-08-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant-head boundary conditions from a modified version of the deterministic regional-scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10^-4, and a flow-wetted surface of a_r = 1.0 m^2/(m^3 rock) suggest the following statistics for the Base Case: the median travel time is 56 years; the median canister flux is 1.2 × 10^-3 m/year; the median F-ratio is 5.6 × 10^5 year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. 
Variability within realisations indicates
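The Base Case medians quoted above are mutually consistent: with spatially constant flow porosity and flow-wetted surface, the F-ratio follows from the travel time as F = a_r · t / ε_f. A quick check:

```python
# Relation between F-ratio, advective travel time, flow porosity and
# flow-wetted surface (assumes a_r and eps_f constant along the path):
#   t = eps_f * integral(dl/q),  F = a_r * integral(dl/q)  =>  F = a_r * t / eps_f
a_r = 1.0        # flow-wetted surface, m^2 per m^3 rock
eps_f = 1.0e-4   # flow porosity
t = 56.0         # median advective travel time, years

F = a_r * t / eps_f
print(f"F-ratio = {F:.1e} year/m")   # matches the quoted 5.6e5 year/m
```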

  10. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Full Text Available Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. 
Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
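The two water-balance screens described above reduce to simple per-basin checks: flag basins whose runoff coefficient Q/P exceeds 1 (runoff exceeding precipitation, e.g. snow undercatch) and basins whose apparent losses P − Q exceed the potential-evaporation limit. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Hedged sketch of pre-modelling data screening; all values are
# illustrative assumptions, not the study's data.
P  = np.array([800.0, 400.0, 1200.0, 300.0])   # precipitation, mm/yr
PE = np.array([600.0, 900.0,  700.0, 500.0])   # potential evaporation, mm/yr
Q  = np.array([500.0, 450.0,  100.0, 250.0])   # discharge, mm/yr

runoff_coeff = Q / P
too_high_rc = runoff_coeff > 1.0      # likely precipitation undercatch
loss_exceeds_pe = (P - Q) > PE        # water balance cannot close

print(too_high_rc.tolist())           # second basin flagged
print(loss_exceeds_pe.tolist())       # third basin flagged
```

Basins that trip either flag would be treated as potentially disinformative before any model calibration or evaluation.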

  11. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved scale simulations with the volume fractions (cf, af mix). In unresolved scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i (u_i − u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved scale simulations and for unresolved scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.
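The two-step cf → pf → af conversion for an unresolved-scale cell can be sketched with first-order transfer rates. The rates k1, k2 below are illustrative assumptions standing in for the turbulent-stirring and diffusion closures; the point is that the transfers conserve total fluid volume fraction while shifting material toward the atomically mixed state.

```python
# Hedged sketch: unmixed fraction (cf) converts to a heterogeneously
# mixed sub-grid fraction (pf) by stirring, which then converts to the
# atomically mixed fraction (af) by diffusion.
k1, k2, dt = 2.0, 0.5, 0.01
cf, pf, af = 1.0, 0.0, 0.0

for _ in range(1000):                 # integrate to t = 10
    d_cp = k1 * cf * dt               # cf -> pf (turbulent stirring)
    d_pa = k2 * pf * dt               # pf -> af (diffusion)
    cf -= d_cp
    pf += d_cp - d_pa
    af += d_pa

print(round(cf + pf + af, 12))        # total fluid volume fraction is conserved
```

At late times nearly all material ends up in af, which is the fraction that would enter the reactivity calculation.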

  12. Enhanced learning through scale models and see-thru visualization

    International Nuclear Information System (INIS)

    Kelley, M.D.

    1987-01-01

    The development of PowerSafety International's See-Thru Power Plant has provided the nuclear industry with a bridge that can span the gap between the part-task simulator and the full-scope, high-fidelity plant simulator. The principle behind the See-Thru Power Plant is to provide the use of sensory experience in nuclear training programs. The See-Thru Power Plant is a scaled-down, fully functioning model of a commercial nuclear power plant, equipped with a primary system, secondary system, and control console. The major components are constructed of glass, thus permitting visual conceptualization of a working nuclear power plant.

  13. Development of in-situ product removal strategies in biocatalysis applying scaled-down unit operations

    DEFF Research Database (Denmark)

    Heintz, Søren; Börner, Tim; Ringborg, Rolf Hoffmeyer

    2017-01-01

    different process steps while operating it as a combined system, giving the possibility to test and characterize the performance of novel process concepts and biocatalysts with minimal influence of inhibitory products. Here the capabilities of performing process development by applying scaled-down unit operations are highlighted through a case study investigating the asymmetric synthesis of 1-methyl-3-phenylpropylamine (MPPA) using ω-transaminase, an enzyme in the sub-family of amino transferases (ATAs). An on-line HPLC system was applied to avoid manual sample handling and to semi

  14. MEGAPOLI: concept of multi-scale modelling of megacity impact on air quality and climate

    Science.gov (United States)

    Baklanov, A.; Lawrence, M.; Pandis, S.; Mahura, A.; Finardi, S.; Moussiopoulos, N.; Beekmann, M.; Laj, P.; Gomes, L.; Jaffrezo, J.-L.; Borbon, A.; Coll, I.; Gros, V.; Sciare, J.; Kukkonen, J.; Galmarini, S.; Giorgi, F.; Grimmond, S.; Esau, I.; Stohl, A.; Denby, B.; Wagner, T.; Butler, T.; Baltensperger, U.; Builtjes, P.; van den Hout, D.; van der Gon, H. D.; Collins, B.; Schluenzen, H.; Kulmala, M.; Zilitinkevich, S.; Sokhi, R.; Friedrich, R.; Theloke, J.; Kummer, U.; Jalkinen, L.; Halenka, T.; Wiedensholer, A.; Pyle, J.; Rossow, W. B.

    2010-11-01

    The EU FP7 Project MEGAPOLI: "Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation" (http://megapoli.info) brings together leading European research groups, state-of-the-art scientific tools and key players from non-European countries to investigate the interactions among megacities, air quality and climate. MEGAPOLI bridges the spatial and temporal scales that connect local emissions, air quality and weather with global atmospheric chemistry and climate. The suggested concept of multi-scale integrated modelling of megacity impact on air quality and climate, and vice versa, is discussed in the paper. It requires considering different spatial and temporal dimensions: time scales from seconds and hours (to understand the interaction mechanisms) up to years and decades (to consider the climate effects); spatial resolutions, with model down- and up-scaling from street to global scale; and two-way interactions between meteorological and chemical processes.

  15. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted in numerous studies. Among the controlling factors, the gravitational acceleration (g) acting on the scale models was treated as a constant (Earth's gravity) in most analogue model studies, and only a few studies considered larger gravitational accelerations by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow a large scale-down and accelerated deformation driven by density differences, such as salt diapirism, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST) allows scale models with surface areas up to 70 by 70 cm under the maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of a back-arc basin. Acknowledgement This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
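The similarity rule that centrifuge modelling exploits is that self-weight stress scales as ρ·g·L: a model scaled down by a factor N but spun at N times Earth's gravity reproduces prototype stresses. A minimal sketch with illustrative numbers (the density, depth, and g-level below are assumptions):

```python
rho = 2700.0          # crustal rock density, kg/m^3 (assumed)
g = 9.81              # m/s^2
L_proto = 10_000.0    # prototype depth of interest, m
N = 100.0             # geometric scale factor = centrifuge g-level

stress_proto = rho * g * L_proto              # prototype lithostatic stress, Pa
stress_model = rho * (N * g) * (L_proto / N)  # same material at 1/N size under N*g

print(abs(stress_proto - stress_model) < 1e-3)   # stresses match
```

This is why a centimetre-scale model in a 240 g-ton centrifuge can develop the density-driven deformation (e.g. diapirism) of a kilometre-scale prototype.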

  16. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine is investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  17. Analysis of Lightning-induced Impulse Magnetic Fields in the Building with an Insulated Down Conductor

    Science.gov (United States)

    Du, Patrick Y.; Zhou, Qi-Bin

    This paper presents an analysis of lightning-induced magnetic fields in a building. The building of concern is protected by a lightning protection system with an insulated down conductor. A system model for the metallic structure of the building is first constructed using a circuit approach. The circuit model of the insulated down conductor is discussed extensively, and explicit expressions for the circuit parameters are presented. The system model was verified experimentally in the laboratory. The modeling approach is applied to analyze the impulse magnetic fields in a full-scale building during a direct lightning strike. It is found that the impulse magnetic field is significantly high near the down conductor. The field is attenuated if the down conductor is moved to a column in the building, and can be reduced further if the down conductor is housed in an earthed metal pipe. Recommendations for protecting critical equipment against lightning-induced magnetic fields are also provided.

  18. High-resolution Continental Scale Land Surface Model incorporating Land-water Management in United States

    Science.gov (United States)

    Shin, S.; Pokhrel, Y. N.

    2016-12-01

    Land surface models have been used to assess water resources sustainability under a changing Earth environment and increasing human water needs. Overwhelming observational records indicate that human activities have ubiquitous and pertinent effects on the hydrologic cycle; however, these activities have been crudely represented in large-scale land surface models. In this study, we enhance an integrated continental-scale land hydrology model named Leaf-Hydro-Flood to better represent land-water management. The model is implemented at high resolution (5 km grids) over the continental US. Surface water and groundwater are withdrawn based on actual practices. Newly added irrigation, water diversion, and dam operation schemes allow better simulations of stream flows, evapotranspiration, and infiltration. Results for various hydrologic fluxes and stores from two sets of simulations (one with and the other without human activities) are compared over a range of river basin and aquifer scales. The improved simulations of land hydrology have the potential to provide a consistent modeling framework for human-water-climate interactions.

  19. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Full Text Available Because of the scale and varied shapes of down in an image, it is difficult for traditional image recognition methods to correctly recognize the type of down image and reach the required recognition accuracy, even for a Traditional Convolutional Neural Network (TCNN). To deal with the above problems, a Deep Convolutional Neural Network (DCNN) for down image classification is constructed, and a new weight initialization method is proposed. Firstly, the salient regions of a down image are cut from the image using a visual saliency model. Then, these salient regions are used to train a sparse autoencoder to obtain a collection of convolutional filters that accord with the statistical characteristics of the dataset. Finally, a DCNN with an Inception module and its variants is constructed. To improve the recognition accuracy, the depth of the network is increased. The experimental results indicate that the constructed DCNN increases the recognition accuracy by 2.7% compared to the TCNN when recognizing down in images. The convergence rate of the proposed DCNN with the new weight initialization method is improved by 25.5% compared to the TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition
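The weight-initialization idea described above can be sketched in miniature: train a small tied-weight autoencoder with an L1 sparsity penalty on image patches, then reshape the learned encoder weights into first-layer convolution filters. The patch data, sizes, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 64))     # stand-in for 8x8 salient-region patches

n_hidden, lr, l1 = 16, 0.01, 1e-3
W = 0.1 * rng.standard_normal((64, n_hidden))

def loss(W):
    h = np.tanh(patches @ W)
    return float(np.mean((h @ W.T - patches) ** 2))

loss0 = loss(W)
for _ in range(300):
    h = np.tanh(patches @ W)                 # encoder activations
    err = h @ W.T - patches                  # reconstruction error
    grad_h = err @ W + l1 * np.sign(h)       # backprop through decoder + L1 sparsity term
    grad_W = (patches.T @ (grad_h * (1 - h ** 2)) + err.T @ h) / len(patches)
    W -= lr * grad_W

filters = W.T.reshape(n_hidden, 8, 8)        # 16 conv filters to seed the DCNN's first layer
print(loss(W) < loss0, filters.shape)
```

The `filters` array plays the role of the data-adapted initial kernels; in the full method these would replace random initialization in the DCNN's first convolutional layer.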

  20. Ultra Scale-Down Characterization of the Impact of Conditioning Methods for Harvested Cell Broths on Clarification by Continuous Centrifugation—Recovery of Domain Antibodies from rec E. coli

    Science.gov (United States)

    Chatel, Alex; Kumpalume, Peter; Hoare, Mike

    2014-01-01

    The processing of harvested E. coli cell broths is examined where the expressed protein product has been released into the extracellular space. Pre-treatment methods such as freeze–thaw, flocculation, and homogenization are studied. The resultant suspensions are characterized in terms of the particle size distribution, sensitivity to shear stress, rheology and solids volume fraction, and, using ultra scale-down methods, the predicted ability to clarify the material using industrial scale continuous flow centrifugation. A key finding was the potential of flocculation methods both to aid the recovery of the particles and to cause the selective precipitation of soluble contaminants. While the flocculated material is severely affected by process shear stress, the impact on the very fine end of the size distribution is relatively minor and hence the predicted performance was only diminished to a small extent, for example, from 99.9% to 99.7% clarification compared with 95% for autolysate and 65% for homogenate at equivalent centrifugation conditions. The lumped properties as represented by ultra scale-down centrifugation results were correlated with the basic properties affecting sedimentation including particle size distribution, suspension viscosity, and solids volume fraction. Grade efficiency relationships were used to allow for the particle and flow dynamics affecting capture in the centrifuge. The size distribution below a critical diameter dependent on the broth pre-treatment type was shown to be the main determining factor affecting the clarification achieved. Biotechnol. Bioeng. 2014;111: 913–924. © 2013 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:24284936
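Ultra scale-down predictions of this kind typically rest on sigma theory: lab and industrial machines are compared at equal Q/(c·Σ), where Σ is the equivalent settling area and c a correction for non-ideal flow. A minimal sketch; every number here, and the choice of angle convention in the disc-stack Σ formula, is an illustrative assumption rather than a value from the study.

```python
import numpy as np

g = 9.81

# USD bench device treated as an equivalent settling area (assumed values)
sigma_lab = 0.05          # m^2
Q_lab = 1.0e-8            # m^3/s giving the target clarification in USD tests
c_lab = 1.0               # near-ideal flow in the small bowl (assumption)

# Disc-stack machine: one common form of the equivalent-area formula,
# Sigma = 2*pi*n*omega^2*(r_o^3 - r_i^3) / (3*g*tan(theta))
n, rpm = 100, 8000
omega = 2 * np.pi * rpm / 60
r_o, r_i, theta = 0.08, 0.03, np.deg2rad(40)
sigma_ind = 2 * np.pi * n * omega**2 * (r_o**3 - r_i**3) / (3 * g * np.tan(theta))
c_ind = 0.4               # efficiency correction for non-ideal flow (assumption)

# Flow rate predicted to give the same clarification at scale: equal Q/(c*Sigma)
Q_ind = Q_lab * (c_ind * sigma_ind) / (c_lab * sigma_lab)
print(f"Sigma_ind = {sigma_ind:.3g} m^2, Q_ind = {Q_ind * 3600:.3g} m^3/h")
```

The grade-efficiency refinement mentioned in the abstract would replace this single operating-point match with a capture-probability curve over the particle size distribution.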

  1. Representing soakaways in a physically distributed urban drainage model – Upscaling individual allotments to an aggregated scale

    DEFF Research Database (Denmark)

    Roldin, Maria Kerstin; Mark, Ole; Kuczera, George

    2012-01-01

    The increased load on urban stormwater systems due to climate change and growing urbanization can be partly alleviated by using soakaways and similar infiltration techniques. However, while soakaways are usually small-scale structures, most urban drainage network models operate on a larger spatial scale. The soakaway model computes the infiltration rate based on water depth and soil properties for each time step, and controls the removal of water from the urban drainage model. The model is intended to be used to assess the impact of soakaways on urban drainage networks. The model is tested using field data and shown to simulate the behavior of individual soakaways well. Six upscaling methods to aggregate individual soakaway units with varying saturated hydraulic conductivity (K) in the surrounding soil have been investigated. In the upscaled model, the weighted geometric mean hydraulic conductivity of individual allotments is found to provide
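The weighted geometric mean aggregation of allotment conductivities can be sketched directly: K_eff = exp(Σ wᵢ·ln Kᵢ). The allotment values and area weights below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: upscale saturated hydraulic conductivity from
# individual allotments to one aggregated soakaway unit using an
# area-weighted geometric mean.
K = np.array([1e-6, 5e-6, 2e-5, 1e-4])          # m/s per allotment (assumed)
area = np.array([400.0, 350.0, 500.0, 250.0])   # allotment areas, m^2 (weights)

w = area / area.sum()
K_eff = np.exp(np.sum(w * np.log(K)))
print(f"K_eff = {K_eff:.2e} m/s")
```

The geometric mean damps the influence of a single highly permeable allotment relative to an arithmetic mean, which is one reason it can represent aggregate infiltration behavior well.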

  2. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  3. Multi-scale modeling of urban air pollution: development and application of a Street-in-Grid model (v1.0) by coupling MUNICH (v1.0) and Polair3D (v1.8.1)

    OpenAIRE

    Y. Kim; Y. Wu; C. Seigneur; Y. Roustan

    2018-01-01

    A new multi-scale model of urban air pollution is presented. This model combines a chemistry–transport model (CTM) that includes a comprehensive treatment of atmospheric chemistry and transport on spatial scales down to 1 km and a street-network model that describes the atmospheric concentrations of pollutants in an urban street network. The street-network model is the Model of Urban Network of Intersecting Canyons and Highways (MUNICH), which consists of two main components...

  4. Representing Degree Distributions, Clustering, and Homophily in Social Networks With Latent Cluster Random Effects Models.

    Science.gov (United States)

    Krivitsky, Pavel N; Handcock, Mark S; Raftery, Adrian E; Hoff, Peter D

    2009-07-01

    Social network data often involve transitivity, homophily on observed attributes, clustering, and heterogeneity of actor degrees. We propose a latent cluster random effects model to represent all of these features, and we describe a Bayesian estimation method for it. The model is applicable to both binary and non-binary network data. We illustrate the model using two real datasets. We also apply it to two simulated network datasets with the same, highly skewed, degree distribution, but very different network behavior: one unstructured and the other with transitivity and clustering. Models based on degree distributions, such as scale-free, preferential attachment and power-law models, cannot distinguish between these very different situations, but our model does.
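
    A minimal simulation in the spirit of the model described above (a sketch, not the authors' code): each actor receives a cluster label, a latent position near its cluster centre, and an actor-level random effect; tie probabilities decay with latent distance (producing clustering and transitivity) while the random effects skew the degree distribution. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lcre_network(n=50, clusters=2, sigma_pos=0.5, sigma_re=0.5):
    """Sketch of a latent cluster random effects graph."""
    z = rng.integers(clusters, size=n)                  # cluster labels
    centres = rng.normal(0.0, 2.0, size=(clusters, 2))  # cluster centres
    pos = centres[z] + rng.normal(0.0, sigma_pos, (n, 2))
    delta = rng.normal(0.0, sigma_re, n)                # degree random effects
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    eta = 1.0 - d + delta[:, None] + delta[None, :]     # logit of tie prob
    p = 1.0 / (1.0 + np.exp(-eta))
    a = (rng.random((n, n)) < p).astype(int)
    a = np.triu(a, 1)
    return a + a.T, z                                   # undirected, no loops

adj, labels = simulate_lcre_network()
```

    Varying sigma_re alone reshapes the degree distribution without touching the clustering structure, which is precisely the separation of features the model is designed to provide.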

  5. Fluid-Mediated Stochastic Self-Assembly at Centimetric and Sub-Millimetric Scales: Design, Modeling, and Control

    Directory of Open Access Journals (Sweden)

    Bahar Haghighat

    2016-08-01

    Full Text Available Stochastic self-assembly provides promising means for building micro-/nano-structures with a variety of properties and functionalities. Numerous studies have been conducted on the control and modeling of the process in engineered self-assembling systems constituted of modules with varied capabilities, ranging from completely reactive nano-/micro-particles to intelligent miniaturized robots. Depending on the capabilities of the constituting modules, different approaches have been utilized for controlling and modeling these systems. In the quest for a unifying control and modeling framework, and within the broader perspective of investigating how stochastic control strategies can be adapted from the centimeter scale down to the (sub-)millimeter scale, as well as from mechatronic to MEMS-based technology, this work presents the outcomes of our research on self-assembly during the past few years. As the first step, we leverage an experimental platform to study self-assembly of water-floating passive modules at the centimeter scale. A dedicated computational framework is developed for real-time tracking, modeling and control of the formation of specific structures. Using a similar approach, we then demonstrate controlled self-assembly of microparticles into clusters of a preset dimension in a microfluidic chamber, where the control loop is closed again through real-time tracking customized for a much faster system dynamics. Finally, with the aim of distributing the intelligence and realizing programmable self-assembly, we present a novel experimental system for fluid-mediated programmable stochastic self-assembly of active modules at the centimeter scale. The system is built around the water-floating 3-cm-sized Lily robots, specifically designed to be operative in large swarms, and allows for exploring the whole range of fully-centralized to fully-distributed control strategies. The outcomes of our research efforts extend the state-of-the-art methodologies...

  6. Identifying optimal models to represent biochemical systems.

    Directory of Open Access Journals (Sweden)

    Mochamad Apri

    Full Text Available Biochemical systems involving a high number of components with intricate interactions often lead to complex models containing a large number of parameters. Although a large model could describe in detail the mechanisms that underlie the system, its very large size may hinder us in understanding the key elements of the system. Also in terms of parameter identification, large models are often problematic. Therefore, a reduced model may be preferred to represent the system. Yet, in order to efficaciously replace the large model, the reduced model should have the same ability as the large model to produce reliable predictions for a broad set of testable experimental conditions. We present a novel method to extract an "optimal" reduced model from a large model to represent biochemical systems by combining a reduction method and a model discrimination method. The former assures that the reduced model contains only those components that are important to produce the dynamics observed in given experiments, whereas the latter ensures that the reduced model gives a good prediction for any feasible experimental conditions that are relevant to answer questions at hand. These two techniques are applied iteratively. The method reveals the biological core of a model mathematically, indicating the processes that are likely to be responsible for certain behavior. We demonstrate the algorithm on two realistic model examples. We show that in both cases the core is substantially smaller than the full model.

  7. Prediction and verification of centrifugal dewatering of P. pastoris fermentation cultures using an ultra scale-down approach.

    Science.gov (United States)

    Lopes, A G; Keshavarz-Moore, E

    2012-08-01

    Recent years have seen a dramatic rise in fermentation broth cell densities and a shift to extracellular product expression in microbial cells. As a result, dewatering characteristics during cell separation are of importance, as any liquor trapped in the sediment results in loss of product, and thus a decrease in product recovery. In this study, an ultra scale-down (USD) approach was developed to enable the rapid assessment of the dewatering performance of pilot-scale centrifuges with intermittent solids discharge. The results were then verified at scale for two types of pilot-scale centrifuges: a tubular-bowl centrifuge and a disk-stack centrifuge. Initial experiments showed that employing a laboratory-scale centrifugal mimic based on using a comparable feed concentration to that of the pilot-scale centrifuge does not successfully predict the dewatering performance at scale (P-value centrifuge. Initial experiments used Baker's yeast feed suspensions followed by fresh Pichia pastoris fermentation cultures. This work presents a simple and novel USD approach to predict dewatering levels in two types of pilot-scale centrifuges using small quantities of feedstock (centrifuge needs to be operated, reducing the need for repeated pilot-scale runs during early stages of process development. Copyright © 2012 Wiley Periodicals, Inc.

  8. Scaling of Thermal-Hydraulic Phenomena and System Code Assessment

    International Nuclear Information System (INIS)

    Wolfert, K.

    2008-01-01

    In the last five decades large efforts have been undertaken to provide reliable thermal-hydraulic system codes for the analyses of transients and accidents in nuclear power plants. Many separate effects tests and integral system tests were carried out to establish a data base for code development and code validation. In this context the question has to be answered to what extent the results of down-scaled test facilities represent the thermal-hydraulic behaviour expected in a full-scale nuclear reactor under accident conditions. Scaling principles, developed by many scientists and engineers, present a scientific-technical basis and give valuable orientation for the design of test facilities. However, it is impossible for a down-scaled facility to reproduce all physical phenomena in the correct temporal sequence and in the kind and strength of their occurrence. The designer needs to optimize a down-scaled facility for the processes of primary interest. This leads unavoidably to scaling distortions of other processes of lesser importance. Taking these weak points into account, a goal-oriented code validation strategy is required, based on the analyses of separate effects tests and integral system tests as well as of transients that have occurred in full-scale nuclear reactors. The CSNI validation matrices are an excellent basis for fulfilling this task. Separate effects tests at full scale play an important role here.

  9. Assessment of Prevalence of Persons with Down Syndrome: A Theory-Based Demographic Model

    Science.gov (United States)

    de Graaf, Gert; Vis, Jeroen C.; Haveman, Meindert; van Hove, Geert; de Graaf, Erik A. B.; Tijssen, Jan G. P.; Mulder, Barbara J. M.

    2011-01-01

    Background: The Netherlands are lacking reliable empirical data in relation to the development of birth and population prevalence of Down syndrome. For the UK and Ireland there are more historical empirical data available. A theory-based model is developed for predicting Down syndrome prevalence in the Netherlands from the 1950s onwards. It is…

  10. Picturing and modelling catchments by representative hillslopes

    Science.gov (United States)

    Loritz, Ralf; Hassler, Sibylle; Jackisch, Conrad; Zehe, Erwin

    2016-04-01

    Hydrological modelling studies often start with a qualitative sketch of the hydrological processes of a catchment. These so-called perceptual models are often pictured as hillslopes and are generalizations displaying only the dominant and relevant processes of a catchment or hillslope. The problem with these models is that they are prone to being overly predetermined by the designer's background and experience. Moreover, it is difficult to know whether such a picture is correct and contains enough complexity to represent the system under study. Nevertheless, because of their qualitative form, perceptual models are easy to understand and can be an excellent tool for multidisciplinary exchange between researchers with different backgrounds, helping to identify the dominant structures and processes in a catchment. In our study we explore whether a perceptual model built upon an intensive field campaign may serve as a blueprint for setting up representative hillslopes in a hydrological model to reproduce the functioning of two distinctly different catchments. We use a physically based 2D hillslope model which has proven capable of being driven by measured soil-hydrological parameters. A key asset of our approach is that the model structure itself remains a picture of the perceptual model, which is benchmarked against a) geophysical images of the subsurface and b) observed dynamics of discharge, distributed state variables and fluxes (soil moisture, matric potential and sap flow). Within this approach we are able to set up two behavioral model structures which allow the simulation of the most important hydrological fluxes and state variables in good accordance with available observations within the 19.4 km2 Colpach catchment and the 4.5 km2 Wollefsbach catchment in Luxembourg, without the necessity of calibration. This corroborates, contrary to widespread opinion, that a) lower mesoscale catchments may be modelled by representative hillslopes and b) physically...

  11. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that are supposed to be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.
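
    As a toy illustration of the idea (not the authors' representativeness function), one can pick, for each target percentile of a key output such as NPV, the scenario whose value sits at that percentile of the sorted set; the published method additionally matches cross-plots and risk curves through optimization.

```python
def representative_models(values, probs=(0.1, 0.5, 0.9)):
    """Pick the index of the scenario sitting at each target percentile
    of an output variable (e.g. NPV). This captures only the risk-curve
    part of representative-scenario selection."""
    ordered = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    picks = []
    for p in probs:
        target = ordered[min(int(p * n), n - 1)]
        if target not in picks:
            picks.append(target)
    return picks

# ten hypothetical scenario NPVs; returns the P10/P50/P90 scenarios
demo = representative_models([3.1, 9.4, 1.2, 7.7, 5.5, 2.8, 8.1, 4.9, 6.3, 0.5])
```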

  12. Calibration of the Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    Zyvoloski, G. A.

    2001-01-01

    The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M and O 1999a)

  13. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall records at fine time scales varying from daily down to a 1-min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent, rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures to modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from the daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in a better performance in terms of skewness, rainfall extremes and dry proportions.
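
    The simplest of the adjusting procedures referred to above is proportional adjustment: rescale the synthetic lower-level depths so they aggregate exactly to the observed higher-level depth. A minimal sketch; the package's actual adjusting procedures are more sophisticated.

```python
def proportional_adjust(fine_depths, coarse_total):
    """Rescale synthetic fine-scale rainfall depths so that they sum
    exactly to the coarser-scale observation (e.g. a daily depth split
    into sub-hourly steps), preserving their relative proportions."""
    s = sum(fine_depths)
    if s == 0:
        return list(fine_depths)  # dry interval: nothing to adjust
    return [x * coarse_total / s for x in fine_depths]

# four synthetic depths adjusted to a 4.0 mm daily observation
hourly = proportional_adjust([0.2, 1.1, 0.0, 0.7], coarse_total=4.0)
```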

  14. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models present in the literature are the eddy viscosity-type models. In these models the subgrid scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e. they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and the SGS stress tensor are aligned; and they assume that a local balance exists between the SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy viscosity-type models. The SSM models, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified Cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e. they are not able to ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that this drawback is due to the fact that such models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale similarity LES model that is able to grant an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and in terms of the SGS kinetic energy (computed by solving its balance equation).

  15. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component often makes a major contribution to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  16. Improving National Water Modeling: An Intercomparison of two High-Resolution, Continental Scale Models, CONUS-ParFlow and the National Water Model

    Science.gov (United States)

    Tijerina, D.; Gochis, D.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    Development of integrated hydrology modeling systems that couple atmospheric, land surface, and subsurface flow is a growing trend in hydrologic modeling. Using an integrated modeling framework, subsurface hydrologic processes, such as lateral flow and soil moisture redistribution, are represented in a single cohesive framework alongside surface processes like overland flow and evapotranspiration. There is a need for these more intricate models in comprehensive hydrologic forecasting and water management over large spatial areas, specifically the Continental US (CONUS). Currently, two high-resolution, coupled hydrologic modeling applications have been developed for this domain: CONUS-ParFlow, built using the integrated hydrologic model ParFlow, and the National Water Model, which uses the NCAR Weather Research and Forecasting hydrological extension package (WRF-Hydro). Both ParFlow and WRF-Hydro include land surface models and overland flow, and both take advantage of parallelization and high-performance computing (HPC) capabilities; however, they have different approaches to overland and subsurface flow and to groundwater-surface water interactions. Accurately representing large domains remains a challenge, considering the difficult task of representing complex hydrologic processes, the computational expense, and the extensive data needs; both models have accomplished this, but they differ in approach and remain difficult to validate. A further exploration of effective methodologies to accurately represent large-scale hydrology with integrated models is needed to advance this growing field. Here we compare the outputs of CONUS-ParFlow and the National Water Model to each other and to observations to study the performance of hyper-resolution models over large domains. Models were compared over a range of scales for major watersheds within the CONUS, with a specific focus on the Mississippi, Ohio, and Colorado River basins. We use a novel set of approaches and analyses for this comparison...

  17. Test program of the drop tests with full scale and 1/2.5 scale models of spent nuclear fuel transport and storage cask

    International Nuclear Information System (INIS)

    Kuri, S.; Matsuoka, T.; Kishimoto, J.; Ishiko, D.; Saito, Y.; Kimura, T.

    2004-01-01

    MHI has been developing five types of spent nuclear fuel transport and storage casks (the MSF cask fleet) as a cask line-up. In order to demonstrate their safety, a representative cask model for the cask fleet has been designed for the drop tests regulated in IAEA TS-R-1. Drop tests with a full-scale and a 1/2.5-scale model are to be performed. This paper describes the test program of the drop tests and the manufacturing process of the scale models used for the tests.
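
    For a replica-scaled model (same materials, dropped from the same height, hence equal impact velocity and equal stress), textbook similitude gives model accelerations larger, and impact durations shorter, by the geometric scale factor. A sketch of reading prototype response from a 1/2.5-scale test; the numbers are invented, and this is generic similitude reasoning, not MHI's actual test procedure.

```python
def prototype_from_model(scale, model_accel, model_duration):
    """Replica-scaling relations for a 1/scale drop-test model with
    equal materials and impact velocity: prototype accelerations are
    smaller by the scale factor, durations longer by it."""
    return model_accel / scale, model_duration * scale

# hypothetical 1/2.5-scale measurement: 250 g peak over 4 ms
accel, dur = prototype_from_model(2.5, model_accel=250.0, model_duration=0.004)
```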

  18. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Science.gov (United States)

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
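
    Requirements 4 to 6 above can be caricatured in a few lines: converge a bottom-up saliency map with a top-down relevance term expressed as a ratio of excitation to inhibition, then saccade to the peak location only if it crosses a threshold. Function names and weightings are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def priority_map(bottom_up, excitation, inhibition, threshold=1.0):
    """Combine bottom-up saliency with top-down task relevance
    (excitation/inhibition ratio) into a priority map; return the peak
    location if it exceeds the saccade threshold, else None."""
    relevance = excitation / (inhibition + 1e-9)   # requirement 6
    priority = bottom_up * relevance               # requirement 4
    idx = np.unravel_index(np.argmax(priority), priority.shape)
    return idx if priority[idx] >= threshold else None  # requirement 5

# a single salient location with neutral top-down bias
demo = priority_map(np.array([[0.2, 1.5], [0.3, 0.1]]),
                    np.ones((2, 2)), np.ones((2, 2)))
```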

  19. STATISTICAL MODELS OF REPRESENTING INTELLECTUAL CAPITAL

    Directory of Open Access Journals (Sweden)

    Andreea Feraru

    2016-06-01

    Full Text Available This article, entitled Statistical Models of Representing Intellectual Capital, approaches and analyses the concept of intellectual capital, as well as the main models which can support entrepreneurs/managers in evaluating and quantifying the advantages of intellectual capital. Most authors examine intellectual capital from a static perspective and focus on the development of its various evaluation models. In this chapter we surveyed the classical static models: Sveiby, Edvinsson, Balanced Scorecard, as well as the canonical model of intellectual capital. Among the group of static models for evaluating organisational intellectual capital, the canonical model stands out. This model enables the structuring of organisational intellectual capital into human capital, structural capital and relational capital. Although the model is widely spread, it is a static one and can thus create a series of errors in the process of evaluation, because the three entities mentioned above are not independent from the viewpoint of their contents, as any logic of structuring complex entities requires.

  20. How well do basic models describe the turbidity currents coming down Monterey and Congo Canyon?

    Science.gov (United States)

    Cartigny, M.; Simmons, S.; Heerema, C.; Xu, J. P.; Azpiroz, M.; Clare, M. A.; Cooper, C.; Gales, J. A.; Maier, K. L.; Parsons, D. R.; Paull, C. K.; Sumner, E. J.; Talling, P.

    2017-12-01

    Turbidity currents rival rivers in their global capacity to transport sediment and organic carbon. Furthermore, turbidity currents break submarine cables that now transport >95% of our global data traffic. Accurate turbidity current models are thus needed to quantify their transport capacity and to predict the forces exerted on seafloor structures. Despite this need, existing numerical models are typically only calibrated with scaled-down laboratory measurements due to the paucity of direct measurements of field-scale turbidity currents. This lack of calibration thus leaves much uncertainty in the validity of existing models. Here we use the most detailed observations of turbidity currents yet acquired to validate one of the most fundamental models proposed for turbidity currents, the modified Chézy model. Direct measurements on which the validation is based come from two sites that feature distinctly different flow modes and grain sizes. The first are from the multi-institution Coordinated Canyon Experiment (CCE) in Monterey Canyon, California. An array of six moorings along the canyon axis captured at least 15 flow events that lasted up to hours. The second is the deep-sea Congo Canyon, where 10 finer grained flows were measured by a single mooring, each lasting several days. Moorings captured depth-resolved velocity and suspended sediment concentration at high resolution (turbidity currents; the modified Chézy model. This basic model has been very useful for river studies over the past 200 years, as it provides a rapid estimate of how flow velocity varies with changes in river level and energy slope. Chézy-type models assume that the gravitational force of the flow equals the friction of the river-bed. Modified Chézy models have been proposed for turbidity currents. However, the absence of detailed measurements of friction and sediment concentration within full-scale turbidity currents has forced modellers to make rough assumptions for these parameters. Here
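
    The Chézy-type balance described above equates the downslope driving force of the excess density, g' h S, with quadratic friction f U², giving U = sqrt(g' h S / f), where g' is the reduced gravity of the suspension. A sketch with an assumed friction factor and sediment concentration, not the calibrated values of the study.

```python
import math

def chezy_velocity(g_prime, h, slope, friction=0.004):
    """Modified Chézy-type estimate for a turbidity current:
    U = sqrt(g' * h * S / f), balancing the gravitational driving
    force against basal/interfacial friction. The friction factor
    is an illustrative assumption."""
    return math.sqrt(g_prime * h * slope / friction)

# dilute flow: volumetric concentration C = 0.2%, submerged specific
# gravity of quartz ~1.65, thickness 10 m, slope 0.005
g_prime = 9.81 * 1.65 * 0.002
u = chezy_velocity(g_prime, h=10.0, slope=0.005)
```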

  1. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

    Full text: A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-/microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. In particular, a multi-scale model is developed merging two scales: the nano-/microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear bands.

  2. A reduced-order modeling approach to represent subgrid-scale hydrological dynamics for land-surface simulations: application in a polygonal tundra landscape

    Science.gov (United States)

    Pau, G. S. H.; Bisht, G.; Riley, W. J.

    2014-09-01

    Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface-subsurface isothermal simulations were performed for summer months (June-September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998-2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10³) with very small relative approximation error (training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with
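
    The decomposition at the heart of the POD mapping method can be sketched with a thin SVD of a snapshot matrix; reconstructing a new field from its coefficients in the retained basis is the cheap operation behind the reported speedup. Synthetic low-rank data stands in here for the soil-moisture simulation output.

```python
import numpy as np

rng = np.random.default_rng(1)

# snapshot matrix: columns are fine-resolution fields from training runs
# (synthetic rank-3 data standing in for simulation output)
basis_true = rng.normal(size=(500, 3))
snapshots = basis_true @ rng.normal(size=(3, 40))

# proper orthogonal decomposition = thin SVD of the snapshot matrix
u_modes, s, vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3                                   # number of retained modes
pod = u_modes[:, :r]

# approximate a new fine-resolution field from its basis coefficients
new_field = basis_true @ rng.normal(size=3)
coeffs = pod.T @ new_field              # projection (the "mapping" step)
reconstructed = pod @ coeffs
err = np.linalg.norm(new_field - reconstructed) / np.linalg.norm(new_field)
```

    Because the synthetic test field lies in the span of the training snapshots, the relative error here is near machine precision; for real fields the error grows as the retained modes capture less of the variance.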

  3. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    Science.gov (United States)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km² in the US, Canada and Puerto Rico. We also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is under development. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  4. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  5. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    International Nuclear Information System (INIS)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-01-01

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will

  6. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-04-19

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will

  7. Scaling dimensions in spectroscopy of soil and vegetation

    Science.gov (United States)

    Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.

    2007-05-01

    The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce ( Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. 
Simultaneous spatial and temporal down-scaling
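The linear spectral unmixing reviewed in this record reduces, in its simplest form, to a least-squares inversion of the mixing model pixel = E·f, where the columns of E are endmember spectra and f holds abundance fractions. A minimal sketch with made-up endmember spectra (the four-band values and the soil/vegetation/shade labels are illustrative, not taken from the ROSIS example):

```python
import numpy as np

# Hypothetical endmember spectra (one column per endmember, one row per
# spectral band): soil, vegetation, shade.
E = np.array([[0.30, 0.05, 0.02],
              [0.35, 0.08, 0.02],
              [0.40, 0.45, 0.03],
              [0.42, 0.50, 0.03]])

true_f = np.array([0.6, 0.3, 0.1])   # abundance fractions, sum to 1
pixel = E @ true_f                   # noise-free mixed-pixel spectrum

# Ordinary least-squares inversion of the linear mixing model.
f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(f, 3))   # recovers the true fractions for this clean pixel
```

Operational unmixing adds constraints the sketch omits (non-negativity, sum-to-one) and, as the record stresses, depends heavily on how the endmembers themselves are selected.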

  8. Integrating the bottom-up and top-down approach to energy economy modelling. The case of Denmark

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    1998-01-01

    This paper presents results from an integration project covering Danish models based on bottom-up and top-down approaches to energy-economy modelling. The purpose of the project was to identify theoretical and methodological problems for integrating existing models for Denmark and to implement...... an integration of the models. The integration was established through a number of links between energy bottom-up modules and a macroeconomic model. In this integrated model it is possible to analyse both top-down instruments, such as taxes, and bottom-up instruments, such as regulation of technology...

  9. Top-down constraints on disturbance dynamics in the terrestrial carbon cycle: effects at global and regional scales

    Science.gov (United States)

    Bloom, A. A.; Exbrayat, J. F.; van der Velde, I.; Peters, W.; Williams, M.

    2014-12-01

    Large uncertainties persist in terrestrial carbon flux estimates on a global scale. In particular, the strongly coupled dynamics between net ecosystem productivity and disturbance C losses are poorly constrained. To gain an improved understanding of ecosystem C dynamics from regional to global scale, we apply a Markov Chain Monte Carlo based model-data-fusion approach within the CArbon DAta-MOdel fraMework (CARDAMOM). We assimilate MODIS LAI and burned area, plant-trait data, and use the Harmonized World Soil Database (HWSD) and maps of above-ground biomass as prior knowledge for initial conditions. We optimize model parameters based on (a) globally spanning observations and (b) ecological and dynamic constraints that force single parameter values and parameter inter-dependencies to be representative of real-world processes. We determine the spatial and temporal dynamics of major terrestrial C fluxes and model parameter values on a global scale (GPP = 123 ± 8 Pg C yr⁻¹ and NEE = -1.8 ± 2.7 Pg C yr⁻¹). We further show that the incorporation of disturbance fluxes, and accounting for their instantaneous or delayed effect, is of critical importance in constraining global C cycle dynamics, particularly in the tropics. In a higher-resolution case study centred on the Amazon Basin we show how fires not only trigger large instantaneous emissions of burned matter, but also how they are responsible for a sustained reduction of up to 50% in plant uptake following the depletion of biomass stocks. The combination of these two fire-induced effects leads to a 1 g C m⁻² d⁻¹ reduction in the strength of the net terrestrial carbon sink. Through our simulations at regional and global scale, we advocate the need to assimilate disturbance metrics in global terrestrial carbon cycle models to bridge the gap between globally spanning terrestrial carbon cycle data and the full dynamics of the ecosystem C cycle. 
Disturbances are especially important because their quick occurrence may have
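The model-data-fusion step in frameworks like CARDAMOM rests on Markov Chain Monte Carlo sampling of a parameter posterior. A minimal Metropolis random-walk sketch, with a single hypothetical flux parameter and synthetic pseudo-observations standing in for the real LAI and burned-area data streams (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Pseudo-observations of a flux with known Gaussian noise.
true_flux, sigma = 2.5, 0.4
obs = true_flux + sigma * rng.normal(size=50)

def log_post(theta):
    """Flat prior on [0, 10]; Gaussian likelihood."""
    if not 0.0 <= theta <= 10.0:
        return -np.inf
    return -0.5 * np.sum((obs - theta) ** 2) / sigma**2

# Metropolis random-walk sampler.
chain, theta = [], 5.0       # deliberately poor starting value
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.2 * rng.normal()          # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

post = np.array(chain[1000:])   # discard burn-in
print(post.mean())              # close to the pseudo-true flux of 2.5
```

The real framework samples dozens of coupled parameters under ecological constraints; the sketch only shows the accept/reject mechanics.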

  10. Diversity in the representation of large-scale circulation associated with ENSO-Indian summer monsoon teleconnections in CMIP5 models

    Science.gov (United States)

    Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.

    2018-04-01

    Realistic simulation of large-scale circulation patterns associated with El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. CMIP5 models have been classified into three groups based on the correlation between Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: models in group 1 (G1) overestimated El Niño-ISM teleconnections and group 3 (G3) models underestimated them, whereas these teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Anomalous low-level anticyclonic circulation anomalies over the southeastern TIO and western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west to 60° E and 120° E, respectively. This bias in circulation patterns implies dry wind advection from extratropics/midlatitudes to Indian subcontinent. In addition to this, large-scale upper level convergence together with lower level divergence over ISM region corresponding to El Niño are stronger in G1 models than in observations. Thus, unrealistic shift in low-level circulation centers corroborated by upper level circulation changes are responsible for overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models unlike in the observations. Further large-scale circulation anomalies over the Pacific and ISM region are misrepresented during El Niño years in G3 models. Too strong upper-level convergence away from Indian subcontinent and too weak WSNP cyclonic circulation are prominent in most of G3 models in which ENSO-ISM teleconnections are

  11. Continuum-Scale Modeling of Liquid Redistribution in a Stack of Thin Hydrophilic Fibrous Layers

    NARCIS (Netherlands)

    Tavangarrad, A.H.; Mohebbi, Behzad; Hassanizadeh, S.M.|info:eu-repo/dai/nl/074974424; Rosati, Rodrigo; Claussen, Jan; Blümich, Bernhard

    Macroscale three-dimensional modeling of fluid flow in a thin porous layer under unsaturated conditions is a challenging task. One major issue is that such layers do not satisfy the representative elementary volume length-scale requirement. Recently, a new approach, called reduced continua model

  12. Short-term memory in Down syndrome: applying the working memory model.

    Science.gov (United States)

    Jarrold, C; Baddeley, A D

    2001-10-01

    This paper is divided into three sections. The first reviews the evidence for a verbal short-term memory deficit in Down syndrome. Existing research suggests that short-term memory for verbal information tends to be impaired in Down syndrome, in contrast to short-term memory for visual and spatial material. In addition, problems of hearing or speech do not appear to be a major cause of difficulties on tests of verbal short-term memory. This suggests that Down syndrome is associated with a specific memory problem, which we link to a potential deficit in the functioning of the 'phonological loop' of Baddeley's (1986) model of working memory. The second section considers the implications of a phonological loop problem. Because a reasonable amount is known about the normal functioning of the phonological loop, and of its role in language acquisition in typical development, we can make firm predictions as to the likely nature of the short-term memory problem in Down syndrome, and its consequences for language learning. However, we note that the existing evidence from studies with individuals with Down syndrome does not fit well with these predictions. This leads to the third section of the paper, in which we consider key questions to be addressed in future research. We suggest that there are two questions to be answered, which follow directly from the contradictory results outlined in the previous section. These are 'What is the precise nature of the verbal short-term memory deficit in Down syndrome', and 'What are the consequences of this deficit for learning'. We discuss ways in which these questions might be addressed in future work.

  13. Numerical studies of fast ion slowing down rates in cool magnetized plasma using LSP

    Science.gov (United States)

    Evans, Eugene S.; Kolmes, Elijah; Cohen, Samuel A.; Rognlien, Tom; Cohen, Bruce; Meier, Eric; Welch, Dale R.

    2016-10-01

    In MFE devices, rapid transport of fusion products from the core into the scrape-off layer (SOL) could perform the dual roles of energy and ash removal. The first-orbit trajectories of most fusion products from small field-reversed configuration (FRC) devices will traverse the SOL, allowing those particles to deposit their energy in the SOL and be exhausted along the open field lines. Thus, the fast ion slowing-down time should affect the energy balance of an FRC reactor and its neutron emissions. However, the dynamics of fast ion energy loss processes under the conditions expected in the FRC SOL are not fully understood. We use LSP, a particle-in-cell code, to examine the effects of SOL density and background B-field on the slowing-down time of fast ions in a cool plasma. As we use explicit algorithms, these simulations must spatially resolve both ρe and λDe, as well as temporally resolve both Ωe and ωpe, increasing computation time. Scaling studies of the fast ion charge (Z) and background plasma density are in good agreement with unmagnetized slowing-down theory. Notably, Z-scaling represents a viable way to dramatically reduce the required CPU time for each simulation. This work was supported, in part, by DOE Contract Number DE-AC02-09CH11466.
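The unmagnetized slowing-down theory used as the benchmark in this record predicts a slowing-down time scaling as Te^(3/2)/(Z² ne), which is what makes the Z-scaling trick viable: quadrupling Z² shortens the run fourfold. A sketch of the standard Spitzer ion-electron slowing-down time; the particular density, temperature, and ion mass below are illustrative, not the values used in the simulations:

```python
import math

# Physical constants (SI).
EPS0, E_CH, M_E = 8.854e-12, 1.602e-19, 9.109e-31

def spitzer_slowdown_time(Z, n_e, T_e_eV, m_i, ln_lambda=10.0):
    """Spitzer ion-electron slowing-down time, tau ~ T_e^{3/2} / (Z^2 n_e)."""
    T_e = T_e_eV * E_CH  # electron temperature in joules
    return (3.0 * (2.0 * math.pi) ** 1.5 * EPS0**2 * m_i * T_e**1.5
            / (math.sqrt(M_E) * n_e * Z**2 * E_CH**4 * ln_lambda))

m_p = 1.673e-27                                        # proton mass
tau = spitzer_slowdown_time(1, 1e19, 50.0, 3 * m_p)    # triton-like fast ion

# The two scalings the record's parameter scans verify:
assert abs(spitzer_slowdown_time(2, 1e19, 50.0, 3 * m_p) / tau - 0.25) < 1e-12
assert abs(spitzer_slowdown_time(1, 2e19, 50.0, 3 * m_p) / tau - 0.5) < 1e-12
print(f"{tau:.3e} s")
```

The magnetized, cool-SOL regime the paper targets is precisely where this unmagnetized formula may break down, which is why the PIC comparison is interesting.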

  14. Hydrological Storage Length Scales Represented by Remote Sensing Estimates of Soil Moisture and Precipitation

    Science.gov (United States)

    Akbar, Ruzbeh; Short Gianotti, Daniel; McColl, Kaighin A.; Haghighi, Erfan; Salvucci, Guido D.; Entekhabi, Dara

    2018-03-01

    The soil water content profile is often well correlated with the soil moisture state near the surface. They share mutual information such that analysis of surface-only soil moisture is, at times and in conjunction with precipitation information, reflective of deeper soil fluxes and dynamics. This study examines the characteristic length scale, or effective depth Δz, of a simple active hydrological control volume. The volume is described only by precipitation inputs and soil water dynamics evident in surface-only soil moisture observations. To proceed, first an observation-based technique is presented to estimate the soil moisture loss function based on analysis of soil moisture dry-downs and its successive negative increments. Then, the length scale Δz is obtained via an optimization process wherein the root-mean-squared (RMS) differences between surface soil moisture observations and its predictions based on water balance are minimized. The process is entirely observation-driven. The surface soil moisture estimates are obtained from the NASA Soil Moisture Active Passive (SMAP) mission and precipitation from the gauge-corrected Climate Prediction Center daily global precipitation product. The length scale Δz exhibits a clear east-west gradient across the contiguous United States (CONUS), such that large Δz depths (>200 mm) are estimated in wetter regions with larger mean precipitation. The median Δz across CONUS is 135 mm. The spatial variance of Δz is predominantly explained and influenced by precipitation characteristics. Soil properties, especially texture in the form of sand fraction, as well as the mean soil moisture state have a lesser influence on the length scale.
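The optimization described in this record (choose the depth Δz so that a water-balance prediction best matches the observed surface soil moisture series) can be illustrated with a toy linear loss function. The forcing series, loss rate, and "true" Δz = 135 mm below are synthetic, chosen only to mirror the reported CONUS median; the actual study estimates the loss function from dry-down increments rather than assuming it:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta0, precip, dz, loss_rate=5.0, dt=1.0):
    """Forward water balance: dz * d(theta)/dt = P - loss_rate * theta."""
    theta = np.empty(len(precip) + 1)
    theta[0] = theta0
    for t, p in enumerate(precip):
        theta[t + 1] = theta[t] + dt * (p - loss_rate * theta[t]) / dz
    return theta

precip = rng.gamma(0.5, 4.0, size=200)     # mm/day, illustrative forcing
obs = simulate(0.25, precip, dz=135.0)     # "observed" series, true dz

# Grid search for the dz minimizing the RMS misfit of the prediction.
grid = np.arange(50.0, 301.0, 1.0)
rms = [np.sqrt(np.mean((simulate(0.25, precip, dz) - obs) ** 2))
       for dz in grid]
best = grid[int(np.argmin(rms))]
print(best)   # recovers the 135 mm used to generate the series
```

With noisy satellite retrievals in place of a noise-free synthetic series, the misfit surface flattens and Δz becomes an effective, uncertainty-bearing depth rather than an exact recovery.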

  15. CFD Modelling of Biomass Combustion in Small-Scale Boilers. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Xue-Song Bai; Griselin, Niklas; Klason, Torbern; Nilsson, Johan [Lund Inst. of Tech. (Sweden). Dept. of Heat and Power Engineering

    2002-10-01

    This project deals with CFD modeling of combustion of wood in fixed bed boilers. A flamelet model for the interaction between turbulence and chemical reactions is developed and applied to study a small-scale boiler. The flamelet chemistry employs 43 reactive species and 174 elementary reactions. It gives detailed distributions of important species such as CO and NO{sub x} in the flow field and flue gas. Simulation of a small-scale wood-fired boiler measured at SP Boraas (50 kW) shows that the current flamelet model yields results in agreement with the available experimental data. A detailed chemical kinetic model is developed to study the bed combustion process. This model gives boundary conditions for the CFD analysis of gas phase volatile oxidation in the combustion chambers. The model combines a Functional Group (FG) submodel with a Depolymerisation, Vaporisation and Crosslinking (DVC) submodel. The FG submodel simulates how functional groups decompose and form light gas species. The DVC submodel predicts depolymerisation and vaporisation of the macromolecular network; this includes bridge-breaking and crosslinking processes, where the wood structure breaks down into fragments. The light fragments form tar and the heavy ones form metaplast. Two boilers firing wood logs/chips are studied using the FG-DVC model: one is the SP Boraas small-scale boiler (50 kW) and the other is Sydkraft Malmoe Vaerme AB's Flintraennan large-scale boiler (55 MW). The fixed bed is assumed to consist of two zones, a partial-equilibrium drying/devolatilisation zone and an equilibrium zone. Three typical biomass conversion modes are simulated: a lean fuel combustion mode, a near-stoichiometric combustion mode and a fuel-rich gasification mode. Detailed chemical species and temperatures at different modes are obtained. Physical interpretation is provided. Comparison of the computational results with experimental data shows that the model can reasonably simulate the fixed bed biomass conversion process. CFD

  16. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Y.; Abdel-Khalik, H. S. [North Carolina State University, Raleigh, NC (United States); Jessee, M. A.; Mertyurek, U. [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2013-07-01

    While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclides transmutation/depletion models representing the components of the coupled code system. (authors)

  17. Scaling up from the grassroots and the top down: The impacts of multi-level governance on community forestry in Durango, Mexico

    Directory of Open Access Journals (Sweden)

    Gustavo A García-López

    2013-08-01

    Full Text Available This paper analyzes the local-level impacts of cross-scale linkages in Mexican community forestry by evaluating the operation of four inter-community forest associations (FAs). Based on one year of fieldwork in Durango, Mexico, the paper focuses on two inter-related issues: (1) the services that each association provides to their member communities and how they impact forest management and the development of communities’ forestry enterprises, and (2) the differences in services and impacts between top-down and bottom-up FAs. The findings show that FAs, as a form of cross-scale linkage, can be crucial for the provision of services, goods and infrastructure related to the protection and enhancement of community forests, the economic development of community enterprises, and the political representation of these communities. At the same time, the study finds important differences between top-down and bottom-up FAs, while pointing to some of the disadvantages of each type of linkage.

  18. Macro-scale turbulence modelling for flows in porous media

    International Nuclear Information System (INIS)

    Pinson, F.

    2006-03-01

    This work deals with the macroscopic modeling of turbulence in porous media. It concerns heat exchangers, nuclear reactors as well as urban flows, etc. The objective of this study is to describe in a homogenized way, by means of a spatial average operator, turbulent flows in a solid matrix. In addition to this first operator, the use of a statistical average operator permits handling of the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations of the kind of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix and the turbulence modeling at a macroscopic scale (Reynolds tensor and turbulent dispersion). To this aim, we lean on the local modeling of turbulence and more precisely on the k-ε RANS models. The methodology of dispersion study, derived thanks to the volume averaging theory, is extended to turbulent flows. Its application includes the simulation, at a microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even within the turbulent regime, dispersion remains one of the dominating phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominating role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic modeling of turbulence and leads us to define the sub-filter production and the wake dissipation. A ⟨k⟩f-⟨ε⟩f-⟨εw⟩f model is derived. It is based on three balance equations for the turbulent kinetic energy, the viscous dissipation and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed.
This model is then successfully applied to the study of

  19. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Directory of Open Access Journals (Sweden)

    Sebastian McBride

    Full Text Available Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: (1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, (2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: (1) transformation of retinotopic to egocentric mappings, (2) spatial memory for the purposes of medium-term inhibition of return, (3) synchronization of 'where' and 'what' information from the two visual streams, (4) convergence of top-down and bottom-up information to a centralized point of information processing, (5) a threshold function to elicit saccade action, (6) a function to represent task relevance as a ratio of excitation and inhibition, and (7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.

  20. A new surface-process model for landscape evolution at a mountain belt scale

    Science.gov (United States)

    Willett, Sean D.; Braun, Jean; Herman, Frederic

    2010-05-01

    We present a new surface process model designed for modeling surface erosion and mass transport at an orogenic scale. Modeling surface processes at a large scale is difficult because surface geomorphic processes are frequently described at the scale of a few meters, and such resolution cannot be represented in orogen-scale models operating over hundreds of square kilometers. We circumvent this problem by implementing a hybrid numerical-analytical model. Like many previous models, the model is based on a numerical fluvial network represented by a series of nodes linked by model rivers in a descending network, with fluvial incision and sediment transport defined by laws operating on this network. However, we represent only the largest rivers in the landscape by nodes in this model. Low-order rivers and water divides between large rivers are determined from analytical solutions assuming steady-state conditions with respect to the local river channel. The analytical solution includes the same fluvial incision law as the large rivers and a channel head with a specified size and mean slope. This permits a precise representation of the position of water divides between river basins. This is a key characteristic in landscape evolution as divide migration provides a positive feedback between river incision and a consequent increase in drainage area. The analytical solution also provides an explicit criterion for river capture, which occurs once a water divide migrates to its neighboring channel. This algorithm avoids the artificial network organization that often results from meshing and remeshing algorithms in numerical models. We demonstrate the use of this model with several simple examples, including uniform uplift of a block, simultaneous uplift and shortening of a block, and a model involving strike-slip faulting. We find a strong dependence on initial condition, but also a surprisingly strong dependence on channel head height parameters. Low channel heads, as

  1. Site-scale groundwater flow modelling of Aberg and upscaling of conductivity

    International Nuclear Information System (INIS)

    Walker, Douglas; Gylling, Bjoern

    2002-04-01

    A recent performance assessment study of spent nuclear fuel disposal in Sweden, Safety Report 1997 (SR 97) included modelling of flow and transport in fractured host rocks. Hydraulic conductivity measurements in this system exhibit a strong scale dependence that needed to be addressed when determining the mean and variogram of the hydraulic conductivity for finite-difference blocks and when nesting site-scale models within regional scale models. This study applies four upscaling approaches to the groundwater flow models of Aberg, one of the hypothetical SR 97 repositories. The approaches are: 1) as in SR 97, empirically upscaling the mean conductivity via the observed scale dependence of measurements, and adjusting the covariance via numerical regularisation; 2) empirically upscaling as in SR 97, but considering fracture zones as two-dimensional features; 3) adapting the effective conductivity of stochastic continuum mechanics to upscale the mean, and geostatistical regularisation for variogram; and 4) the analytical approach of Indelman and Dagan. These four approaches are evaluated for their effects on simple measures of repository performance including the canister flux, the advective travel time from representative canister locations to the ground surface, and the F-quotient. A set of sensitivity analyses suggest that the results of the SR 97 Aberg Base Case are insensitive to minor computational changes and to the changes in the properties of minor fracture zones. The comparison of alternative approaches to upscaling indicates that, for the methods examined in this study, the greatest consistency of boundary flows between the regional and site-scale models was achieved when using the scale dependence of hydraulic conductivity observed at Aespoe for the rock domains, the hydraulic conductivities of the large-scale interference tests for the conductor domain, and a numerical regularisation based on Moye's formula for the variogram. The assumption that the
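The upscaling problem in this record, assigning finite-difference-block conductivities from point-scale measurements, can be illustrated with the simplest textbook rule: for 2-D flow in an isotropic lognormal medium the effective conductivity is the geometric mean, i.e., block-average the log-conductivities. The field below is synthetic and the block size arbitrary; the approaches compared in the study (empirical scale dependence, numerical and geostatistical regularisation, Indelman and Dagan) are considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(2)

# Point-scale log10-conductivities (lognormal K, illustrative parameters).
log_k = rng.normal(loc=-7.0, scale=1.5, size=(64, 64))

def block_upscale(log_k, n):
    """Geometric-mean upscaling of n x n cell blocks: averaging the logs
    is equivalent to taking the geometric mean of the conductivities."""
    rows, cols = log_k.shape
    blocks = log_k.reshape(rows // n, n, cols // n, n)
    return blocks.mean(axis=(1, 3))

coarse = block_upscale(log_k, 8)   # 8 x 8 cells per model block

# Upscaling preserves the mean log-conductivity but damps its variance,
# which is the scale dependence the block variogram must account for.
print(coarse.shape, coarse.std(), log_k.std())
```

The variance reduction from point support to block support is exactly what geostatistical regularisation quantifies when building the block-scale variogram.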

  2. Rotor scale model tests for power conversion unit of GT-MHR

    Energy Technology Data Exchange (ETDEWEB)

    Baxi, C.B.; Daugherty, R.; Shenoy, A. [General Atomics, 3550 General Atomics Court, CA (United States); Kodochigov, N.G.; Belov, S.E. [Experimental Design Bureau of Machine Building, N. Novgorod (Russian Federation)

    2007-07-01

    The gas-turbine modular helium reactor (GT-MHR) combines a modular high-temperature gas-cooled reactor with a closed Brayton gas-turbine cycle power conversion unit (PCU) for thermal to electric energy conversion. The PCU has a vertical orientation and is supported on electromagnetic bearings (EMB). The Rotor Scale Model (RSM) tests are intended to directly model the EMB control and rotor-dynamic characteristics of the full-scale GT-MHR turbo-machine. The objectives of the RSM tests are to: 1) confirm the EMB control system design for the GT-MHR turbo-machine over the full range of operation, 2) confirm the redundancy and on-line maintainability features that have been specified for the EMBs, 3) provide a benchmark for validation of analytical tools that will be used for independent analyses of the EMB subsystem design, and 4) provide experience with the installation, operation and maintenance of EMBs supporting multiple rotors with flexible couplings. As with the full-scale turbo-machine, the RSM will incorporate two rotors that are joined by a flexible coupling. Each of the rotors will be supported on one axial and two radial EMBs. Additional devices, similar in concept to radial EMBs, will be installed to simulate magnetic and/or mechanical forces representing those that would be seen by the exciter, generator, compressors and turbine. Overall, the length of the RSM rotor is about 1/3 that of the full-scale turbo-machine, while the diameter is approximately 1/5 scale. The design and sizing of the rotor is such that the number of critical speeds in the RSM is the same as in the full-scale turbo-machine. The EMBs will also be designed such that their response to rotor-dynamic forces is representative of the full-scale turbo-machine. (authors)

  3. A class representative model for Pure Parsimony Haplotyping under uncertain data.

    Directory of Open Access Journals (Sweden)

    Daniele Catanzaro

    Full Text Available The Pure Parsimony Haplotyping (PPH) problem is an NP-hard combinatorial optimization problem that consists of finding the minimum number of haplotypes necessary to explain a given set of genotypes. PPH has attracted increasing attention in recent years due to its importance in the analysis of fine-scale genetic data. Its application fields range from mapping complex disease genes to inferring population histories, passing through designing drugs, functional genomics and pharmacogenetics. In this article we investigate, for the first time, a recent version of PPH called the Pure Parsimony Haplotype problem under Uncertain Data (PPH-UD). This version mainly arises when the input genotypes are not accurate, i.e., when some single nucleotide polymorphisms are missing or affected by errors. We propose an exact approach to the solution of PPH-UD based on an extended version of the class representative model of Catanzaro et al. [1] for PPH, currently the state-of-the-art integer programming model for PPH. The model is efficient, accurate, compact, polynomial-sized, easy to implement, solvable with any solver for mixed integer programming, and usable in all those cases for which the parsimony criterion is well suited for haplotype estimation.

  4. Multi-scale modelling of the hydro-mechanical behaviour of argillaceous rocks

    International Nuclear Information System (INIS)

    Van den Eijnden, Bram

    2015-01-01

    Feasibility studies for deep geological radioactive waste disposal facilities have led to an increased interest in the geomechanical modelling of the host rock. In France, a potential host rock is the Callovo-Oxfordian clay-stone. The low permeability of this material is of key importance, as the principle of deep geological disposal strongly relies on the sealing capacity of the host formation. Because the permeability is coupled to the mechanical material state, the coupled hydro-mechanical behaviour of the clay-stone becomes important when mechanical alterations are induced by gallery excavation in the so-called excavation damaged zone (EDZ). In materials with microstructure such as the Callovo-Oxfordian clay-stone, the macroscopic behaviour has its origin in the interaction of its micromechanical constituents. In addition to the coupling between hydraulic and mechanical behaviour, a coupling between the micro scale (material microstructure) and the macro scale is introduced. By developing a framework of computational homogenization for hydro-mechanical coupling, a double-scale modelling approach is formulated, in which the macro-scale constitutive relations are derived from the micro scale by homogenization. An existing model for hydro-mechanical coupling based on the distinct definition of grains and intergranular pore space is adopted and modified to enable the application of first-order computational homogenization for obtaining macro-scale stress and fluid transport responses. This model is used to constitute a periodic representative elementary volume (REV) that allows the representation of the local macroscopic behaviour of the clay-stone. As a response to deformation loading, the behaviour of the REV represents the numerical equivalent of a constitutive relation at the macro scale. For the required consistent tangent operators, the framework of computational homogenization by static condensation is extended to hydro-mechanical coupling.

  5. Countercurrent Air-Water Flow in a Scale-Down Model of a Pressurizer Surge Line

    Directory of Open Access Journals (Sweden)

    Takashi Futatsugi

    2012-01-01

    Full Text Available Steam generated in a reactor core and water condensed in a pressurizer form a countercurrent flow in a surge line between a hot leg and the pressurizer during reflux cooling. Characteristics of countercurrent flow limitation (CCFL) in a 1/10-scale model of the surge line were measured using air and water at atmospheric pressure and room temperature. The experimental results show that CCFL takes place at three different locations, that is, at the upper junction, in the surge line, and at the lower junction, and its characteristics are governed by the most dominant flow limitation among the three. Effects of the inclination angle and elbows of the surge line on CCFL characteristics were also investigated experimentally. The effect of the inclination angle on CCFL depends on the flow direction: it is large for nearly horizontal flow and small for vertical flow at the upper junction. The presence of elbows increases the flow limitation in the surge line, whereas the flow limitations at the upper and lower junctions do not depend on the presence of elbows.
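CCFL data of this kind are conventionally correlated in terms of Wallis-type dimensionless superficial velocities. The sketch below is generic; the pipe diameter and flow rates are invented, not the actual 1/10-scale test conditions.

```python
import math

def wallis_j_star(j, rho_k, rho_l, rho_g, D, g=9.81):
    """Dimensionless superficial velocity of phase k:
    j*_k = j_k * sqrt(rho_k / (g * D * (rho_l - rho_g)))."""
    return j * math.sqrt(rho_k / (g * D * (rho_l - rho_g)))

rho_l, rho_g = 998.0, 1.2   # water / air at room conditions, kg/m^3
D = 0.0627                  # hypothetical inner diameter, m

jg_star = wallis_j_star(0.5, rho_g, rho_l, rho_g, D)    # air,   j_g = 0.5  m/s
jl_star = wallis_j_star(0.02, rho_l, rho_l, rho_g, D)   # water, j_l = 0.02 m/s

# A Wallis-type flooding line sqrt(j_g*) + m * sqrt(j_l*) = C then bounds
# the attainable countercurrent flows; m and C are fitted per geometry
# (upper junction, surge line and lower junction each get their own line).
print(round(jg_star, 4), round(jl_star, 4))
```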

  6. Test results of the SMES model coil. Cool-down and thermal characteristics

    International Nuclear Information System (INIS)

    Hamada, Kazuya; Kato, Takashi; Kawano, Katsumi

    1998-01-01

    A model coil of a superconducting magnetic energy storage (SMES) device, which is a forced-cooled Nb-Ti coil, has been fabricated and a performance test at cryogenic temperatures has been carried out. The SMES model coil is composed of 4 dual pancakes and its total weight is 4.5 t. The conductors are cable-in-conduit conductors cooled by supercritical helium (SHe) at 4.5 K and 0.7 MPa. SHe is supplied to the SMES model coil and the structure by a reciprocating bellows pump. The tests were performed at the International Thermonuclear Experimental Reactor (ITER) common test facility, which was constructed for testing the ITER central solenoid model coil. In the experiments, cool-down was completed within 10 days under controlled temperature differences in the SMES model coil. During cool-down and 4.5 K operation, the pressure drop characteristics of the conductor were measured and the friction factor was estimated. The pressure drop characteristics of the SMES model coil were in good agreement with those of previous cable-in-conduit conductors. During static operation without current, the heat load and refrigerator operating conditions were measured. The heat load of the SMES model coil is 7.5 W, which is within the expected value. (author)

  7. Optical analysis of down-conversion OLEDs

    Science.gov (United States)

    Krummacher, Benjamin; Klein, Markus; von Malm, Norwin; Winnacker, Albrecht

    2008-02-01

    Phosphor down-conversion of blue organic light-emitting diodes (OLEDs) is one approach to generating white light, offering easy color tuning, a simple device architecture and color stability over lifetime. In this article previous work on down-conversion devices in the field of organic solid-state lighting is briefly reviewed. Bottom-emitting down-conversion OLEDs are then studied from an optical point of view. To this end, the physical processes occurring in the down-conversion layer are translated into a model implemented in a ray-tracing simulation, and the model is validated by comparing its predictions with experimental results. For the experiments, a blue-emitting polymer OLED (PLED) panel optically coupled to a series of down-conversion layers is used. Based on results obtained from the ray-tracing simulation, some implications of the model for the performance of down-conversion OLEDs are discussed. In particular, it is analysed how the effective reflectance of the underlying blue OLED and the particle size distribution of the phosphor powder embedded in the matrix of the down-conversion layer influence the extraction efficiency.

  8. Combining bottom-up and top-down

    International Nuclear Information System (INIS)

    Boehringer, Christoph; Rutherford, Thomas F.

    2008-01-01

    We motivate the formulation of market equilibrium as a mixed complementarity problem which explicitly represents weak inequalities and complementarity between decision variables and equilibrium conditions. The complementarity format permits an energy-economy model to combine technological detail of a bottom-up energy system with a second-best characterization of the over-all economy. Our primary objective is pedagogic. We first lay out the complementarity features of economic equilibrium and demonstrate how we can integrate bottom-up activity analysis into a top-down representation of the broader economy. We then provide a stylized numerical example of an integrated model - within both static and dynamic settings. Finally, we present illustrative applications to three themes figuring prominently on the energy policy agenda of many industrialized countries: nuclear phase-out, green quotas, and environmental tax reforms
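The mixed complementarity format can be illustrated on a one-good toy market: the price and the excess supply are complementary, i.e. p ≥ 0, z(p) ≥ 0 and p·z(p) = 0. The linear supply and demand parameters below are invented for illustration, not taken from the paper.

```python
def excess_supply(p, a=2.0, b=1.0, c=10.0, d=1.0):
    """z(p) = S(p) - D(p) with linear supply S = a + b*p and demand
    D = c - d*p.  Parameter values are illustrative only."""
    return (a + b * p) - (c - d * p)

def solve_mcp(a=2.0, b=1.0, c=10.0, d=1.0):
    """Mixed complementarity: find p >= 0 with z(p) >= 0 and p * z(p) = 0.

    For this linear case the interior solution is p = (c - a) / (b + d),
    projected to 0 if it would be negative (market stuck in excess supply).
    """
    return max(0.0, (c - a) / (b + d))

p_star = solve_mcp()
residual = min(p_star, excess_supply(p_star))  # complementarity residual
print(p_star, residual)   # 4.0 0.0
```

In a full energy-economy model the same pattern holds simultaneously for every activity and market, and a dedicated MCP solver (e.g. PATH under GAMS/MPSGE, the tooling associated with this literature) replaces the closed-form projection used here.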

  9. Combining bottom-up and top-down

    Energy Technology Data Exchange (ETDEWEB)

    Boehringer, Christoph [Department of Economics, University of Oldenburg, Oldenburg (Germany); Centre for European Economic Research (ZEW), Mannheim (Germany); Rutherford, Thomas F. [Ann Arbor, Michigan (United States)

    2008-03-15

    We motivate the formulation of market equilibrium as a mixed complementarity problem which explicitly represents weak inequalities and complementarity between decision variables and equilibrium conditions. The complementarity format permits an energy-economy model to combine technological detail of a bottom-up energy system with a second-best characterization of the over-all economy. Our primary objective is pedagogic. We first lay out the complementarity features of economic equilibrium and demonstrate how we can integrate bottom-up activity analysis into a top-down representation of the broader economy. We then provide a stylized numerical example of an integrated model - within both static and dynamic settings. Finally, we present illustrative applications to three themes figuring prominently on the energy policy agenda of many industrialized countries: nuclear phase-out, green quotas, and environmental tax reforms. (author)

  10. A Hydro-Economic Approach to Representing Water Resources Impacts in Integrated Assessment Models

    Energy Technology Data Exchange (ETDEWEB)

    Kirshen, Paul H.; Strzepek, Kenneth, M.

    2004-01-14

    Grant Number DE-FG02-98ER62665 Office of Energy Research of the U.S. Department of Energy Abstract Many Integrated Assessment Models (IAMs) divide the world into a small number of highly aggregated regions. Non-OECD countries are aggregated geographically into continental and multiple-continental regions or economically by development level. Current research suggests that these large-scale aggregations cannot accurately represent potential water resources-related climate change impacts. In addition, IAMs do not explicitly model the flow regulation impacts of reservoir and ground water systems, the economics of water supply, or the demand for water in economic activities. Using the International Model for Policy Analysis of Agricultural Commodities and Trade (IMPACT) model of the International Food Policy Research Institute (IFPRI) as a case study, this research implemented a set of methodologies to provide accurate representation of water resource climate change impacts in Integrated Assessment Models. There were also detailed examinations of key issues related to aggregated modeling including: modeling water consumption versus water withdrawals; ground and surface water interactions; development of reservoir cost curves; modeling of surface areas of aggregated reservoirs for estimating evaporation losses; and evaluating the importance of spatial scale in river basin modeling. The major findings include: - Continental or national or even large-scale river basin aggregation of water supplies and demands does not accurately capture the impacts of climate change in the water and agricultural sectors in IAMs. - Fortunately, there now exist gridded approaches (0.5 x 0.5 degrees) to model streamflows in a global analysis. The gridded approach to hydrologic modeling allows flexibility in aligning basin boundaries with national boundaries.
This, combined with GIS tools, high-speed computers, and the growing availability of socio-economic gridded databases, allows assignment of

  11. Land surface temperature representativeness in a heterogeneous area through a distributed energy-water balance model and remote sensing data

    Directory of Open Access Journals (Sweden)

    C. Corbari

    2010-10-01

    Full Text Available Land surface temperature is the link between soil-vegetation-atmosphere fluxes and soil water content through the energy-water balance. This paper analyses the representativeness of land surface temperature (LST) for a distributed hydrological water balance model (FEST-EWB), using LST from the AHS (airborne hyperspectral scanner), with a spatial resolution of 2–4 m, LST from MODIS, with a spatial resolution of 1000 m, and thermal infrared radiometric ground measurements, which are compared with the representative equilibrium temperature that closes the energy balance equation in the distributed hydrological model.

    Diurnal and nocturnal images are analysed, owing to the non-stable behaviour of the thermodynamic temperature and to the non-linear effects induced by spatial heterogeneity.

    Spatial autocorrelation and scale of fluctuation of land surface temperature from FEST-EWB and AHS are analysed at different aggregation areas to better understand the scale of representativeness of land surface temperature in a hydrological process.

    The study site is the agricultural area of Barrax (Spain), a heterogeneous area with a patchwork of irrigated and non-irrigated vegetated fields and bare soil. The data set was collected during a field campaign from 10 to 15 July 2005 in the framework of the SEN2FLEX project.

  12. Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.

    Science.gov (United States)

    Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric

    2016-04-01

    SMAP sensors collect moisture information in top soil at the spatial resolution of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications on human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near real time VIC simulation is intended to provide a source of large scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.

  13. Parametric analysis of a down-scaled turbo jet engine suitable for drone and UAV propulsion

    Science.gov (United States)

    Wessley, G. Jims John; Chauhan, Swati

    2018-04-01

    This paper presents a detailed study on the need for downscaling gas turbine engines for UAV and drone propulsion, together with the downscaling procedure and a parametric analysis of a downscaled engine using the Gas Turbine Simulation Program software GSP 11. A micro gas turbine engine in the thrust range of 0.13 to 4.45 kN is needed to power UAVs and drones weighing 4.5 to 25 kg; to meet this requirement, a parametric analysis of the scaled-down Allison J33-A-35 turbojet engine is performed. The analysis shows that the thrust developed by the scaled engine and the thrust specific fuel consumption (TSFC) depend on the pressure ratio, the air mass flow rate and the Mach number. A scaling factor of 0.195, corresponding to an air mass flow rate of 7.69 kg/s, produces a thrust in the range of 4.57 to 5.6 kN while operating at a Mach number of 0.3 at altitudes of 5000 to 9000 m. The thermal and overall efficiencies of the scaled engine are found to be 67% and 75%, respectively, for a pressure ratio of 2. The outcomes of this analysis form a strong base for further analysis, design and fabrication of micro gas turbine engines to propel future UAVs and drones.
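The linear dependence of thrust on air mass flow at fixed cycle conditions (same pressure ratio, turbine inlet temperature and Mach number) can be sketched as follows; the baseline J33 figures are rough assumptions for illustration, not values from the paper.

```python
# Assumed full-scale Allison J33 figures (approximate, illustration only).
baseline_mass_flow = 39.4    # kg/s
baseline_thrust = 20.5e3     # N, static thrust

k = 0.195                    # scaling factor applied to the air mass flow

# At fixed cycle conditions the specific thrust F / m_dot is unchanged,
# so thrust scales linearly with the mass flow rate.
scaled_mass_flow = k * baseline_mass_flow
scaled_thrust = k * baseline_thrust

print(round(scaled_mass_flow, 2), "kg/s,", round(scaled_thrust / 1e3, 2), "kN")
```

Under these assumptions a mass-flow scale of 0.195 lands near the abstract's 7.69 kg/s; TSFC, by contrast, is a ratio of fuel flow to thrust and is to first order unchanged by pure flow scaling.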

  14. The reduction method of statistic scale applied to study of climatic change

    International Nuclear Information System (INIS)

    Bernal Suarez, Nestor Ricardo; Molina Lizcano, Alicia; Martinez Collantes, Jorge; Pabon Jose Daniel

    2000-01-01

    In climate change studies, global circulation models of the atmosphere (GCMAs) enable one to simulate the global climate, with the field variables being represented on grid points about 300 km apart. One particular interest concerns the simulation of possible changes in rainfall and surface air temperature due to an assumed increase of greenhouse gases. However, the models yield climatic projections on grid points that in most cases do not correspond to the sites of major interest. To achieve local estimates of the climatological variables, methods such as statistical downscaling are applied. In this article we show a case in point by applying canonical correlation analysis (CCA) to the Guajira Region in the northeast of Colombia.
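Statistical downscaling by CCA finds maximally correlated linear combinations of the coarse-grid predictors and the local predictands, then regresses one on the other. Below is a minimal numpy-only sketch on synthetic data; the array shapes, the linear link and the noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: X = GCM anomalies at 12 coarse grid points,
# Y = rainfall anomalies at 4 local stations, linearly linked (invented).
n, p, q = 240, 12, 4
X = rng.normal(size=(n, p))
Y = X @ rng.normal(size=(p, q)) + 0.05 * rng.normal(size=(n, q))

def cca_fit(X, Y, k):
    """Minimal canonical correlation analysis via whitening + SVD."""
    m = len(X)
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    def inv_sqrt(S):  # S^(-1/2) for a symmetric positive-definite S
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T
    Wx, Wy = inv_sqrt(Xc.T @ Xc / m), inv_sqrt(Yc.T @ Yc / m)
    U, s, Vt = np.linalg.svd(Wx @ (Xc.T @ Yc / m) @ Wy)
    return Wx @ U[:, :k], Wy @ Vt[:k].T, s[:k]  # weights A, B; canonical corrs

A, B, s = cca_fit(X[:180], Y[:180], k=q)

# Downscale held-out months: project X onto canonical variates, damp each
# by its canonical correlation, then map back to station space via B.
Xc_test = X[180:] - X[:180].mean(0)
Y_hat = (Xc_test @ A) * s @ np.linalg.inv(B) + Y[:180].mean(0)

r = np.corrcoef(Y_hat[:, 0], Y[180:, 0])[0, 1]
print(round(r, 3))   # high skill, by construction of the synthetic data
```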

  15. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    data from very little to very detailed information, and compare the models' abilities to represent the spatial and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity of maize cropping systems.
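A Taylor diagram condenses three statistics per simulation-observation pair, tied together by a law-of-cosines identity. The yield series below are synthetic stand-ins, not APSIM or LPJmL output.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(5.0, 1.0, size=200)            # stand-in "observed" yields
sim = 0.8 * obs + 0.6 * rng.normal(size=200)    # stand-in imperfect model

def taylor_stats(sim, obs):
    """Correlation R, standard deviations and centred RMS difference E' --
    the quantities a Taylor diagram (Taylor, 2001) displays at once."""
    R = np.corrcoef(sim, obs)[0, 1]
    s_sim, s_obs = sim.std(), obs.std()
    diff = (sim - sim.mean()) - (obs - obs.mean())
    E = np.sqrt((diff ** 2).mean())
    return R, s_sim, s_obs, E

R, s_sim, s_obs, E = taylor_stats(sim, obs)

# Law-of-cosines identity that makes the diagram's geometry possible:
# E'^2 = s_sim^2 + s_obs^2 - 2 * s_sim * s_obs * R
assert abs(E ** 2 - (s_sim ** 2 + s_obs ** 2 - 2 * s_sim * s_obs * R)) < 1e-9
print(round(R, 2), round(s_sim / s_obs, 2))
```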

  16. Cognitive Development and Down Syndrome: Age-Related Change on the Stanford-Binet Test (Fourth Edition)

    Science.gov (United States)

    Couzens, Donna; Cuskelly, Monica; Haynes, Michele

    2011-01-01

    Growth models for subtests of the Stanford-Binet Intelligence Scale, 4th edition (R. L. Thorndike, E. P. Hagen, & J. M. Sattler, 1986a, 1986b) were developed for individuals with Down syndrome. Models were based on the assessments of 208 individuals who participated in longitudinal and cross-sectional research between 1987 and 2004. Variation…

  17. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a nuclear power plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need for taking the hydrogen explosion phenomena into account in risk management. Combustion modelling in large-scale geometries is thus one of the remaining severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the nuclear power plant (NPP) environment. Therefore, the major attention in model development has to be paid to adapting existing approaches or creating new ones capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of numerical simulation are presented together with comparisons, critical discussions and conclusions. (authors)

  18. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical process in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: Enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale Offers engineers and designers a new point of view, liberating creative and inno...

  19. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    Science.gov (United States)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.

  20. Modelling hair follicle growth dynamics as an excitable medium.

    Directory of Open Access Journals (Sweden)

    Philip J Murray

    Full Text Available The hair follicle system represents a tractable model for the study of stem cell behaviour in regenerative adult epithelial tissue. However, although there are numerous spatial scales of observation (molecular, cellular, follicle and multi-follicle), it is not yet clear what mechanisms underpin the follicle growth cycle. In this study we seek to address this problem by describing how the growth dynamics of a large population of follicles can be treated as a classical excitable medium. Defining caricature interactions at the molecular scale and treating a single follicle as a functional unit, a minimal model is proposed in which the follicle growth cycle is an emergent phenomenon. Expressions are derived, in terms of parameters representing molecular regulation, for the time spent in the different functional phases of the cycle, a formalism that allows the model to be directly compared with a previous cellular automaton model and with experimental measurements made at the single-follicle scale. A multi-follicle model is constructed and numerical simulations are used to demonstrate excellent qualitative agreement with a range of experimental observations. Notably, the excitable medium equations exhibit a wider family of solutions than the previous work, and we demonstrate how parameter changes representing altered molecular regulation can explain perturbed patterns in Wnt over-expression and BMP down-regulation mouse models. Further experimental scenarios that could be used to test the fundamental premise of the model are suggested. The key conclusion from our work is that positive and negative regulatory interactions between activators and inhibitors can give rise to a range of experimentally observed phenomena at the follicle and multi-follicle spatial scales and, as such, could represent a core mechanism underlying hair follicle growth.
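A standard caricature of an excitable/oscillatory unit is the FitzHugh-Nagumo system. The parameterisation below is the textbook one, not the paper's follicle model, but it shows how a single functional unit driven past threshold cycles between high- and low-activity phases (anagen- and telogen-like).

```python
import numpy as np

def fhn_trace(I=0.5, eps=0.08, a=0.7, b=0.8, dt=0.01, steps=30000):
    """Forward-Euler integration of the FitzHugh-Nagumo equations:
    v' = v - v^3/3 - w + I,   w' = eps * (v + a - b*w)."""
    v, w = -1.2, -0.6          # start near the resting state
    vs = np.empty(steps)
    for i in range(steps):
        v += dt * (v - v ** 3 / 3 - w + I)
        w += dt * eps * (v + a - b * w)
        vs[i] = v
    return vs

vs = fhn_trace()
# With sustained drive I in the oscillatory window, the activator v makes
# repeated full excursions: a relaxation limit cycle analogous to the
# recurring growth/rest phases of the follicle cycle.
print(vs.max() > 1.0, vs.min() < -1.0)
```

Coupling many such units diffusively on a lattice turns the single-unit cycle into travelling activation waves, the hallmark of an excitable medium.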

  1. Optimal experimental design in an epidermal growth factor receptor signalling and down-regulation model.

    Science.gov (United States)

    Casey, F P; Baird, D; Feng, Q; Gutenkunst, R N; Waterfall, J J; Myers, C R; Brown, K S; Cerione, R A; Sethna, J P

    2007-05-01

    We apply the methods of optimal experimental design to a differential equation model for epidermal growth factor receptor signalling, trafficking and down-regulation. The model incorporates the role of a recently discovered protein complex made up of the E3 ubiquitin ligase Cbl, the guanine exchange factor (GEF) Cool-1 (beta-Pix) and the Rho family G protein Cdc42. The complex has been suggested to be important in disrupting receptor down-regulation. We demonstrate that the model interactions can accurately reproduce the experimental observations, that they can be used to make predictions with accompanying uncertainties, and that ideas of optimal experimental design can be applied to suggest new experiments that reduce the uncertainty on unmeasurable components of the system.
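A common computational core of optimal experimental design is to score candidate measurement schedules by the Fisher information of the model's parameter Jacobian (D-optimality: maximise log det JᵀJ). The observable and sampling times below are a toy stand-in, not the EGFR signalling model.

```python
import numpy as np

# Toy observable with two decay scales, y(t) = exp(-k1*t) + exp(-k2*t),
# and its analytic Jacobian with respect to (k1, k2). Illustrative only.
def jacobian(times, k1=1.0, k2=0.2):
    t = np.asarray(times, float)
    return np.column_stack([-t * np.exp(-k1 * t),
                            -t * np.exp(-k2 * t)])

def d_optimality(times):
    """log det of the Fisher information J^T J (unit measurement noise);
    larger means a tighter joint confidence region on the parameters."""
    J = jacobian(times)
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf

# Compare two candidate designs of three sampling times each.
early = d_optimality([0.1, 0.2, 0.3])    # clustered, nearly redundant
spread = d_optimality([0.5, 2.0, 8.0])   # probes both decay scales
print(spread > early)
```

Spreading measurements across both characteristic time scales makes the two sensitivity columns less collinear; the same scoring applies to an ODE model whose sensitivities are computed numerically.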

  2. Direct Scaling of Leaf-Resolving Biophysical Models from Leaves to Canopies

    Science.gov (United States)

    Bailey, B.; Mahaffee, W.; Hernandez Ochoa, M.

    2017-12-01

    Recent advances in the development of biophysical models and high-performance computing have enabled rapid increases in the level of detail that can be represented by simulations of plant systems. However, increasingly detailed models typically require increasingly detailed inputs, which can be a challenge to accurately specify. In this work, we explore the use of terrestrial LiDAR scanning data to accurately specify geometric inputs for high-resolution biophysical models that enables direct up-scaling of leaf-level biophysical processes. Terrestrial LiDAR scans generate "clouds" of millions of points that map out the geometric structure of the area of interest. However, points alone are often not particularly useful in generating geometric model inputs, as additional data processing techniques are required to provide necessary information regarding vegetation structure. A new method was developed that directly reconstructs as many leaves as possible that are in view of the LiDAR instrument, and uses a statistical backfilling technique to ensure that the overall leaf area and orientation distribution matches that of the actual vegetation being measured. This detailed structural data is used to provide inputs for leaf-resolving models of radiation, microclimate, evapotranspiration, and photosynthesis. Model complexity is afforded by utilizing graphics processing units (GPUs), which allows for simulations that resolve scales ranging from leaves to canopies. The model system was used to explore how heterogeneity in canopy architecture at various scales affects scaling of biophysical processes from leaves to canopies.

  3. Impairment of circulating endothelial progenitors in Down syndrome

    Directory of Open Access Journals (Sweden)

    Costa Valerio

    2010-09-01

    Background: Pathological angiogenesis represents a critical issue in the progression of many diseases. Down syndrome is postulated to be a systemic anti-angiogenesis disease model, possibly due to increased expression of anti-angiogenic regulators on chromosome 21. The aim of our study was to elucidate some features of circulating endothelial progenitor cells in the context of this syndrome. Methods: Circulating endothelial progenitors of Down syndrome affected individuals were isolated, cultured in vitro and analyzed by confocal and transmission electron microscopy. ELISA was performed to measure SDF-1α plasma levels in Down syndrome and euploid individuals. Moreover, qRT-PCR was used to quantify expression levels of the CXCL12 gene and of its receptor in progenitor cells. The functional impairment of Down progenitors was evaluated through their susceptibility to hydroperoxide-induced oxidative stress with the BODIPY assay and their greater vulnerability to infection with human pathogens. The differential expression of crucial genes in Down progenitor cells was evaluated by microarray analysis. Results: We detected a marked decrease in progenitor number in young Down individuals compared to euploid controls, an increase in cell size, and some major detrimental morphological changes. Moreover, Down syndrome patients also exhibited decreased SDF-1α plasma levels, and their progenitors had reduced expression of the SDF-1α-encoding gene and of its membrane receptor. We further demonstrated that their progenitor cells are more susceptible to hydroperoxide-induced oxidative stress and to infection with Bartonella henselae. Further, we observed that most of the differentially expressed genes belong to angiogenesis, immune response and inflammation pathways, and that infected progenitors with trisomy 21 show a more pronounced perturbation of immune response genes than infected euploid cells. Conclusions: Our data provide evidence for a reduced number and altered

  4. Modeling Malicious Domain Name Take-down Dynamics: Why eCrime Pays

    Science.gov (United States)

    2014-04-01

    of take-down measures per unit of resources devoted. It is estimable by observation, in principle. Block listing has been observed to be reasonably... (6). The lack of non-technical aspects would be most important to the model in (6), so here this modeling choice is most acutely felt. In principle... Wilkins company, 1925. [6] J. Henderson and R. Quandt, Microeconomic Theory: A Mathematical Approach. McGraw-Hill, New York, third ed., 1980. [7] T

  5. Modelling Of Monazite Ore Break-Down By Alkali Process Spectrometry

    International Nuclear Information System (INIS)

    Visetpotjanakit, Suputtra; Changkrueng, Kalaya; Pichestapong, Pipat

    2005-10-01

    A computer model has been developed for the calculation of the mass balance of monazite ore break-down by the alkali process at the Rare Earth Research and Development Center. The process includes the following units: ore digestion by concentrated NaOH, dissolution of the digested ore by HCl, uranium and thorium precipitation, and crystallization of Na3PO4, which is a by-product of this process. The model, named RRDCMBP, was written in Visual Basic. The program can be run on a personal computer and is interactive and easy to use. The user is able to choose any equipment in each unit process and input data to obtain mass balance results. The model could be helpful in process analysis for further process adjustment and development
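    The record gives no equations, but the digestion unit's mass balance can be illustrated with textbook stoichiometry: monazite idealized as CePO4 reacting completely as CePO4 + 3 NaOH → Ce(OH)3 + Na3PO4. This is an assumption for illustration; the RRDCMBP code itself is not public, and real monazite is a mixed rare-earth (and thorium-bearing) phosphate.

```python
# Approximate molar masses in kg/kmol (illustrative values).
M = {"CePO4": 235.09, "NaOH": 40.00, "Ce(OH)3": 191.14, "Na3PO4": 163.94}

def digestion_balance(ore_kg):
    """Product masses (kg) for a feed of idealized monazite, assuming full
    conversion: CePO4 + 3 NaOH -> Ce(OH)3 + Na3PO4."""
    n = ore_kg / M["CePO4"]          # kmol of CePO4 fed
    return {
        "NaOH_consumed": 3 * n * M["NaOH"],
        "Ce(OH)3": n * M["Ce(OH)3"],
        "Na3PO4": n * M["Na3PO4"],
    }

out = digestion_balance(100.0)
total_in = 100.0 + out["NaOH_consumed"]      # ore feed + reagent
total_out = out["Ce(OH)3"] + out["Na3PO4"]   # hydroxide + Na3PO4 by-product
```

    Checking that total_in equals total_out (to rounding in the molar masses) is the basic consistency test any unit of such a mass-balance program must pass.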

  6. Can we trust climate models to realistically represent severe European windstorms?

    Science.gov (United States)

    Trzeciak, Tomasz M.; Knippertz, Peter; Owen, Jennifer S. R.

    2014-05-01

    Despite the enormous advances made in climate change research, robust projections of the position and strength of the North Atlantic storm track are not yet possible. In particular with respect to damaging windstorms, this uncertainty bears enormous risks for European societies and the (re)insurance industry. Previous studies have addressed the problem of climate model uncertainty through statistical comparisons of simulations of the current climate with (re-)analysis data, and found large disagreement between different climate models, between different ensemble members of the same model, and with observed climatologies of intense cyclones. One weakness of such statistical evaluations lies in the difficulty of separating influences of the climate model's basic state from the influence of fast processes on the development of the most intense storms. Compensating effects between the two might conceal errors and suggest higher reliability than there really is. A possible way to separate influences of fast and slow processes in climate projections is a "seamless" approach: hindcasting historical, severe storms with climate models started from predefined initial conditions and run in numerical weather prediction mode on time scales of several days. Such a cost-effective case-study approach, which draws from and expands on concepts from the Transpose-AMIP initiative, has recently been undertaken in the SEAMSEW project at the University of Leeds, funded by the AXA Research Fund. Key results from this work, focusing on 20 historical storms and using different lead times and horizontal and vertical resolutions, include: (a) Tracks are represented reasonably well by most hindcasts. (b) Sensitivity to vertical resolution is low. (c) There is a systematic underprediction of cyclone depth at a coarse resolution of T63, but surprisingly no systematic bias is found for higher-resolution runs using T127, showing that climate models are in fact able to represent the

  7. A Testbed for Model Development

    Science.gov (United States)

    Berry, J. A.; Van der Tol, C.; Kornfeld, A.

    2014-12-01

    Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to "connect" with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar-induced chlorophyll fluorescence, OCS exchange and stomatal behavior at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.

  8. Characterization of the Scale Model Acoustic Test Overpressure Environment using Computational Fluid Dynamics

    Science.gov (United States)

    Nielsen, Tanner; West, Jeff

    2015-01-01

    The Scale Model Acoustic Test (SMAT) is a 5% scale test of the Space Launch System (SLS), which is currently being designed at Marshall Space Flight Center (MSFC). The purpose of this test is to characterize and understand a variety of acoustic phenomena that occur during the early portions of liftoff, one being the overpressure environment that develops shortly after booster ignition. The pressure waves that propagate from the mobile launcher (ML) exhaust hole are defined as the ignition overpressure (IOP), while the portion of the pressure waves that exit the duct or trench are the duct overpressure (DOP). Distinguishing the IOP and DOP in scale model test data has been difficult in past experience and in early SMAT results, due to the effects of scaling the geometry. The speed of sound of the air and combustion gas constituents is not scaled, and therefore the SMAT pressure waves propagate at approximately the same speed as at full scale. However, the SMAT geometry is twenty times smaller, allowing the pressure waves to move down the exhaust hole, through the trench and duct, and impact the vehicle model much faster than at full scale. The DOP waves impact portions of the vehicle at the same time as the IOP waves, making it difficult to distinguish the different waves and fully understand the data. To better understand the SMAT data, a computational fluid dynamics (CFD) analysis was performed with a fictitious geometry that isolates the IOP and DOP. The upper and lower portions of the domain were segregated to accomplish the isolation in such a way that the flow physics were not significantly altered. The Loci/CHEM CFD software program was used to perform this analysis.
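    The timing effect described above is simple to quantify: geometry shrinks by the 5% scale factor while the speed of sound does not, so acoustic transit times shrink twenty-fold. The path length below is an assumed illustrative value, not an actual SMAT dimension.

```python
# Acoustic transit times in a geometrically scaled model when the speed of
# sound is NOT scaled: waves in the 5% model arrive 20x sooner.
c = 340.0         # m/s, ambient speed of sound (same at both scales)
L_full = 100.0    # m, assumed full-scale path from exhaust hole to vehicle
scale = 0.05      # SMAT is a 5% scale model

t_full = L_full / c               # full-scale arrival time, s
t_model = (L_full * scale) / c    # model-scale arrival time, s
speedup = t_full / t_model        # -> 20.0
```

    Compressing every propagation path twenty-fold in time is what pushes the IOP and DOP arrivals together in the test data.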

  9. Models with Men and Women: Representing Gender in Dynamic Modeling of Social Systems.

    Science.gov (United States)

    Palmer, Erika; Wilson, Benedicte

    2018-04-01

    Dynamic engineering models have yet to be evaluated in the context of feminist engineering ethics. Decision-making concerning gender in dynamic modeling design is a gender and ethical issue that is important to address regardless of the system in which the dynamic modeling is applied. There are many dynamic modeling tools that operationally include the female population; however, there is an important distinction between females and women: it is the difference between biological sex and the social construct of gender, which is fluid and changes over time and geography. The ethical oversight of failing to represent, or misrepresenting, gender in model design when it is relevant to the model purpose can have implications for model validity and policy model development. This paper highlights this gender issue in the context of feminist engineering ethics using a dynamic population model. Women are often represented in this type of model only in their biological capacity, while lacking their gender identity. This illustrative example also highlights how language, including the naming of variables and communication with decision-makers, plays a role in this gender issue.

  10. Device Scale Modeling of Solvent Absorption using MFIX-TFM

    Energy Technology Data Exchange (ETDEWEB)

    Carney, Janine E. [National Energy Technology Lab. (NETL), Albany, OR (United States); Finn, Justin R. [National Energy Technology Lab. (NETL), Albany, OR (United States); Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States)

    2016-10-01

    Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane), and fossil fuels account for a large majority of global CO2 emissions. That said, fossil fuels will continue to play a significant role in the generation of power for the foreseeable future. The extent to which CO2 is emitted needs to be reduced; carbon capture and sequestration are therefore necessary actions to tackle climate change. Different approaches exist for CO2 capture, including post-combustion and pre-combustion technologies, oxy-fuel combustion and chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology. To apply CO2 technologies at commercial scale, the availability, maturity and potential for scalability of the technology need to be considered. Solvent absorption is a proven technology, but not at the scale needed by a typical power plant. The scale-up, scale-down and design of laboratory and commercial packed bed reactors depend heavily on specific knowledge of two-phase pressure drop, liquid holdup, wetting efficiency and mass transfer efficiency as a function of operating conditions. Simple scaling rules often fail to provide a proper design. Conventional reactor design approaches generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity within these approaches, none of these models resolve the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally, design would be aided by the development of predictive models based on a truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first

  11. Model-Based, Closed-Loop Control of PZT Creep for Cavity Ring-Down Spectroscopy.

    Science.gov (United States)

    McCartt, A D; Ognibene, T J; Bench, G; Turteltaub, K W

    2014-09-01

    Cavity ring-down spectrometers typically employ a PZT stack to modulate the cavity transmission spectrum. While PZTs ease instrument complexity and aid measurement sensitivity, PZT hysteresis hinders the implementation of cavity-length-stabilized, data-acquisition routines. Once the cavity length is stabilized, the cavity's free spectral range imparts extreme linearity and precision to the measured spectrum's wavelength axis. Methods such as frequency-stabilized cavity ring-down spectroscopy have successfully mitigated PZT hysteresis, but their complexity limits commercial applications. Described herein is a single-laser, model-based, closed-loop method for cavity length control.

  12. Large eddy simulation of new subgrid scale model for three-dimensional bundle flows

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    2004-01-01

    Fluid-flow-induced vibrations within heat exchangers are of great concern, having led to efficiency losses and power plant shutdowns through tube fretting-wear or fatigue failures. Historically, scaling-law and measurement-accuracy problems were encountered in experimental analysis, at considerable effort and expense. However, supercomputers and accurate numerical methods have provided reliable results and a substantial decrease in cost. In this investigation, Large Eddy Simulation has been successfully used to simulate turbulent flow by numerical solution of the incompressible, isothermal, single-phase Navier-Stokes equations. The eddy viscosity model and a new subgrid scale model have been utilized to model the smaller eddies in the flow domain. A triangular array flow field was considered; numerical simulations were performed in two- and three-dimensional fields and compared to experimental findings. Results show good agreement of the numerical findings with the experimental ones, and solutions obtained with the new subgrid scale model represent better energy dissipation for the smaller eddies. (author)
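    The record does not give the form of either subgrid-scale model, but the baseline eddy-viscosity closure in LES is typically the classical Smagorinsky model, nu_t = (Cs·Δ)²|S|. The sketch below computes it on a periodic 2D grid; the grid, velocity field and Cs value are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*dx)**2 * |S| on a uniform,
    periodic 2D grid, with central-difference velocity gradients."""
    ddx = lambda f, ax: (np.roll(f, -1, ax) - np.roll(f, 1, ax)) / (2 * dx)
    s11 = ddx(u, 1)                      # du/dx  (axis 1 is x)
    s22 = ddx(v, 0)                      # dv/dy  (axis 0 is y)
    s12 = 0.5 * (ddx(u, 0) + ddx(v, 1))  # symmetric shear component
    strain = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))  # |S|
    return (cs * dx) ** 2 * strain

# Periodic shear layer u = sin(y), v = 0 on a 32x32 grid.
n = 32
dx = 2 * np.pi / n
y = np.arange(n) * dx
u = np.sin(y)[:, None] * np.ones((1, n))
v = np.zeros((n, n))
nu_t = smagorinsky_nu_t(u, v, dx)   # largest where |cos(y)| is largest
```

    The (cs*dx)**2 factor is why the modeled dissipation acts only on eddies near the grid scale, which is the behavior the new subgrid model in the abstract aims to improve.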

  13. Application of physical scaling towards downscaling climate model precipitation data

    Science.gov (United States)

    Gaur, Abhishek; Simonovic, Slobodan P.

    2018-04-01

    The physical scaling (SP) method downscales climate model data to local or regional scales, taking into consideration physical characteristics of the area under analysis. In this study, multiple SP-based models are tested for their effectiveness in downscaling North American Regional Reanalysis (NARR) daily precipitation data. Model performance is compared with two state-of-the-art downscaling methods: the statistical downscaling model (SDSM) and generalized linear modeling (GLM). The downscaled precipitation is evaluated with reference to recorded precipitation at 57 gauging stations located within the study region. The spatial and temporal robustness of the downscaling methods is evaluated using seven precipitation-based indices. Results indicate that the SP-based models perform best in downscaling precipitation, followed by GLM and then the SDSM model. The best-performing models are thereafter used to downscale future precipitation projections from three global circulation models (GCMs) under two emission scenarios, representative concentration pathway (RCP) 2.6 and RCP 8.5, over the twenty-first century. The downscaled future precipitation projections indicate an increase in mean and maximum precipitation intensity as well as a decrease in the total number of dry days. Further, an increase in the frequency of short (1-day), moderately long (2-4 day), and long (more than 5-day) precipitation events is projected.

  14. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    Recent research on modeling and control of a large nuclear reactor, presenting a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady-state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear and of complex structure, not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state-space form, thus overcoming these difficulties. It further brings in innovative methods of controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.

  15. Synopsis Session III and IV 'Water and ion mobility, up-scaling and implementation in model approaches'

    International Nuclear Information System (INIS)

    2013-01-01

    The contributions of Session III 'Water and ion mobility' and Session IV 'Up-scaling and implementation in model approaches' were merged for the proceedings volume. The range of scales of interest starts at the molecular scale (1-3 Angstrom), extends to the crystal scale (3 Angstrom-2 nm), then to the particle scale of 2-200 nm, and up to the particle/macro-aggregate scale of 0.2-1500 μm. Methods available to study the particle scale with respect to pore structure and connectivity, which determine water mobility, are N2 adsorption and Hg intrusion under dry conditions, whereas in the hydrated state methods like X-ray tomography and X-ray and neutron scattering are available. Going down in size, molecular modeling, X-ray and neutron diffraction modeling and water adsorption gravimetry are available, inter alia. There are resolution limits to the methods presented in Session II (e.g. BIB-SEM) for pore characterization: the clay matrix can be characterized only for limited clay induration, and pore throats lie at the limit of resolution. These pore throats, however, are very important for the macroscopic phenomena observed. One methodological approach to bridge the gap between the molecular/crystal scale and the particle/macro-aggregate scale (FIB-SEM) is to use complementary techniques such as cryo-NMR, N2 and water ad-/desorption and TEM

  16. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale... Hill-Mandel's energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain conditions. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic... to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.

  17. Scaling of two-phase flow transients using reduced pressure system and simulant fluid

    International Nuclear Information System (INIS)

    Kocamustafaogullari, G.; Ishii, M.

    1987-01-01

    Scaling criteria for a natural circulation loop under single-phase flow conditions are derived. Based on these criteria, practical applications for designing a scaled-down model are considered. Particular emphasis is placed on scaling a test model at reduced pressure levels compared to the prototype and on fluid-to-fluid scaling. The large number of similarity groups which are to be matched between model and prototype makes the design of a scale model a challenging task. The present study demonstrates a new approach to this classical problem using two-phase flow scaling parameters. It indicates that real-time scaling is not a practical solution and that a scaled-down model should have an accelerated (shortened) time scale. An important result is the proposed new scaling methodology for simulating pressure transients. It is obtained by considering the changes of the fluid property groups which appear within the two-phase similarity parameters and the single-phase to two-phase flow transition parameters. Sample calculations are performed for modeling two-phase flow transients of a high-pressure water system by a low-pressure water system or a Freon system. It is shown that modeling is possible in both cases for simulating pressure transients. However, simulation of phase change transitions is not possible with a reduced-pressure water system without distortion in either power or time. (orig.)
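    The "accelerated (shortened) time scale" can be made concrete with the standard length-ratio result for natural-circulation loops: under Froude-type similitude, velocity scales as the square root of the length ratio, and so does time. This is a generic illustration under that stated assumption; the paper derives its own two-phase similarity groups.

```python
import math

def froude_time_ratio(l_R):
    """Time-scale ratio t_model / t_prototype for a loop whose length is
    scaled by l_R, assuming Froude-number similarity (velocity ~ sqrt(l_R))."""
    return math.sqrt(l_R)

# A quarter-height model runs on a clock twice as fast as the prototype:
t_ratio = froude_time_ratio(0.25)   # -> 0.5
```

    Experimental data from such a model must therefore be stretched back in time by 1/sqrt(l_R) before comparison with prototype transients.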

  18. Measurement and Comparison of Variance in the Performance of Algerian Universities using models of Returns to Scale Approach

    Directory of Open Access Journals (Sweden)

    Imane Bebba

    2017-08-01

    This study aimed to measure and compare the performance of forty-seven Algerian universities using models of the returns-to-scale approach, which is based primarily on the Data Envelopment Analysis (DEA) method. In order to achieve the objective of the study, a set of variables was chosen to represent the dimension of teaching. There were three input variables: the total number of students at the undergraduate level, the number of students at the postgraduate level, and the number of permanent professors. The output variable was the total number of students holding degrees at the two levels. Four basic models of the data envelopment analysis method were applied: input-oriented and output-oriented constant returns to scale, and input-oriented and output-oriented variable returns to scale. After the analysis of the data, results revealed that eight universities achieved full efficiency under constant returns to scale in both input and output orientations. Seventeen universities achieved full efficiency under the input-oriented variable-returns-to-scale model, and sixteen under the output-oriented variable-returns-to-scale model. Therefore, during performance measurement, the size of the university, competition, financial and infrastructure constraints, and the process of resource allocation within the university should be taken into consideration. Also, multiple input and output variables reflecting the dimensions of teaching, research, and community service should be included when measuring and assessing the performance of Algerian universities, rather than using two variables which do not reflect the actual performance of these universities. Keywords: Performance of Algerian universities, Data envelopment analysis method, Constant returns to scale, Variable returns to scale, Input-orientation, Output-orientation.
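    The input-oriented constant-returns (CCR) model named above is a small linear program solved once per university. The sketch below solves it with SciPy; the three-university data set is invented for illustration and is not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR (constant returns to scale) efficiency of DMU j0.

    X: (m, n) input matrix, Y: (s, n) output matrix for n DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                 # minimize theta
    A_in = np.hstack([-X[:, [j0]], X])         # X @ lam - theta*x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # -Y @ lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                  bounds=[(0, None)] * (1 + n))
    return res.fun

# Toy data: 3 universities, 1 input (permanent professors, hundreds),
# 1 output (graduates, thousands).
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[2.0, 2.0, 2.0]])
effs = [ccr_efficiency(X, Y, j) for j in range(3)]   # -> [1.0, 0.5, 0.25]
```

    A score of 1 marks the efficient frontier; a score of 0.5 means the same output could in principle be produced with half the input.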

  19. Pesticide fate on catchment scale: conceptual modelling of stream CSIA data

    Science.gov (United States)

    Lutz, Stefanie R.; van der Velde, Ype; Elsayed, Omniea F.; Imfeld, Gwenaël; Lefrancq, Marie; Payraudeau, Sylvain; van Breukelen, Boris M.

    2017-10-01

    Compound-specific stable isotope analysis (CSIA) has proven beneficial in the characterization of contaminant degradation in groundwater, but it has never been used to assess pesticide transformation at the catchment scale. This study presents concentration and carbon CSIA data of the herbicides S-metolachlor and acetochlor from three locations (plot, drain, and catchment outlets) in a 47 ha agricultural catchment (Bas-Rhin, France). Herbicide concentrations at the catchment outlet were highest (62 µg L-1) in response to an intense rainfall event following herbicide application. Increasing δ13C values of S-metolachlor and acetochlor, by more than 2 ‰ during the study period, indicated herbicide degradation. To assist the interpretation of these data, discharge, concentrations, and δ13C values of S-metolachlor were modelled with a conceptual mathematical model using the transport formulation by travel-time distributions. Testing of different model setups supported the assumption that degradation half-lives (DT50) increase with increasing soil depth, which can be straightforwardly implemented in conceptual models using travel-time distributions. Moreover, model calibration yielded an estimate of a field-integrated isotopic enrichment factor, as opposed to laboratory-based assessments of enrichment factors in closed systems. Thirdly, the Rayleigh equation commonly applied in groundwater studies was tested with our model for its potential to quantify degradation at the catchment scale. It provided conservative estimates of the extent of degradation as it occurred in stream samples. However, since the simulated degradation within the entire catchment largely exceeded these estimates, they were not representative of overall degradation at the catchment scale. The conceptual modelling approach thus enabled us to upscale sample-based CSIA information on degradation to the catchment scale. Overall, this study demonstrates the benefit of combining monitoring and conceptual modelling of concentration
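    The Rayleigh equation referred to above converts a measured δ13C shift into an extent of degradation B = 1 − f via B = 1 − ((δt + 1000)/(δ0 + 1000))^(1000/ε), given an isotopic enrichment factor ε. The numbers below are illustrative, not the study's values.

```python
def extent_of_degradation(delta0, delta_t, eps):
    """Rayleigh-based extent of degradation B = 1 - f.

    delta0, delta_t: initial and measured d13C (permil);
    eps: isotopic enrichment factor (permil, negative for normal effects).
    """
    f = ((delta_t + 1000.0) / (delta0 + 1000.0)) ** (1000.0 / eps)
    return 1.0 - f

# A +2 permil shift with eps = -2 permil implies roughly 64% degradation.
B = extent_of_degradation(-32.0, -30.0, -2.0)
```

    Because a stream sample mixes water of many travel times, such a point estimate is conservative at the catchment scale, which is the limitation the conceptual model quantifies.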

  20. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
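    The distribution-function idea can be sketched with a closed-form special case: if within-day intensity is assumed exponentially distributed with mean mu (a convenient stand-in for the paper's empirical cdf), the expected infiltration-excess fraction of a day's rain above an infiltration capacity Ic is exp(-Ic/mu). All parameter values below are illustrative assumptions.

```python
import math

def daily_runoff(rain_mm, mean_intensity, infil_capacity):
    """Infiltration-excess runoff from a daily rainfall total, assuming
    within-day intensities (mm/h) are exponentially distributed with the
    given mean, so that E[(I - Ic)+] / E[I] = exp(-Ic/mean_intensity)."""
    return rain_mm * math.exp(-infil_capacity / mean_intensity)

# Same 20 mm day: a more permeable soil produces far less runoff.
q_low = daily_runoff(20.0, mean_intensity=5.0, infil_capacity=2.0)    # ~13.4 mm
q_high = daily_runoff(20.0, mean_intensity=5.0, infil_capacity=10.0)  # ~2.7 mm
```

    A daily model that ignored the intensity distribution (treating 20 mm as uniform over 24 h, i.e. about 0.8 mm/h) would predict zero infiltration excess here; that is the short-time-scale nonlinearity the distribution-function approach recovers.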

  1. Using Scaling to Understand, Model and Predict Global Scale Anthropogenic and Natural Climate Change

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.

    2014-12-01

    The atmosphere is variable over twenty orders of magnitude in time (≈10^-3 to 10^17 s) and almost all of the variance is in the spectral "background", which we show can be divided into five scaling regimes: weather, macroweather, climate, macroclimate and megaclimate. We illustrate this with instrumental and paleo data. Based on the signs of the fluctuation exponent H, we argue that while the weather is "what you get" (H>0: fluctuations increasing with scale), it is macroweather (H<0: fluctuations decreasing with scale) - not climate - "that you expect". The conventional framework that treats the background as close to white noise and focuses on quasi-periodic variability assumes a spectrum that is in error by a factor of a quadrillion (≈10^15). Using this scaling framework, we can quantify the natural variability, distinguish it from anthropogenic variability, test various statistical hypotheses and make stochastic climate forecasts. For example, we estimate that the probability that the warming is simply a giant century-long natural fluctuation is less than 1%, and most likely less than 0.1%, and estimate return periods for natural warming events of different strengths and durations, including the slowdown ("pause") in the warming since 1998. The return period for the pause was found to be 20-50 years, i.e. not very unusual; however, it immediately follows a 6 year "pre-pause" warming event of almost the same magnitude with a similar return period (30-40 years). To improve on these unconditional estimates, we can use scaling models to exploit the long-range memory of the climate process to make accurate stochastic forecasts of the climate, including the pause. We illustrate stochastic forecasts on monthly and annual scale series of global and northern hemisphere surface temperatures. We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow: for example, using hindcasts we find that at 10 year forecast horizons we can still explain ≈ 15% of the
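    The fluctuation exponent H can be estimated with Haar fluctuations: average the absolute difference between the means of the two halves of a window of length Δt, and fit |ΔT| ∝ Δt^H. The sketch below recovers H ≈ 0.5 for a synthetic random walk; it is a generic illustration, not the temperature analysis of the abstract.

```python
import numpy as np

def haar_fluctuation(x, dt):
    """Mean absolute Haar fluctuation of series x at (even) window size dt."""
    h = dt // 2
    n = (len(x) // dt) * dt
    w = x[:n].reshape(-1, dt)
    return np.abs(w[:, h:].mean(axis=1) - w[:, :h].mean(axis=1)).mean()

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(2**16))   # Brownian motion: H = 0.5
dts = np.array([8, 16, 32, 64, 128, 256])
S = np.array([haar_fluctuation(walk, int(d)) for d in dts])
H = np.polyfit(np.log(dts), np.log(S), 1)[0]   # fitted slope, close to 0.5
```

    H > 0 (weather-like) means fluctuations grow with averaging scale, while H < 0 (macroweather) means longer averages cancel variability, the "what you expect" regime.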

  2. Induced pluripotent stem cells as a cellular model for studying Down Syndrome

    Directory of Open Access Journals (Sweden)

    Brigida AL

    2016-11-01

    Down Syndrome (DS), or Trisomy 21 Syndrome, is one of the most common genetic diseases. It is a chromosomal abnormality caused by a duplication of chromosome 21. DS patients show the presence of a third copy (or a partial third copy) of chromosome 21 (trisomy), as a result of meiotic errors. These patients suffer from many health problems, such as intellectual disability, congenital heart disease, duodenal stenosis, Alzheimer's disease, leukemia, immune system deficiencies, muscle hypotonia and motor disorders. About one in 1000 babies born each year is affected by DS. Alterations in the dosage of genes located on chromosome 21 (also called HSA21) are responsible for the DS phenotype. However, the molecular pathogenic mechanisms triggering DS are still not understood; the newest evidence suggests the involvement of epigenetic mechanisms. For obvious ethical reasons, studies performed on DS patients, as well as on human trisomic tissues, are limited. Some authors have proposed mouse models of this syndrome; however, not all the features of the syndrome are represented. Stem cells are considered the future of molecular and regenerative medicine. Several types of stem cells could provide a valid approach to offer a potential treatment for some untreatable human diseases. Stem cells also represent a valid system to develop new cell-based drugs and/or a model to study molecular disease pathways. Among stem cell types, patient-derived induced pluripotent stem (iPS) cells offer some advantages for cell and tissue replacement, engineering and studying: self-renewal capacity, pluripotency and ease of accessibility to donor tissues. These cells can be reprogrammed into completely different cellular types. They are derived from adult somatic cells via reprogramming with ectopic expression of four transcription factors (Oct3/4, Sox2, c-Myc and Klf4; or Oct3/4, Sox2, Nanog and Lin28). By reprogramming cells from DS patients, it is possible to obtain new tissue with

  3. Bridging the scales in a eulerian air quality model to assess megacity export of pollution

    Science.gov (United States)

    Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.

    2013-08-01

In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large- and small-scale models. However, those nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting or model nudging, but these approaches remain relatively costly. We present here the development and results of a simple alternative multi-scale approach in the Eulerian CTM CHIMERE, making use of a horizontally stretched grid. This method, called "stretching" or "zooming", consists of introducing local zooms in a single chemistry-transport simulation. It allows bridging online the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the BeNeLux city cluster, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for studying megacities within their continental environment.
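The stretched-grid idea can be illustrated with a short sketch. This is not CHIMERE code; the spacing function, zoom-window bounds and resolutions below are hypothetical, chosen only to show a single continuous grid whose cells taper from a ~50 km continental spacing down to a finer spacing over a zoom region:

```python
# Illustrative 1-D stretched ("zoomed") grid: fine cells inside a zoom window,
# coarse cells at the domain edges, with a linear taper in between.

def stretched_spacings(n_cells, coarse_km=50.0, fine_km=5.0,
                       zoom_start=0.4, zoom_end=0.6):
    """Return cell widths (km). Fractional positions inside
    [zoom_start, zoom_end] get the fine spacing; widths taper
    linearly up to the coarse spacing away from the window."""
    widths = []
    for i in range(n_cells):
        x = i / (n_cells - 1)                      # fractional position
        d = max(zoom_start - x, x - zoom_end, 0.0)  # distance to zoom window
        t = min(d / zoom_start, 1.0)                # 0 inside zoom, 1 far away
        widths.append(fine_km + (coarse_km - fine_km) * t)
    return widths

w = stretched_spacings(101)
```

With these toy parameters, the same single grid carries both the refined urban area and the coarse continental surroundings, which is what lets the local scale feed back online on the large scale.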

  4. German Beck Scale for Suicide Ideation (BSS): psychometric properties from a representative population survey.

    Science.gov (United States)

    Kliem, Sören; Lohmann, Anna; Mößle, Thomas; Brähler, Elmar

    2017-12-04

Suicidal ideation has been identified as one of the major predictors of attempted or actual suicide. Routinely screening individuals for suicidal thoughts could save lives and protect many from the severe psychological consequences that follow the suicide of loved ones. The aim of this study was to validate the German version of the Beck Scale for Suicide Ideation (BSS) in a sample representative of the Federal Republic of Germany. All 2450 participants completed the first part of the scale, the BSS-Screen. A risk group of n = 112 individuals (4.6%) with active or passive suicidal ideation was identified and subsequently completed the entire BSS. Satisfactory internal reliability (α = .97 for the BSS-Screen; α = .94 for the entire BSS) and excellent model fit indices for the one-dimensional factorial structure of the BSS-Screen (CFI = .998; TLI = .995; RMSEA = .045 [95%-CI: .030-.061]) were confirmed. Measurement invariance analyses supported strict invariance across gender, age, and depression status. We found correlations with related self-report measures in the expected directions, comparable to previous studies, indicating satisfactory construct validity. Our study involved cross-sectional data; hence neither predictive validity nor retest reliability was examined. As only the risk group of n = 112 individuals completed the entire measure, confirmatory factor analyses could not be conducted for the full BSS. The German translation of the BSS is a reliable and valid instrument for assessing suicidal ideation in the general population. Using it as a screening device in general and specialized medical care could substantially advance suicide prevention.
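The internal-reliability figures quoted above (α = .97 and α = .94) are Cronbach's alpha values, which are straightforward to compute from item-level scores. A minimal sketch with made-up toy data (not the BSS data):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

def cronbach_alpha(items):
    """items: list of equal-length score lists, one list per questionnaire item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Toy data: three strongly covarying "items" -> alpha close to 1.
a = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 5], [2, 2, 3, 4]])
```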

5. DESIGNING THE PROCESS: SCALE MODELS IN THE WORK OF KAZUYO SEJIMA AND SOU FUJIMOTO.

    Directory of Open Access Journals (Sweden)

    Marta Alonso-Provencio

    2011-03-01

Full Text Available This paper attempts to clarify a design process used by Kazuyo Sejima and Sou Fujimoto based on the use of scale models. Two typical cases are studied and represented graphically in order to map the workflow. The results reveal that the mutual influence between team members and the continuous process of production and selection are closer to an "editing process" than to the conventional linear design process. The architectural quality and character of the work produced by Sejima and Fujimoto can be seen as a consequence of the process itself. The process based on the use of scale models becomes an object of design in its own right, and its advantages and disadvantages are discussed in this article. This systematic study is expected to offer practitioners new ideas on how to integrate scale models in the design process and how to enhance creativity and collaborative teamwork.

  6. New signals for vector-like down-type quark in U(1) of E_6

    Science.gov (United States)

    Das, Kasinath; Li, Tianjun; Nandi, S.; Rai, Santosh Kumar

    2018-01-01

We consider the pair production of vector-like down-type quarks in an E_6 motivated model, where each of the produced down-type vector-like quarks decays into an ordinary Standard Model light quark and a singlet scalar. Both the vector-like quark and the singlet scalar appear naturally in the E_6 model with masses at the TeV scale, given a favorable choice of symmetry breaking pattern. We focus on the non-standard decay of the vector-like quark and on the new scalar, which decays to two photons or two gluons. We analyze the signal for vector-like quark production in the 2γ + ≥2j channel and show how the scalar and vector-like quark masses can be determined at the Large Hadron Collider.
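The scalar mass in the diphoton channel is reconstructed from the invariant mass of the two photons. As an illustration of the underlying kinematics only (toy numbers, not the analysis code of the paper), for two massless photons m² = 2·E₁·E₂·(1 − cos θ):

```python
import math

def diphoton_mass(e1, e2, theta):
    """Invariant mass (GeV) of two massless photons with energies e1, e2 (GeV)
    and opening angle theta (radians): m^2 = 2*e1*e2*(1 - cos(theta))."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(theta)))

# Back-to-back photons of equal energy reconstruct m = e1 + e2.
m = diphoton_mass(250.0, 250.0, math.pi)
```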

  7. Applying Exploratory Structural Equation Modeling to Examine the Student-Teacher Relationship Scale in a Representative Greek Sample.

    Science.gov (United States)

    Tsigilis, Nikolaos; Gregoriadis, Athanasios; Grammatikopoulos, Vasilis; Zachopoulou, Evridiki

    2018-01-01

Teacher-child relationships in early childhood are a fundamental prerequisite for children's social, emotional, and academic development. The Student-Teacher Relationship Scale (STRS) is one of the most widely accepted and used instruments that evaluate the quality of teacher-child relationships. The STRS is a 28-item questionnaire that assesses three relational dimensions: Closeness, Conflict, and Dependency. The relevant literature has shown a recurring difficulty in supporting the STRS factor structure with CFA, while it is well documented with EFA. Recently, a new statistical technique, Exploratory Structural Equation Modeling (ESEM), was proposed to combine the best of CFA and EFA. The purpose of this study was (a) to examine the factor structure of the STRS in a Greek national sample, applying the ESEM framework in order to overcome the limitations of EFA and CFA, (b) to confirm previous findings about the cultural influence on teacher-child relationship patterns, and (c) to examine the invariance of the STRS across gender and age. Early educators from a representative Greek sample of 535 child care and kindergarten centers completed the STRS for 4,158 children. CFA as well as ESEM procedures were implemented. Results showed that ESEM provided a better fit to the data than CFA in both groups, supporting the argument that CFA is an overly restrictive approach in comparison to ESEM for the study of the STRS. All primary loadings were statistically significant and were associated with their respective latent factors. Contrary to the existing literature conducted in the USA and northern Europe, the association between Closeness and Dependency yielded a positive correlation. This finding is in line with previous studies conducted in Greece and confirms the existence of cultural differences in teacher-child relationships. In addition, findings supported the configural, metric, scalar, and variance/covariance equivalence of the STRS

  9. Trickle-Down Preferences: Preferential Conformity to High Status Peers in Fashion Choices

    Science.gov (United States)

    Galak, Jeff; Gray, Kurt; Elbert, Igor; Strohminger, Nina

    2016-01-01

    How much do our choices represent stable inner preferences versus social conformity? We examine conformity and consistency in sartorial choices surrounding a common life event of new norm exposure: relocation. A large-scale dataset of individual purchases of women’s shoes (16,236 transactions) across five years and 2,007 women reveals a balance of conformity and consistency, moderated by changes in location socioeconomic status. Women conform to new local norms (i.e., average heel size) when moving to relatively higher status locations, but mostly ignore new local norms when moving to relatively lower status locations. In short, at periods of transition, it is the fashion norms of the rich that trickle down to consumers. These analyses provide the first naturalistic large-scale demonstration of the tension between psychological conformity and consistency, with real decisions in a highly visible context. PMID:27144595

  12. Spatial scale separation in regional climate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Feser, F.

    2005-07-01

In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea is that global and regional climate models perform best at different spatial scales, so the regional model should not alter the global model's results at large scales. The concept of large-scale nudging, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the individual simulations compared to the standard-approach ensemble, whose realisations occasionally differ widely. For climate hindcasts this method yields results that are on average closer to observed states than the standard approach. The analysis of regional climate model simulations can also be improved by separating the results into different spatial scales. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. Separating the results into different spatial scales simplifies model validation and process studies: the search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than analysing unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter
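The scale-separation step can be sketched with a generic digital filter. This is not the filter used in the thesis; a simple moving-average low-pass stands in, showing the key property that any field splits exactly into a large-scale part plus a small-scale residual:

```python
import math

def scale_separate(field, window=5):
    """Split a 1-D field into (large, small) scale parts with a
    moving-average low-pass; the two parts sum back to the field exactly."""
    n = len(field)
    half = window // 2
    large = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        large.append(sum(field[lo:hi]) / (hi - lo))
    small = [f - l for f, l in zip(field, large)]
    return large, small

# Toy field: a long wave plus a short-wavelength perturbation.
field = [math.sin(0.1 * i) + 0.3 * math.sin(2.5 * i) for i in range(100)]
large, small = scale_separate(field)
```

Applied to model output, the two parts can then be validated separately, which is the point of searching for "added value" only at the scales the regional model was designed for.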

  13. Alterations of in vivo CA1 network activity in Dp(16)1Yey Down syndrome model mice.

    Science.gov (United States)

    Raveau, Matthieu; Polygalov, Denis; Boehringer, Roman; Amano, Kenji; Yamakawa, Kazuhiro; McHugh, Thomas J

    2018-02-27

Down syndrome, the leading genetic cause of intellectual disability, results from an extra copy of chromosome 21. Mice engineered to model this aneuploidy exhibit Down syndrome-like memory deficits in spatial and contextual tasks. While abnormal neuronal function has been identified in these models, most studies have relied on in vitro measures. Here, using in vivo recording in the Dp(16)1Yey model, we find alterations in the organization of spiking of hippocampal CA1 pyramidal neurons, including deficits in the generation of complex spikes. These changes lead to poorer spatial coding during exploration and less coordinated activity during sharp-wave ripples, events involved in memory consolidation. Further, the density of CA1 inhibitory neurons expressing neuropeptide Y, a population key for the generation of pyramidal cell bursts, was significantly increased in Dp(16)1Yey mice. Our data refine the 'over-suppression' theory of Down syndrome pathophysiology and suggest specific neuronal subtypes involved in hippocampal dysfunction in these model mice. © 2018, Raveau et al.

  14. Energy-environment policy modeling of endogenous technological change with personal vehicles. Combining top-down and bottom-up methods

    International Nuclear Information System (INIS)

    Jaccard, Mark; Murphy, Rose; Rivers, Nic

    2004-01-01

The transportation sector offers substantial potential for greenhouse gas (GHG) emission abatement, but widely divergent cost estimates complicate policy making; energy-economy policy modelers apply top-down and bottom-up cost definitions and different assumptions about future technologies and the preferences of firms and households. Our hybrid energy-economy policy model is technology-rich, like a bottom-up model, but has empirically estimated behavioral parameters for risk and technology preferences, like a top-down model. Unlike typical top-down models, however, it simulates technological change endogenously with functions that relate the financial costs of technologies to cumulative production and adjust technology preferences as market shares change. We apply it to the choice of personal vehicles to indicate, first, the effect of divergent cost definitions on cost estimates and, second, the possible response to policies that require a minimum market share for low-emission vehicles.
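Relating technology cost to cumulative production, as described above, is commonly done with a one-factor learning curve, in which unit cost falls by a fixed fraction with each doubling of cumulative production. A hedged sketch (the 20% learning rate is illustrative, not a value from the paper):

```python
import math

def learning_curve_cost(c0, cum_production, learning_rate=0.2):
    """One-factor learning curve C(N) = C0 * N^(-b), where each doubling of
    cumulative production N cuts unit cost by `learning_rate`.
    The exponent is b = -log2(1 - learning_rate)."""
    b = -math.log2(1.0 - learning_rate)
    return c0 * cum_production ** (-b)

# Doubling production from 1 to 2 units cuts cost by exactly 20%.
c1 = learning_curve_cost(100.0, 1.0)
c2 = learning_curve_cost(100.0, 2.0)
```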

  15. Understanding the Representative Gut Microbiota Dysbiosis in Metformin-Treated Type 2 Diabetes Patients Using Genome-Scale Metabolic Modeling

    Directory of Open Access Journals (Sweden)

    Dorines Rosario

    2018-06-01

Full Text Available Dysbiosis in the gut microbiome composition may be promoted by therapeutic drugs such as metformin, the world's most prescribed antidiabetic drug. Under metformin treatment, disturbances of the intestinal microbes lead to increased abundance of Escherichia spp., Akkermansia muciniphila and Subdoligranulum variabile and decreased abundance of Intestinibacter bartlettii. This alteration may potentially lead to adverse effects on host metabolism through the depletion of butyrate-producing genera. However, an increased production of butyrate and propionate was verified in metformin-treated Type 2 diabetes (T2D) patients. The mechanisms underlying these nutritional alterations and their relation to gut microbiota dysbiosis remain unclear. Here, we used Genome-scale Metabolic Models of the representative gut bacteria Escherichia spp., I. bartlettii, A. muciniphila, and S. variabile to elucidate their bacterial metabolism and its effect on the intestinal nutrient pool, including macronutrients (e.g., amino acids and short chain fatty acids), minerals and chemical elements (e.g., iron and oxygen). We applied flux balance analysis (FBA) coupled with synthetic lethality analysis to identify combinations of reactions and extracellular nutrients whose absence prevents growth. Our analyses suggest that Escherichia sp. is the bacterium least vulnerable to nutrient availability. We have also examined bacterial contributions to extracellular nutrients including short chain fatty acids, amino acids, and gases. For instance, Escherichia sp. and S. variabile may contribute to the production of important short chain fatty acids (e.g., acetate and butyrate, respectively) involved in host physiology under aerobic and anaerobic conditions. We have also identified pathway susceptibility to nutrient availability and reaction changes among the four bacteria using both FBA and flux variability analysis. For instance, lipopolysaccharide synthesis, nucleotide sugar
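Flux balance analysis itself reduces to a linear program: maximize c·v subject to steady-state mass balance S·v = 0 and flux bounds. A toy sketch (hypothetical three-reaction network, not one of the genome-scale models used in the study) small enough that the LP optimum follows by hand rather than from a solver:

```python
# Toy FBA sketch. Real genome-scale FBA solves max c.v s.t. S.v = 0 with flux
# bounds via an LP solver; this hypothetical chain network is simple enough
# to optimize directly.
#
# Reactions:  R1: substrate uptake -> A   (v1 <= uptake_limit, mmol/gDW/h)
#             R2: A -> 2 B                 (v2 unbounded)
#             R3: B -> biomass             (objective to maximize)
# Steady state (S.v = 0):  v1 = v2  and  2*v2 = v3.

def toy_fba(uptake_limit=10.0):
    v1 = uptake_limit   # objective grows with v1, so push uptake to its bound
    v2 = v1             # mass balance on metabolite A
    v3 = 2.0 * v2       # mass balance on metabolite B
    return {"v1": v1, "v2": v2, "v3": v3}

sol = toy_fba()
```

Synthetic lethality analysis, as used in the study, then asks which combinations of removed reactions or nutrients force the optimal objective flux to zero.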

  16. Site-scale groundwater flow modelling of Ceberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1999-06-01

The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10^-4 and a flow-wetted surface area of a_r = 0.1 m^2/(m^3 rock): the median travel time is 1720 years, the median canister flux is 3.27×10^-5 m/year, and the median F-ratio is 1.72×10^6 years/m. The base case and the deterministic variant suggest that the variability of the travel times within
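The reported base-case figures are mutually consistent under the standard relation F = a_r·t/ε_f for a homogeneous advective path (a relation the abstract does not state explicitly, so it is assumed here). A quick check:

```python
# Check that the reported median F-ratio follows from the reported median
# travel time, flow-wetted surface area, and flow porosity via F = a_r*t/eps_f.

def f_ratio(travel_time_yr, a_r, flow_porosity):
    """F-ratio (years/m): a_r in m^2/(m^3 rock), travel time in years."""
    return a_r * travel_time_yr / flow_porosity

F = f_ratio(1720.0, 0.1, 1.0e-4)   # reproduces the reported 1.72e6 years/m
```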

  18. Two-scale modelling for hydro-mechanical damage

    International Nuclear Information System (INIS)

    Frey, J.; Chambon, R.; Dascalu, C.

    2010-01-01

Document available in extended abstract form only. Excavation works for underground storage create a damage zone in the nearby rock and affect its hydraulic properties. This degradation, already observed in laboratory tests, can create a preferential path for fluids. The micro-fracture phenomena, which occur at a smaller scale and affect the rock permeability, must be fully understood to minimize the transfer process. Many methods can be used to take into account the microstructure of heterogeneous materials. Among them, a method has been developed recently in which, instead of using a constitutive equation obtained by phenomenological considerations or by some homogenization technique, the representative elementary volume (REV) is modelled as a structure and the links between a prescribed kinematics and the corresponding dual forces are deduced numerically. This yields the so-called Finite Element squared method (FE2). From a numerical point of view, a finite element model is used at the macroscopic level, and for each Gauss point, a computation on the microstructure gives the usual results of a constitutive law. This numerical approach is now classical for properly modelling materials such as composites, and the efficiency of such a numerical homogenization process has been shown to allow numerical modelling of deformation processes associated with various micro-structural changes. The aim of this work is to describe, through such a method, damage of the rock with a two-scale hydro-mechanical model. The rock damage at the macroscopic scale is directly linked with an analysis of the microstructure. At the macroscopic scale a two-phase problem is studied: a solid skeleton is filled by a filtrating fluid. It is necessary to enforce two balance equations and two mass conservation equations. A classical way to deal with such a problem is to work with the balance equation of the whole mixture, and the fluid mass conservation written in a weak form, the mass
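The FE² pattern described above can be caricatured in a few lines: at each macroscopic Gauss point, a micro-scale computation plays the role of the constitutive law. The "RVE" below is a deliberately fake damaged-spring model (all names and parameters hypothetical), standing in for a real micro finite-element solve:

```python
# FE^2-style sketch: the macro loop calls a micro "RVE solver" once per
# Gauss point instead of evaluating a closed-form constitutive law.

def rve_response(strain, e0=10.0e9, damage_rate=50.0):
    """Hypothetical micro-model: return (stress, damage) for a macro strain.
    Stiffness e0 (Pa) degrades with a scalar damage variable in [0, 0.9]."""
    d = min(damage_rate * abs(strain), 0.9)
    stress = (1.0 - d) * e0 * strain
    return stress, d

def macro_stresses(gauss_strains):
    """Macro level: one micro computation per Gauss point (the FE^2 pattern)."""
    return [rve_response(eps) for eps in gauss_strains]

results = macro_stresses([0.0, 0.001, 0.005])
```

In the real method each `rve_response` call is itself a finite element problem on the microstructure, here coupled with the fluid mass balance to track permeability changes.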

  19. Knowledge environments representing molecular entities for the virtual physiological human.

    Science.gov (United States)

    Hofmann-Apitius, Martin; Fluck, Juliane; Furlong, Laura; Fornes, Oriol; Kolárik, Corinna; Hanser, Susanne; Boeker, Martin; Schulz, Stefan; Sanz, Ferran; Klinger, Roman; Mevissen, Theo; Gattermayer, Tobias; Oliva, Baldo; Friedrich, Christoph M

    2008-09-13

    In essence, the virtual physiological human (VPH) is a multiscale representation of human physiology spanning from the molecular level via cellular processes and multicellular organization of tissues to complex organ function. The different scales of the VPH deal with different entities, relationships and processes, and in consequence the models used to describe and simulate biological functions vary significantly. Here, we describe methods and strategies to generate knowledge environments representing molecular entities that can be used for modelling the molecular scale of the VPH. Our strategy to generate knowledge environments representing molecular entities is based on the combination of information extraction from scientific text and the integration of information from biomolecular databases. We introduce @neuLink, a first prototype of an automatically generated, disease-specific knowledge environment combining biomolecular, chemical, genetic and medical information. Finally, we provide a perspective for the future implementation and use of knowledge environments representing molecular entities for the VPH.

  20. GCFR 1/20-scale PCRV central core cavity closure model test

    International Nuclear Information System (INIS)

    Robinson, G.C.; Dougan, J.R.

    1981-06-01

    Oak Ridge National Laboratory has been conducting structural response tests of the prestressed concrete reactor vessel (PCRV) closures for the 300-MW(e) gas-cooled fast reactor demonstration power plant. This report describes the third in a series of tests of small-scale closure plug models. The model represents a redesign of the central core cavity closure plug. The primary objective was to demonstrate structural performance and ultimate load capacity of the closure plug. Secondary objectives included obtaining data on crack development and propagation and on mode of failure of the composite structure

  1. Development of the Transport Class Model (TCM) Aircraft Simulation From a Sub-Scale Generic Transport Model (GTM) Simulation

    Science.gov (United States)

    Hueschen, Richard M.

    2011-01-01

    A six degree-of-freedom, flat-earth dynamics, non-linear, and non-proprietary aircraft simulation was developed that is representative of a generic mid-sized twin-jet transport aircraft. The simulation was developed from a non-proprietary, publicly available, subscale twin-jet transport aircraft simulation using scaling relationships and a modified aerodynamic database. The simulation has an extended aerodynamics database with aero data outside the normal transport-operating envelope (large angle-of-attack and sideslip values). The simulation has representative transport aircraft surface actuator models with variable rate-limits and generally fixed position limits. The simulation contains a generic 40,000 lb sea level thrust engine model. The engine model is a first order dynamic model with a variable time constant that changes according to simulation conditions. The simulation provides a means for interfacing a flight control system to use the simulation sensor variables and to command the surface actuators and throttle position of the engine model.
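The engine model described above is a first-order lag with a condition-dependent time constant. A hedged sketch of that structure (the switching rule and all numbers below are invented for illustration; the TCM's actual schedule is not given in the abstract):

```python
# First-order engine lag: dT/dt = (T_cmd - T) / tau, integrated with Euler
# steps, with a time constant that changes according to simulation conditions.

def engine_thrust_step(thrust, thrust_cmd, dt, tau):
    """One Euler step of the first-order thrust lag (thrust in lbf, dt, tau in s)."""
    return thrust + dt * (thrust_cmd - thrust) / tau

# Step response to a 40,000 lbf command from idle.
t, thrust, dt = 0.0, 0.0, 0.01
while t < 10.0:
    tau = 1.0 if thrust < 20000.0 else 2.0   # hypothetical condition-dependent tau
    thrust = engine_thrust_step(thrust, 40000.0, dt, tau)
    t += dt
```

A flight control system interfacing with the simulation would command `thrust_cmd` through the throttle position and read back the lagged thrust each frame.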

  2. Ground-water solute transport modeling using a three-dimensional scaled model

    International Nuclear Information System (INIS)

    Crider, S.S.

    1987-01-01

    Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport
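Scaling laws of the kind derived in the study relate model and prototype quantities through dimensionless ratios. As a generic illustration only (not the paper's actual scaling laws): for purely advective transport, t = L/v implies that the model travel time scales with the length ratio divided by the velocity ratio:

```python
# Generic similarity sketch for advective transport times in a scaled model.

def model_travel_time(prototype_time, length_ratio, velocity_ratio):
    """t = L/v, so t_model = t_prototype * (L_m/L_p) / (v_m/v_p).
    Illustrative only; real scaling must also respect dispersion and
    capillary effects, which the study handles separately."""
    return prototype_time * length_ratio / velocity_ratio

# A 1:100 length-scale model run at prototype seepage velocity compresses
# a 10-year plume migration into about 36.5 days.
t_m = model_travel_time(10.0 * 365.0, 1.0 / 100.0, 1.0)   # days
```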

  3. Transcriptional and metabolic response of recombinant Escherichia coli to spatial dissolved oxygen tension gradients simulated in a scale-down system.

    Science.gov (United States)

    Lara, Alvaro R; Leal, Lidia; Flores, Noemí; Gosset, Guillermo; Bolívar, Francisco; Ramírez, Octavio T

    2006-02-05

    Escherichia coli, expressing recombinant green fluorescent protein (GFP), was subjected to dissolved oxygen tension (DOT) oscillations in a two-compartment system for simulating gradients that can occur in large-scale bioreactors. Cells were continuously circulated between the anaerobic (0% DOT) and aerobic (10% DOT) vessels of the scale-down system to mimic an overall circulation time of 50 s, and a mean residence time in the anaerobic and aerobic compartments of 33 and 17 s, respectively. Transcription levels of mixed acid fermentation genes (ldhA, poxB, frdD, ackA, adhE, pflD, and fdhF), measured by quantitative RT-PCR, increased from 1.5- to over 6-fold under oscillatory DOT compared to aerobic cultures (constant 10% DOT). In addition, the transcription level of fumB increased whereas it decreased for sucA and sucB, suggesting that the tricarboxylic acid cycle was functioning as two open branches. Gene transcription levels revealed that cytochrome bd, which has a higher affinity for oxygen but lower energy efficiency, was preferred over cytochrome bo3 in oscillatory DOT cultures. Post-transcriptional processing limited heterologous protein production in the scale-down system, as inferred from similar gfp transcription but 19% lower GFP concentration compared to aerobic cultures. Simulated DOT gradients also affected the transcription of genes of the glyoxylate shunt (aceA), of global regulators of aerobic and anaerobic metabolism (fnr, arcA, and arcB), and of other relevant genes (luxS, sodA, fumA, and sdhB). Transcriptional changes explained the observed alterations in overall stoichiometric and kinetic parameters, and in the production of ethanol and organic acids. Differences in transcription levels between the aerobic and anaerobic compartments were also observed, indicating that E. coli can respond very quickly to intermittent DOT conditions. The transcriptional responses of E. coli to DOT gradients reported here are useful for establishing rational scale-up criteria and

  4. A Statistical and Spectral Model for Representing Noisy Sounds with Short-Time Sinusoids

    Directory of Open Access Journals (Sweden)

    Myriam Desainte-Catherine

    2005-07-01

    Full Text Available We propose an original model for noise analysis, transformation, and synthesis: the CNSS model. Noisy sounds are represented with short-time sinusoids whose frequencies and phases are random variables. This spectral and statistical model represents information about the spectral density of frequencies. This perceptually relevant property is modeled by three mathematical parameters that define the distribution of the frequencies. This model also represents the spectral envelope. The mathematical parameters are defined and the analysis algorithms to extract these parameters from sounds are introduced. Then algorithms for generating sounds from the parameters of the model are presented. Applications of this model include tools for composers, psychoacoustic experiments, and pedagogy.
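As a rough illustration of the CNSS idea of representing noise with short-time sinusoids, the sketch below draws partial frequencies and phases at random and sums them into one frame. The model's actual three-parameter frequency distribution and spectral-envelope handling are not reproduced here, so the uniform distribution and all parameter values are assumptions:

```python
import math
import random

def synth_noise_frame(n_samples, n_partials, sr, f_lo, f_hi, seed=0):
    """Synthesize one frame of noise as a sum of short-time sinusoids.

    Frequencies are drawn uniformly in [f_lo, f_hi] and phases uniformly
    in [0, 2*pi) -- a stand-in for the CNSS frequency distribution.
    """
    rng = random.Random(seed)
    partials = [(rng.uniform(f_lo, f_hi), rng.uniform(0.0, 2 * math.pi))
                for _ in range(n_partials)]
    amp = 1.0 / n_partials  # keeps the frame bounded in [-1, 1]
    return [sum(amp * math.sin(2 * math.pi * f * t / sr + ph)
                for f, ph in partials)
            for t in range(n_samples)]

# One 512-sample frame of band-limited noise at 44.1 kHz
frame = synth_noise_frame(512, 50, 44100.0, 200.0, 4000.0)
```

Changing the frequency distribution (its bounds, or its shape) changes the perceived spectral density of the noise, which is the perceptual property the CNSS parameters control.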

  5. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions

  6. A new approach for modeling and analysis of molten salt reactors using SCALE

    Energy Technology Data Exchange (ETDEWEB)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831-6172 (United States)

    2013-07-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)

  7. A new approach for modeling and analysis of molten salt reactors using SCALE

    International Nuclear Information System (INIS)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C.

    2013-01-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)

  8. Scaling Analysis of the Single-Phase Natural Circulation: the Hydraulic Similarity

    International Nuclear Information System (INIS)

    Yu, Xin-Guo; Choi, Ki-Yong

    2015-01-01

    These passive safety systems all rely on natural circulation to cool down the reactor cores during an accident. Thus, a robust and accurate scaling methodology must be developed and employed both to assist in the design of a scaled-down test facility and to guide the tests so that they mimic the natural circulation flow of the prototype. A natural circulation system generally consists of a heat source, connecting pipes, and several heat sinks. Although many appealing scaling methodologies have been proposed during the last several decades, few works have been dedicated to systematically analyzing and exactly preserving the hydraulic similarity. In the present study, hydraulic similarity analyses are performed at both the system and the local level. By this means, the scaling criteria for exact hydraulic similarity in a full-pressure model have been sought. In other words, not only the system-level but also the local-level hydraulic similarities are pursued. As the hydraulic characteristics of a fluid system are governed by the momentum equation, the scaling analysis starts there. A dimensionless integral loop momentum equation is derived, from which two dimensionless numbers, the dimensionless flow resistance number and the dimensionless gravitational force number, are identified along with a unique hydraulic time scale characterizing the system hydraulic response. A full-height, full-pressure model is also constructed to determine whether the full-height or the reduced-height model better preserves the hydraulic behavior of the prototype. 
By satisfying the equality of both dimensionless numbers

  9. Scaling Analysis of the Single-Phase Natural Circulation: the Hydraulic Similarity

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Xin-Guo; Choi, Ki-Yong [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    These passive safety systems all rely on natural circulation to cool down the reactor cores during an accident. Thus, a robust and accurate scaling methodology must be developed and employed both to assist in the design of a scaled-down test facility and to guide the tests so that they mimic the natural circulation flow of the prototype. A natural circulation system generally consists of a heat source, connecting pipes, and several heat sinks. Although many appealing scaling methodologies have been proposed during the last several decades, few works have been dedicated to systematically analyzing and exactly preserving the hydraulic similarity. In the present study, hydraulic similarity analyses are performed at both the system and the local level. By this means, the scaling criteria for exact hydraulic similarity in a full-pressure model have been sought. In other words, not only the system-level but also the local-level hydraulic similarities are pursued. As the hydraulic characteristics of a fluid system are governed by the momentum equation, the scaling analysis starts there. A dimensionless integral loop momentum equation is derived, from which two dimensionless numbers, the dimensionless flow resistance number and the dimensionless gravitational force number, are identified along with a unique hydraulic time scale characterizing the system hydraulic response. A full-height, full-pressure model is also constructed to determine whether the full-height or the reduced-height model better preserves the hydraulic behavior of the prototype. 
By satisfying the equality of both dimensionless numbers

  10. Scaling laws for modeling nuclear reactor systems

    International Nuclear Information System (INIS)

    Nahavandi, A.N.; Castellana, F.S.; Moradkhanian, E.N.

    1979-01-01

    Scale models are used to predict the behavior of nuclear reactor systems during normal and abnormal operation as well as under accident conditions. Three types of scaling procedures are considered: time-reducing, time-preserving volumetric, and time-preserving idealized model/prototype. The necessary relations between the model and the full-scale unit are developed for each scaling type. Based on these relationships, it is shown that each scaling procedure can lead to distortion in certain areas, which are discussed. It is advised that, depending on the specific unit to be scaled, a suitable procedure be chosen to minimize model-prototype distortion
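As a worked example of the kind of model/prototype relation such procedures produce (the paper's own derivations are not reproduced here), classical Froude similitude for gravity-driven flow gives a time-reducing scaling: velocity scales as the square root of the length ratio, so time does too:

```python
import math

def froude_time_scale(length_ratio):
    """Time ratio under Froude similitude (gravity-dominated flow).

    Velocity scales as sqrt(Lr), so time scales as Lr / sqrt(Lr) = sqrt(Lr).
    """
    return math.sqrt(length_ratio)

# A 1:100 geometric model: transients run about ten times faster
# in the model than in the prototype.
Lr = 1.0 / 100.0
time_ratio = froude_time_scale(Lr)  # ~0.1
```

This is exactly the kind of compressed model time that the abstract's "time-reducing" category refers to, and it illustrates why a single similitude criterion cannot simultaneously preserve all time scales (e.g., heat-transfer response), which is one source of the distortions discussed above.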

  11. Development and Application of an Integrated Model for Representing Hydrologic Processes and Irrigation at Residential Scale in Semiarid and Mediterranean Regions

    Science.gov (United States)

    Herrera, J. B.; Gironas, J. A.; Bonilla, C. A.; Vera, S.; Reyes, F. R.

    2015-12-01

    Urbanization alters physical and biological processes that take place in natural environments. New impervious areas change the hydrological processes, reducing infiltration and evapotranspiration and increasing direct runoff volumes and flow discharges. To reduce these effects at the local scale, sustainable urban drainage systems, low impact development, and best management practices have been developed and implemented. These technologies, which typically consider some type of green infrastructure (GI), simulate natural processes of capture, retention, and infiltration to control flow discharges from frequent events and preserve the hydrological cycle. Applying these techniques in semiarid regions requires accounting for aspects related to the maintenance of green areas, such as irrigation needs and the selection of vegetation. This study develops the Integrated Hydrological Model at Residential Scale, IHMORS, a continuous model that simulates the most relevant hydrological processes together with the irrigation of green areas. In the model, contributing areas and drainage control practices are modeled by combining and connecting different subareas subjected to surface processes (i.e., interception, evapotranspiration, infiltration, and surface runoff) and sub-surface processes (percolation, redistribution, and subsurface runoff). The model simulates these processes and accounts for the dynamics of the water content in different soil layers. The different components of the model were first tested using laboratory and numerical experiments, and then an application to a case study was carried out. In this application we assess the long-term performance, in terms of runoff control and irrigation needs, of green gardens with different vegetation under different climate and irrigation practices. 
The model identifies significant differences in the performance of the alternatives and provides useful insight into the maintenance needs of GI for runoff control.
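IHMORS itself is not described in implementable detail in this record, but the surface-process bookkeeping such a model performs can be illustrated with a minimal single-layer soil-water bucket. All function and parameter names below are invented for the sketch, not taken from IHMORS:

```python
def bucket_step(storage, rain, irrigation, et_demand, capacity, perc_rate):
    """One daily step of a single-layer soil-water bucket (conceptual only).

    Water enters from rain + irrigation; it leaves as evapotranspiration
    (limited by what is stored), percolation (a fixed fraction of storage),
    and surface runoff (whatever exceeds the bucket's capacity).
    All depths are in mm.
    """
    storage += rain + irrigation
    et = min(et_demand, storage)        # ET limited by available water
    storage -= et
    perc = perc_rate * storage          # drainage to the layer below
    storage -= perc
    runoff = max(0.0, storage - capacity)
    storage = min(storage, capacity)
    return storage, runoff, et, perc

s, runoff, et, perc = bucket_step(
    storage=30.0, rain=20.0, irrigation=5.0,
    et_demand=4.0, capacity=40.0, perc_rate=0.05)
```

A continuous model like IHMORS chains many such steps per subarea, routes the runoff between connected subareas, and feeds the irrigation term from a scheduling rule; the mass balance in each step (inflow equals the change in storage plus all outflows) is what lets it track long-term irrigation needs.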

  12. Groundwater Flow and Thermal Modeling to Support a Preferred Conceptual Model for the Large Hydraulic Gradient North of Yucca Mountain

    International Nuclear Information System (INIS)

    McGraw, D.; Oberlander, P.

    2007-01-01

    The purpose of this study is to report the results of a preliminary modeling framework to investigate the causes of the large hydraulic gradient north of Yucca Mountain. This study builds on the Saturated Zone Site-Scale Flow and Transport Model (referenced herein as the Site-scale model (Zyvoloski, 2004a)), which is a three-dimensional saturated zone model of the Yucca Mountain area. Groundwater flow was simulated under natural conditions. The model framework and grid design describe the geologic layering, and the calibration parameters describe the hydrogeology. The Site-scale model is calibrated to hydraulic heads, fluid temperature, and groundwater flowpaths. One area of interest in the Site-scale model represents the large hydraulic gradient north of Yucca Mountain. Nearby water levels suggest over 200 meters of hydraulic head difference in less than 1,000 meters horizontal distance. Given the geologic conceptual models defined by various hydrogeologic reports (Faunt, 2000, 2001; Zyvoloski, 2004b), no definitive explanation has been found for the cause of the large hydraulic gradient. Luckey et al. (1996) present several possible explanations for the large hydraulic gradient: (1) the gradient is simply the result of flow through the upper volcanic confining unit, which is nearly 300 meters thick near the large gradient; (2) the gradient represents a semi-perched system in which flow in the upper and lower aquifers is predominantly horizontal, whereas flow in the upper confining unit would be predominantly vertical; (3) the gradient represents a drain down a buried fault from the volcanic aquifers to the lower Carbonate Aquifer; (4) the gradient represents a spillway in which a fault marks the effective northern limit of the lower volcanic aquifer; (5) the large gradient results from the presence at depth of the Eleana Formation, a part of the Paleozoic upper confining unit, which overlies the lower Carbonate Aquifer in much of the Death Valley region. The

  13. Modelling of Spring Constant and Pull-down Voltage of Non-uniform RF MEMS Cantilever Incorporating Stress Gradient

    Directory of Open Access Journals (Sweden)

    Shimul Chandra SAHA

    2008-11-01

    Full Text Available We have presented a model for the spring constant and pull-down voltage of a non-uniform radio frequency microelectromechanical systems (RF MEMS) cantilever that works on electrostatic actuation. The residual stress gradient in the beam material that may arise during the fabrication process is also considered in the model. Using a basic force-deflection calculation for the suspended beam, a stand-alone model for the spring constant and pull-down voltage of the non-uniform cantilever is developed. To validate the model, simulations were performed using standard Finite Element Method (FEM) analysis tools from CoventorWare. The model matches the FEM simulation results very well. The model will offer an efficient means of design, analysis, and optimization of RF MEMS cantilever switches.
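For the uniform-cantilever limiting case (the paper treats a non-uniform beam with a stress gradient, which is not reproduced here), the textbook closed forms for stiffness and electrostatic pull-in can be sketched as follows; the material and geometry values are illustrative, not taken from the paper:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def spring_constant_uniform(E, w, t, L):
    """End-loaded uniform cantilever stiffness: k = E*w*t^3 / (4*L^3)."""
    return E * w * t**3 / (4.0 * L**3)

def pull_down_voltage(k, g0, area):
    """Textbook parallel-plate pull-in voltage:
    V_pd = sqrt(8*k*g0^3 / (27*eps0*A))."""
    return math.sqrt(8.0 * k * g0**3 / (27.0 * EPS0 * area))

# Gold cantilever with illustrative dimensions
k = spring_constant_uniform(E=79e9, w=100e-6, t=2e-6, L=300e-6)
v = pull_down_voltage(k, g0=3e-6, area=100e-6 * 100e-6)  # a few volts
```

The paper's contribution is precisely that for a non-uniform beam with a residual stress gradient these closed forms no longer apply directly, so the effective spring constant must be rebuilt from the force-deflection calculation.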

  14. New signals for vector-like down-type quark in U(1) of E₆

    Energy Technology Data Exchange (ETDEWEB)

    Das, Kasinath; Rai, Santosh Kumar [Harish-Chandra Research Institute, HBNI, Regional Centre for Accelerator-based Particle Physics, Allahabad (India); Li, Tianjun [Chinese Academy of Sciences, CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Beijing (China); University of Chinese Academy of Sciences, School of Physical Sciences, Beijing (China); Nandi, S. [Oklahoma State University, Department of Physics and Oklahoma Center for High Energy Physics, Stillwater, OK (United States)

    2018-01-15

    We consider the pair production of vector-like down-type quarks in an E₆-motivated model, where each of the produced down-type vector-like quarks decays into an ordinary Standard Model light quark and a singlet scalar. Both the vector-like quark and the singlet scalar appear naturally in the E₆ model with masses at the TeV scale for a favorable choice of symmetry breaking pattern. We focus on the non-standard decay of the vector-like quark and the new scalar, which decays to two photons or two gluons. We analyze the signal for vector-like quark production in the 2γ + ≥ 2j channel and show how the scalar and vector-like quark masses can be determined at the Large Hadron Collider. (orig.)

  15. Down syndrome: coercion and eugenics.

    Science.gov (United States)

    McCabe, Linda L; McCabe, Edward R B

    2011-08-01

    Experts agree that coercion by insurance companies or governmental authorities to limit reproductive choice constitutes a eugenic practice. We discuss discrimination against families of children with Down syndrome who chose not to have prenatal testing or chose to continue a pregnancy after a prenatal diagnosis. We argue that this discrimination represents economic and social coercion to limit reproductive choice, and we present examples of governmental rhetoric and policies condoning eugenics and commercial policies meeting criteria established by experts for eugenics. Our purpose is to sensitize the clinical genetics community to these issues as we attempt to provide the most neutral nondirective prenatal genetic counseling we can, and as we provide postnatal care and counseling to children with Down syndrome and their families. We are concerned that if eugenic policies and practices targeting individuals with Down syndrome and their families are tolerated by clinical geneticists and the broader citizenry, then we increase the probability of eugenics directed toward other individuals and communities.

  16. Constraining Genome-Scale Models to Represent the Bow Tie Structure of Metabolism for 13C Metabolic Flux Analysis

    Directory of Open Access Journals (Sweden)

    Tyler W. H. Backman

    2018-01-01

    Full Text Available Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. ¹³C Metabolic Flux Analysis (¹³C MFA) and Two-Scale ¹³C Metabolic Flux Analysis (2S-¹³C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with ¹³C MFA or 2S-¹³C MFA, and also provide substantially lower flux bounds for fluxes into the core as compared with previous methods. 
We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore.
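The Simulated Annealing step can be sketched in miniature. The toy cost function below stands in for the real influx computation (which the paper derives from the genome-scale model via linear programming); every name and value here is invented for illustration:

```python
import math
import random

def anneal(candidates, influx_cost, n_steps=2000, seed=1):
    """Toy simulated annealing over reaction subsets (illustration only).

    State: a set of 'core' reactions; objective: total flux that must
    enter the core from the periphery (lower is better). influx_cost
    maps a frozenset of reaction names to that cost.
    """
    rng = random.Random(seed)
    state = frozenset(candidates)
    best = state
    for step in range(1, n_steps + 1):
        temp = 1.0 / step  # simple cooling schedule
        r = rng.choice(candidates)
        neighbor = state - {r} if r in state else state | {r}
        delta = influx_cost(neighbor) - influx_cost(state)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            state = neighbor
        if influx_cost(state) < influx_cost(best):
            best = state
    return best

def cost(core):
    # Pretend reactions 'a' and 'c' each pull 1 unit of peripheral flux
    # into the core, while dropping 'b' incurs a large penalty.
    c = 0.0
    if 'a' in core: c += 1.0
    if 'c' in core: c += 1.0
    if 'b' not in core: c += 10.0
    return c

best = anneal(['a', 'b', 'c'], cost)  # settles on the core {'b'}
```

In the real algorithm, evaluating `influx_cost` means re-solving the linear program for the candidate core, which is why the annealing loop, not the LP, dominates the run time.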

  17. Comparing models of the periodic variations in spin-down and beamwidth for PSR B1828-11

    Science.gov (United States)

    Ashton, G.; Jones, D. I.; Prix, R.

    2016-05-01

    We build a framework using tools from Bayesian data analysis to evaluate models explaining the periodic variations in spin-down and beamwidth of PSR B1828-11. The available data consist of the time-averaged spin-down rate, which displays a distinctive double-peaked modulation, and measurements of the beamwidth. Two concepts exist in the literature that are capable of explaining these variations; we formulate predictive models from these and quantitatively compare them. The first concept is phenomenological and stipulates that the magnetosphere undergoes periodic switching between two metastable states as first suggested by Lyne et al. The second concept, precession, was first considered as a candidate for the modulation of B1828-11 by Stairs et al. We quantitatively compare models built from these concepts using a Bayesian odds ratio. Because the phenomenological switching model itself was informed by these data in the first place, it is difficult to specify appropriate parameter-space priors that can be trusted for an unbiased model comparison. Therefore, we first perform a parameter estimation using the spin-down data, and then use the resulting posterior distributions as priors for model comparison on the beamwidth data. We find that a precession model with a simple circular Gaussian beam geometry fails to appropriately describe the data, while allowing for a more general beam geometry provides a good fit to the data. The resulting odds between the precession model (with a general beam geometry) and the switching model are estimated as 10^(2.7±0.5) in favour of the precession model.
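The odds-ratio machinery can be illustrated in miniature (this is not the authors' actual computation; the data, grid, and models below are invented): marginalize a Gaussian likelihood over a uniform prior grid for a free-mean model, compare against a fixed-mean model, and take the log of the evidence ratio:

```python
import math

def log_evidence(data, means, sigma=1.0):
    """Log marginal likelihood: a Gaussian likelihood averaged over a
    uniform prior grid of candidate means (log-sum-exp for stability)."""
    logls = []
    for mu in means:
        ll = sum(-0.5 * ((x - mu) / sigma) ** 2
                 - math.log(sigma * math.sqrt(2 * math.pi)) for x in data)
        logls.append(ll)
    m = max(logls)
    return m + math.log(sum(math.exp(l - m) for l in logls) / len(logls))

data = [0.9, 1.2, 1.1, 0.8, 1.0]              # synthetic, centered near 1
grid = [i / 50.0 for i in range(-100, 101)]   # prior: mean in [-2, 2]

# Positive log-odds favour the free-mean model over the fixed-mean (0) one.
log_odds = log_evidence(data, grid) - log_evidence(data, [0.0])
```

Note that the broad prior grid carries a built-in Occam penalty: the free-mean model is only rewarded to the extent that its better fit outweighs the prior mass spread over means the data rule out, which is the same trade-off behind the 10^(2.7±0.5) odds quoted in the abstract.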

  18. The Macdonald and Savage titrimetric procedure scaled down to 4 mg sized plutonium samples. P. 1

    International Nuclear Information System (INIS)

    Kuvik, V.; Lecouteux, C.; Doubek, N.; Ronesch, K.; Jammet, G.; Bagliano, G.; Deron, S.

    1992-01-01

    The original Macdonald and Savage amperometric method scaled down to milligram-sized plutonium samples was further modified. The electro-chemical process of each redox step and the end-point of the final titration were monitored potentiometrically. The method is designed to determine 4 mg of plutonium dissolved in nitric acid solution. It is suitable for the direct determination of plutonium in non-irradiated fuel with a uranium-to-plutonium ratio of up to 30. The precision and accuracy are ca. 0.05-0.1% (relative standard deviation). Although the procedure is very selective, the following species interfere: vanadyl(IV) and vanadate (almost quantitatively), neptunium (one electron exchange per mole), nitrites, fluorosilicates (milligram amounts yield a slight bias) and iodates. (author). 15 refs.; 8 figs.; 7 tabs

  19. MODELING THE SUN’S SMALL-SCALE GLOBAL PHOTOSPHERIC MAGNETIC FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, K. A. [Division of Computing and Mathematics, Abertay University, Kydd Building, Dundee, Bell Street, DD1 1HG, Scotland (United Kingdom); Mackay, D. H., E-mail: k.meyer@abertay.ac.uk [School of Mathematics and Statistics, University of St Andrews, North Haugh, St Andrews, KY16 9SS, Scotland (United Kingdom)

    2016-10-20

    We present a new model for the Sun’s global photospheric magnetic field during a deep minimum of activity, in which no active regions emerge. The emergence and subsequent evolution of small-scale magnetic features across the full solar surface is simulated, subject to the influence of a global supergranular flow pattern. Visually, the resulting simulated magnetograms reproduce the typical structure and scale observed in quiet Sun magnetograms. Quantitatively, the simulation quickly reaches a steady state, resulting in a mean field and flux distribution that are in good agreement with those determined from observations. A potential coronal magnetic field is extrapolated from the simulated full Sun magnetograms to consider the implications of such a quiet photospheric magnetic field on the corona and inner heliosphere. The bulk of the coronal magnetic field closes very low down, in short connections between small-scale features in the simulated magnetic network. Just 0.1% of the photospheric magnetic flux is found to be open at 2.5 R⊙, around 10–100 times less than that determined for typical Helioseismic and Magnetic Imager synoptic map observations. If such conditions were to exist on the Sun, this would lead to a significantly weaker interplanetary magnetic field than is currently observed, and hence a much higher cosmic ray flux at Earth.

  20. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  1. LES of n-Dodecane Spray Combustion Using a Multiple Representative Interactive Flamelets Model

    Directory of Open Access Journals (Sweden)

    Davidovic Marco

    2017-09-01

    Full Text Available A single-hole n-dodecane spray flame is studied in a Large-Eddy Simulation (LES) framework under Diesel-relevant conditions using a Multiple Representative Interactive Flamelets (MRIF) combustion model. Diesel spray combustion is strongly affected by the mixture formation process, which is dominated by several physical processes such as the flow within the injector, break-up of the liquid fuel jet, evaporation, and turbulent mixing with the surrounding gas. While the effects of nozzle-internal flow and primary breakup are captured within tuned model parameters in traditional Lagrangian spray models, an alternative approach is applied in this study, where the initial droplet conditions and primary fuel jet breakup are modeled based on results from highly resolved multiphase simulations with a resolved interface. A highly reduced chemical mechanism consisting of 57 species and 217 reactions has been developed for n-dodecane, achieving good computational performance in solving the chemical reactions. The MRIF model, which has demonstrated its capability of capturing combustion and pollutant formation under typical Diesel conditions in Reynolds-Averaged Navier-Stokes (RANS) simulations, is extended for application in LES. In the standard RIF combustion model, representative chemistry conditioned on mixture fraction is solved interactively with the flow. Subfilter-scale mixing is modeled by the scalar dissipation rate. While the standard RIF model only includes temporal changes of the scalar dissipation rate, the spatial distribution can be accounted for by extending the model to multiple flamelets, which also makes it possible to capture different fuel residence times. Overall, the model shows good agreement with experimental data regarding both low- and high-temperature combustion characteristics. It is shown that the ignition process and pollutant formation are affected by turbulent mixing. First, a cool flame is initiated at approximately

  2. Hydrogeologic Framework Model for the Saturated-Zone Site-Scale Flow

    Energy Technology Data Exchange (ETDEWEB)

    Z. Peterman

    2003-03-05

    Yucca Mountain is being evaluated as a potential site for development of a geologic repository for the permanent disposal of spent nuclear fuel and high-level radioactive waste. Ground water is considered to be the principal means for transporting radionuclides that may be released from the potential repository to the accessible environment, thereby possibly affecting public health and safety. The ground-water hydrology of the region is a result of both the arid climatic conditions and the complex geology. Ground-water flow in the Yucca Mountain region generally can be described as consisting of two main components: a series of relatively shallow and localized flow paths that are superimposed on deeper regional flow paths. A significant component of the regional ground-water flow is through a thick, generally deep-lying, Paleozoic carbonate rock sequence. Locally within the potential repository area, the flow is through a vertical sequence of welded and nonwelded tuffs that overlie the carbonate aquifer. Downgradient from the site, these tuffs terminate in basin fill deposits that are dominated by alluvium. Throughout the system, extensive and prevalent faults and fractures may control ground-water flow. The purpose of this Analysis/Modeling Report (AMR) is to document the three-dimensional (3D) hydrogeologic framework model (HFM) that has been constructed specifically to support development of a site-scale ground-water flow and transport model. Because the HFM provides the fundamental geometric framework for constructing the site-scale 3D ground-water flow model that will be used to evaluate potential radionuclide transport through the saturated zone (SZ) from beneath the potential repository to down-gradient compliance points, the HFM is important for assessing potential repository system performance. This AMR documents the progress of the understanding of the site-scale SZ ground-water flow system framework at Yucca Mountain based on data through July 1999. 

  3. Using resource graphs to represent conceptual change

    Directory of Open Access Journals (Sweden)

    Michael C. Wittmann

    2006-08-01

    Full Text Available We introduce resource graphs, a representation of linked ideas used when reasoning about specific contexts in physics. Our model is consistent with previous descriptions of coordination classes and resources. It represents mesoscopic scales that are neither knowledge-in-pieces nor large-scale concepts. We use resource graphs to describe several forms of conceptual change: incremental, cascade, wholesale, and dual construction. For each, we give evidence from the physics education research literature illustrating that form of conceptual change. Where possible, we compare our representation to models used by other researchers. Building on our representation, we analyze another form of conceptual change, differentiation, and suggest several experimental studies that would help clarify the differences between reform-based curricula.

  4. Auditory function in the Tc1 mouse model of down syndrome suggests a limited region of human chromosome 21 involved in otitis media.

    Directory of Open Access Journals (Sweden)

    Stephanie Kuhn

    Full Text Available Down syndrome is one of the most common congenital disorders leading to a wide range of health problems in humans, including frequent otitis media. The Tc1 mouse carries a significant part of human chromosome 21 (Hsa21) in addition to the full set of mouse chromosomes and shares many phenotypes observed in humans affected by Down syndrome with trisomy of chromosome 21. However, it is unknown whether Tc1 mice exhibit a hearing phenotype and might thus represent a good model for understanding the hearing loss that is common in Down syndrome. In this study we carried out a structural and functional assessment of hearing in Tc1 mice. Auditory brainstem response (ABR) measurements in Tc1 mice showed normal thresholds compared to littermate controls, and ABR waveform latencies and amplitudes were equivalent to controls. The gross anatomy of the middle and inner ears was also similar between Tc1 and control mice. The physiological properties of cochlear sensory receptors (inner and outer hair cells: IHCs and OHCs) were investigated using single-cell patch clamp recordings from acutely dissected cochleae. Adult Tc1 IHCs exhibited normal resting membrane potentials and expressed all K+ currents characteristic of control hair cells. However, the size of the large-conductance (BK) Ca2+-activated K+ current (IK,f), which enables rapid voltage responses essential for accurate sound encoding, was increased in Tc1 IHCs. All physiological properties investigated in OHCs were indistinguishable between the two genotypes. The normal functional hearing and gross structural anatomy of the middle and inner ears in the Tc1 mouse contrast with those observed in the Ts65Dn model of Down syndrome, which shows otitis media. Genes that are trisomic in Ts65Dn but disomic in Tc1 may predispose to otitis media when an additional copy is active.

  5. Comparative performance of different scale-down simulators of substrate gradients in Penicillium chrysogenum cultures: the need of a biological systems response analysis.

    Science.gov (United States)

    Wang, Guan; Zhao, Junfei; Haringa, Cees; Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang; Deshmukh, Amit T; van Gulik, Walter; Heijnen, Joseph J; Noorman, Henk J

    2018-05-01

    In a 54 m³ large-scale penicillin fermentor, the cells experience substrate gradient cycles at timescales of the global mixing time, about 20-40 s. Here, we used an intermittent feeding regime (IFR) and a two-compartment reactor (TCR) to mimic these substrate gradients in laboratory-scale continuous cultures. The IFR was applied to simulate the substrate dynamics experienced by the cells at full scale at timescales of tens of seconds to minutes (30 s, 3 min and 6 min), while the TCR was designed to simulate substrate gradients at an applied mean residence time (τc) of 6 min. A biological systems analysis of the response of an industrial high-yielding P. chrysogenum strain has been performed in these continuous cultures. Compared to an undisturbed continuous feeding regime in a single reactor, the penicillin productivity (qPenG) was reduced in all scale-down simulators. The dynamic metabolomics data indicated that in the IFRs, the cells accumulated high levels of the central metabolites during the feast phase to actively cope with external substrate deprivation during the famine phase. In contrast, in the TCR system, the storage pools (e.g. mannitol and arabitol) made a large contribution to the carbon supply in the non-feed compartment. Further, transcript analysis revealed that all scale-down simulators gave different expression levels of the glucose/hexose transporter genes and the penicillin gene clusters. The results showed that qPenG did not correlate well with exposure to the substrate regimes (excess, limitation and starvation), but there was a clear inverse relation between qPenG and the intracellular glucose level. © 2018 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
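The feast/famine cycling imposed by an intermittent feeding regime can be sketched with a one-substrate ODE integration. All kinetic parameters and the feed rate below are invented for illustration; they are not values from the P. chrysogenum study.

```python
# Sketch of IFR feast/famine substrate cycles: a pulse feed followed by a
# famine phase in which Monod uptake draws the substrate down. All numbers
# are illustrative assumptions, not values from the study.
q_max, Ks, X = 0.05, 0.01, 10.0   # Monod uptake (g/g/h), affinity (g/L), biomass (g/L)
cycle, feast = 180.0, 30.0        # 3-min cycle with a 30-s feed pulse (s)
feed_rate = 2e-4                  # substrate supply during the pulse (g/(L*s), assumed)

s, dt, trace = 0.0, 0.1, []
for step in range(int(3 * cycle / dt)):           # three full cycles
    t = step * dt
    feeding = (t % cycle) < feast                 # pulse on/off
    uptake = (q_max / 3600.0) * s / (Ks + s) * X  # Monod uptake, per second
    s = max(s + dt * ((feed_rate if feeding else 0.0) - uptake), 0.0)
    trace.append(s)

peak, trough = max(trace), trace[-1]              # end of the last famine phase
print(f"peak {peak:.2e} g/L, end-of-famine {trough:.2e} g/L")
```

The substrate rises during the 30-s pulse and decays during the famine phase, which is the exposure pattern the IFR imposes on the cells.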

  6. Thermo-mechanical behaviour modelling of particle fuels using a multi-scale approach

    International Nuclear Information System (INIS)

    Blanc, V.

    2009-12-01

    Particle fuels are made of a few thousand spheres, about one millimetre in diameter, composed of uranium oxide coated with confinement layers and embedded in a graphite matrix to form the fuel element. The aim of this study is to develop a new simulation tool for the thermo-mechanical behaviour of these fuels under irradiation, able to finely predict the local loadings on the particles. We use the squared finite element (FE²) method, in which two different discretization scales are used: a macroscopic homogeneous structure whose properties at each integration point are computed on a second, heterogeneous microstructure, the Representative Volume Element (RVE). The first part of this work concerns the definition of this RVE. A morphological indicator based on the minimal distance between sphere centres permits the selection of random sets of microstructures. The elastic macroscopic response of the RVE, computed by finite elements, has been compared to an analytical model. Thermal and mechanical representativeness indicators of the local loadings have been built from the particle failure modes. A statistical study of these criteria on a hundred RVEs showed the importance of choosing a representative microstructure. To this end, an empirical model linking the morphological indicator to the mechanical indicator has been developed. The second part of the work deals with the scale-transition method, which is based on periodic homogenization. Considering a steady-state linear thermal problem with a heat source, it is shown that the heterogeneity of the heat source requires a second-order method to finely localize the thermal field. The non-linear mechanical problem has been treated using the iterative Cast3M algorithm, substituting a finite element computation on the RVE for the integration of the behaviour law. This algorithm has been validated and coupled with the thermal resolution in order to compute an irradiation loading. A computation on a complete fuel element
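The morphological indicator mentioned above (the minimal distance between sphere centres in a candidate RVE) is straightforward to compute; the sphere count, box size and particle diameter below are illustrative assumptions, not values from the thesis.

```python
# Sketch of the morphological indicator: minimal centre-to-centre distance
# among randomly placed spheres in a candidate RVE. Sizes are illustrative.
import random, itertools, math

random.seed(0)
n, box, d_particle = 30, 10.0, 1.0   # 30 spheres of diameter 1 in a 10x10x10 box

centres = [(random.uniform(0, box), random.uniform(0, box), random.uniform(0, box))
           for _ in range(n)]

def min_centre_distance(pts):
    """Smallest pairwise centre-to-centre distance (the indicator)."""
    return min(math.dist(a, b) for a, b in itertools.combinations(pts, 2))

indicator = min_centre_distance(centres)
overlapping = indicator < d_particle   # spheres overlap if centres are closer than a diameter
print(f"indicator = {indicator:.3f}, overlapping spheres: {overlapping}")
```

Candidate microstructures whose indicator signals overlap (or untypically close packing) would be rejected or flagged before the mechanical representativeness criteria are evaluated.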

  7. Spatial Variability in Column CO2 Inferred from High Resolution GEOS-5 Global Model Simulations: Implications for Remote Sensing and Inversions

    Science.gov (United States)

    Ott, L.; Putman, B.; Collatz, J.; Gregg, W.

    2012-01-01

    Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale between models and observations. OCO-2 footprints represent an area of several square kilometers, while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions is typically hundreds of kilometers, and grid cells often cover combinations of land, ocean and coastal areas, or areas of significant topographic, land cover, and population density variations. To improve understanding of the scales of atmospheric CO2 variability and the representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order of magnitude increase over typical global simulations of atmospheric composition, allowing new insight into small-scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half degree resolution that have been down-scaled to 10-km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse resolution models to represent these small-scale features. 
Additionally, model output is sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement
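The subgrid-variability diagnostic described above amounts to aggregating a fine-resolution field onto a coarse model grid and characterizing the within-cell spread. The synthetic "CO2" field below is an invented stand-in for the GEOS-5 output, used only to show the mechanics.

```python
# Block-aggregate a fine "10-km" field onto a coarse grid and quantify the
# subgrid variability a coarse model cannot resolve. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
fine = 400.0 + rng.normal(0.0, 1.0, size=(64, 64))   # ppm, fine-resolution field
fine[:32, :] += 2.0                                  # an imposed coarse-scale gradient

block = 16                                           # 16x16 fine cells per coarse cell
ny, nx = fine.shape[0] // block, fine.shape[1] // block
blocks = fine.reshape(ny, block, nx, block)

cell_mean = blocks.mean(axis=(1, 3))   # what a coarse model represents
cell_std = blocks.std(axis=(1, 3))     # subgrid spread it cannot resolve

print("coarse cell means:\n", np.round(cell_mean, 1))
print("mean subgrid std (ppm):", round(float(cell_std.mean()), 2))
```

In the study, distributions (not just standard deviations) are computed per 100-400 km area, which is the same aggregation with a histogram or PDF in place of `std`.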

  8. Rotor scale model tests for power conversion unit of GT-MHR

    Energy Technology Data Exchange (ETDEWEB)

    Baxi, C.B., E-mail: baxicb1130@hotmail.com [General Atomics, P.O. Box 85608, San Diego, CA 92186-5608 (United States); Telengator, A.; Razvi, J. [General Atomics, P.O. Box 85608, San Diego, CA 92186-5608 (United States)

    2012-10-15

    The gas turbine modular helium reactor (GT-MHR) combines a modular high-temperature gas-cooled reactor (HTGR) nuclear heat source with a closed Brayton gas-turbine cycle power conversion unit (PCU) for thermal to electric energy conversion. The PCU has a vertical orientation and is supported on electromagnetic bearings (EMB). The rotor scale model (RSM) tests are intended to directly model the control of the EMBs and the rotor dynamic characteristics of the full-scale GT-MHR turbo-machine (TM). The objectives of the RSM tests are to: (1) confirm the EMB control system design for the GT-MHR turbo-machine over the full range of operation; (2) confirm the redundancy and on-line maintainability features that have been specified for the EMBs; (3) provide a benchmark for validation of analytical tools that will be used for independent analyses of the EMB subsystem design; and (4) provide experience with the installation, operation and maintenance of EMBs supporting multiple rotors with flexible couplings. As with the full-scale TM, the RSM incorporates two rotors that are joined by a flexible coupling. Each of the rotors is supported on one axial and two radial EMBs. Additional devices, similar in concept to radial EMBs, are installed to simulate magnetic and/or mechanical forces representing those that would be seen by the exciter, generator, compressors and turbine. Overall, the length of the RSM rotor is about one-third that of the full-scale TM, while the diameters are approximately one-fifth scale. The design and sizing of the rotor is such that the number and values of critical speeds in the RSM are the same as in the full-scale TM. The EMBs are designed such that their response to rotor dynamic forces is representative of the full-scale TM. The fabrication and assembly of the RSM was completed at the end of 2008. All start-up adjustments were finished in December 2009. To date, the generator rotor has been supported in the EMBs and rotated up to 1800 rpm. 
Final tests are
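A rough consistency check of the quoted scale factors: for a uniform Euler-Bernoulli shaft (our idealization, not the RSM design basis), bending critical speeds scale as d/L² times √(E/ρ), so the 1/3-length, 1/5-diameter geometry shifts the critical speeds unless the design compensates.

```python
# Back-of-the-envelope scaling of bending critical speeds for the RSM.
# Uniform-shaft idealization: omega ~ (d / L**2) * sqrt(E / rho).
# The 1/3 and 1/5 factors come from the abstract; same material assumed.
length_scale = 1.0 / 3.0
diameter_scale = 1.0 / 5.0

# E and rho cancel for the same material, leaving a purely geometric ratio.
speed_ratio = diameter_scale / length_scale**2
print(f"model critical speeds ~ {speed_ratio:.2f}x the full-scale values")
```

The geometric ratio alone is 1.8, which is why "the design and sizing of the rotor" must be tuned so that the number and values of critical speeds still match the full-scale TM.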

  9. Pulsar slow-down epochs

    International Nuclear Information System (INIS)

    Heintzmann, H.; Novello, M.

    1981-01-01

    The relative importance of magnetospheric currents and low frequency waves for pulsar braking is assessed, and a model is developed which tries to account for the available pulsar timing data under the unifying assumption that all pulsars have equal masses and magnetic moments and are born as rapid rotators. Four epochs of slow-down are distinguished, dominated by different braking mechanisms. According to the model, no direct relationship exists between 'slow-down age' and the true age of a pulsar; the model leads to a pulsar birth-rate of one event per hundred years. (Author) [pt
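The gap between "slow-down age" and true age can be illustrated with the standard characteristic age τ_c = P / (2·Ṗ), which assumes pure magnetic-dipole braking and a negligible birth period. The Crab pulsar numbers below are approximate literature values used only as an example.

```python
# Characteristic (slow-down) age of the Crab pulsar vs its known true age.
# tau_c = P / (2 * Pdot) assumes dipole braking from a fast birth spin.
P = 0.0334          # period (s), approximate
Pdot = 4.2e-13      # period derivative (s/s), approximate

SECONDS_PER_YEAR = 3.156e7
tau_c = P / (2.0 * Pdot) / SECONDS_PER_YEAR
print(f"characteristic age ~ {tau_c:.0f} yr vs ~970 yr since SN 1054")
```

Even for this well-dated pulsar the two ages disagree by roughly 30%, and under the epoch-dependent braking of the model above the discrepancy can be far larger.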

  10. Feedforward and feedback frequency-dependent interactions in a large-scale laminar network of the primate cortex.

    Science.gov (United States)

    Mejias, Jorge F; Murray, John D; Kennedy, Henry; Wang, Xiao-Jing

    2016-11-01

    Interactions between top-down and bottom-up processes in the cerebral cortex hold the key to understanding attentional processes, predictive coding, executive control, and a gamut of other brain functions. However, the underlying circuit mechanism remains poorly understood and represents a major challenge in neuroscience. We approached this problem using a large-scale computational model of the primate cortex constrained by new directed and weighted connectivity data. In our model, the interplay between feedforward and feedback signaling depends on the cortical laminar structure and involves complex dynamics across multiple (intralaminar, interlaminar, interareal, and whole cortex) scales. The model was tested by reproducing, as well as providing insights into, a wide range of neurophysiological findings about frequency-dependent interactions between visual cortical areas, including the observation that feedforward pathways are associated with enhanced gamma (30 to 70 Hz) oscillations, whereas feedback projections selectively modulate alpha/low-beta (8 to 15 Hz) oscillations. Furthermore, the model reproduces a functional hierarchy based on frequency-dependent Granger causality analysis of interareal signaling, as reported in recent monkey and human experiments, and suggests a mechanism for the observed context-dependent hierarchy dynamics. Together, this work highlights the necessity of multiscale approaches and provides a modeling platform for studies of large-scale brain circuit dynamics and functions.

  11. Models of Small-Scale Patchiness

    Science.gov (United States)

    McGillicuddy, D. J.

    2001-01-01

    Patchiness is perhaps the most salient characteristic of plankton populations in the ocean. The scale of this heterogeneity spans many orders of magnitude in its spatial extent, ranging from planetary down to microscale. It has been argued that patchiness plays a fundamental role in the functioning of marine ecosystems, insofar as the mean conditions may not reflect the environment to which organisms are adapted. Understanding the nature of this patchiness is thus one of the major challenges of oceanographic ecology. The patchiness problem is fundamentally one of physical-biological-chemical interactions. This interconnection arises from three basic sources: (1) ocean currents continually redistribute dissolved and suspended constituents by advection; (2) space-time fluctuations in the flows themselves impact biological and chemical processes; and (3) organisms are capable of directed motion through the water. This tripartite linkage poses a difficult challenge to understanding oceanic ecosystems: differentiation between the three sources of variability requires accurate assessment of property distributions in space and time, in addition to detailed knowledge of organismal repertoires and the processes by which ambient conditions control the rates of biological and chemical reactions. Various methods of observing the ocean tend to lie parallel to the axes of the space/time domain in which these physical-biological-chemical interactions take place. Given that a purely observational approach to the patchiness problem is not tractable with finite resources, the coupling of models with observations offers an alternative which provides a context for synthesis of sparse data with articulations of fundamental principles assumed to govern functionality of the system. In a sense, models can be used to fill the gaps in the space/time domain, yielding a framework for exploring the controls on spatially and temporally intermittent processes. 
The following discussion highlights
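The advective redistribution described in source (1) above is easy to caricature with a one-dimensional advection-diffusion-growth model of a plankton patch. Everything below is a schematic sketch with invented parameters, not a model from this chapter.

```python
# 1-D sketch: a Gaussian plankton patch advected by a mean current, spread
# by eddy diffusion, and growing slowly. Parameters are illustrative.
import numpy as np

nx, dx = 200, 500.0                 # grid cells, spacing (m) -> 100 km periodic domain
u, kappa, mu = 0.1, 50.0, 1e-6      # current (m/s), diffusivity (m^2/s), growth (1/s)
dt = 0.4 * min(dx / u, dx**2 / (2 * kappa))   # stable explicit time step

x = np.arange(nx) * dx
P = np.exp(-((x - 30e3) ** 2) / (2 * (5e3) ** 2))   # patch centred at 30 km

for _ in range(500):
    dPdx = (np.roll(P, -1) - np.roll(P, 1)) / (2 * dx)          # advection (periodic)
    d2Pdx2 = (np.roll(P, -1) - 2 * P + np.roll(P, 1)) / dx**2   # diffusion
    P = P + dt * (-u * dPdx + kappa * d2Pdx2 + mu * P)

print(f"patch centre now near {x[np.argmax(P)] / 1e3:.0f} km")
```

The patch translates downstream while spreading, the simplest instance of how flow alone generates and reshapes spatial structure before biology or behaviour is even considered.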

  12. One-scale supersymmetric inflationary models

    International Nuclear Information System (INIS)

    Bertolami, O.; Ross, G.G.

    1986-01-01

    The reheating phase is studied in a class of supergravity inflationary models involving a two-component hidden sector in which the scale of supersymmetry breaking and the scale generating inflation are related. It is shown that these models have an ''entropy crisis'' in which there is a large entropy release after nucleosynthesis, leading to unacceptably low nuclear abundances. (orig.)

  13. Discriminative phenomenological features of scale invariant models for electroweak symmetry breaking

    Directory of Open Access Journals (Sweden)

    Katsuya Hashino

    2016-01-01

    Full Text Available Classical scale invariance (CSI) may be one of the solutions for the hierarchy problem. Realistic models for electroweak symmetry breaking based on CSI require extended scalar sectors without mass terms, and the electroweak symmetry is broken dynamically at the quantum level by the Coleman–Weinberg mechanism. We discuss discriminative features of these models. First, using the experimental value of the mass of the discovered Higgs boson h(125), we obtain an upper bound on the mass of the lightest additional scalar boson (≃543 GeV), which does not depend on its isospin and hypercharge. Second, a discriminative prediction on the Higgs–photon–photon coupling is given as a function of the number of charged scalar bosons, by which we can narrow down possible models using current and future data for the di-photon decay of h(125). Finally, for the triple Higgs boson coupling a large deviation (∼+70%) from the SM prediction is universally predicted, which is independent of masses, quantum numbers and even the number of additional scalars. These models based on CSI can be well tested at LHC Run II and at future lepton colliders.

  14. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from the micro-scale.

  15. Use of cooling tower blow down in ethanol fermentation.

    Science.gov (United States)

    Rajagopalan, N; Singh, V; Panno, B; Wilcoxon, M

    2010-01-01

    Reducing water consumption in bioethanol production conserves an increasingly scarce natural resource, lowers production costs, and minimizes effluent management issues. The suitability of cooling tower blow down water for reuse in fermentation was investigated as a means to lower water consumption. Extensive chemical characterization of the blow down water revealed low concentrations of toxic elements and total dissolved solids. Fermentation carried out with cooling tower blow down water resulted in levels of ethanol and residual glucose similar to those of a control study using deionized water. The study noted good tolerance by yeast to the specific scale and corrosion inhibitors found in the cooling tower blow down water. This research indicates that, under appropriate conditions, reuse of blow down water from cooling towers in fermentation is feasible.

  16. Unitarity bounds on low scale quantum gravity

    International Nuclear Information System (INIS)

    Atkins, Michael; Calmet, Xavier

    2010-01-01

    We study the unitarity of models with low scale quantum gravity both in four dimensions and in models with a large extra-dimensional volume. We find that models with low scale quantum gravity have problems with unitarity below the scale at which gravity becomes strong. An important consequence of our work is that their first signal at the Large Hadron Collider would not be of a gravitational nature such as graviton emission or small black holes, but rather would be linked to the mechanism which fixes the unitarity problem. We also study models with scalar fields with non-minimal couplings to the Ricci scalar. We consider the strength of gravity in these models and study the consequences for inflation models with non-minimally coupled scalar fields. We show that a single scalar field with a large non-minimal coupling can lower the Planck mass in the TeV region. In that model, it is possible to lower the scale at which gravity becomes strong down to 14 TeV without violating unitarity below that scale. (orig.)

  17. Coupling scales for modelling heavy metal vaporization from municipal solid waste incineration in a fluid bed by CFD

    Energy Technology Data Exchange (ETDEWEB)

    Soria, José, E-mail: jose.soria@probien.gob.ar [Institute for Research and Development in Process Engineering, Biotechnology and Alternative Energies (PROBIEN, CONICET – UNCo), 1400 Buenos Aires St., 8300 Neuquén (Argentina); Gauthier, Daniel; Flamant, Gilles [Processes, Materials and Solar Energy Laboratory (PROMES-CNRS, UPR 8521), 7 Four Solaire Street, Odeillo, 66120 Font-Romeu (France); Rodriguez, Rosa [Chemical Engineering Institute, National University of San Juan, 1109 Libertador (O) Avenue, 5400 San Juan (Argentina); Mazza, Germán [Institute for Research and Development in Process Engineering, Biotechnology and Alternative Energies (PROBIEN, CONICET – UNCo), 1400 Buenos Aires St., 8300 Neuquén (Argentina)

    2015-09-15

    Highlights: • A CFD two-scale model is formulated to simulate heavy metal vaporization from waste incineration in fluidized beds. • MSW particle is modelled with the macroscopic particle model. • Influence of bed dynamics on HM vaporization is included. • CFD predicted results agree well with experimental data reported in literature. • This approach may be helpful for fluidized bed reactor modelling purposes. - Abstract: Municipal Solid Waste Incineration (MSWI) in fluidized bed is a very interesting technology mainly due to high combustion efficiency, great flexibility for treating several types of waste fuels and reduction in pollutants emitted with the flue gas. However, there is a great concern with respect to the fate of heavy metals (HM) contained in MSW and their environmental impact. In this study, a coupled two-scale CFD model was developed for MSWI in a bubbling fluidized bed. It presents an original scheme that combines a single particle model and a global fluidized bed model in order to represent the HM vaporization during MSW combustion. Two of the most representative HM (Cd and Pb) with bed temperatures ranging between 923 and 1073 K have been considered. This new approach uses ANSYS FLUENT 14.0 as the modelling platform for the simulations along with a complete set of self-developed user-defined functions (UDFs). The simulation results are compared to the experimental data obtained previously by the research group in a lab-scale fluid bed incinerator. The comparison indicates that the proposed CFD model predicts well the evolution of the HM release for the bed temperatures analyzed. It shows that both bed temperature and bed dynamics have influence on the HM vaporization rate. It can be concluded that CFD is a rigorous tool that provides valuable information about HM vaporization and that the original two-scale simulation scheme adopted allows the actual particle behavior in a fluid bed incinerator to be better represented.
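A common way to represent the temperature dependence of vaporization in such incineration models is a first-order Arrhenius rate constant. The pre-exponential factor and activation energy below are invented purely for illustration and are not taken from this study.

```python
# Hedged sketch of an Arrhenius-type heavy-metal vaporization rate constant
# over the studied bed-temperature range. A and Ea are assumed values.
import math

R = 8.314            # gas constant, J/(mol K)
A = 1.0e4            # pre-exponential factor (1/s), assumed
Ea = 1.2e5           # activation energy (J/mol), assumed

def vaporization_rate(T):
    """First-order Arrhenius rate constant at bed temperature T (K)."""
    return A * math.exp(-Ea / (R * T))

k_low, k_high = vaporization_rate(923.0), vaporization_rate(1073.0)
print(f"k(923 K) = {k_low:.3e} 1/s, k(1073 K) = {k_high:.3e} 1/s, "
      f"ratio = {k_high / k_low:.1f}")
```

Even with these placeholder constants, the exponential form shows why the 923-1073 K bed-temperature range spans a large change in release rate, consistent with the reported sensitivity of HM vaporization to bed temperature.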

  19. Model parameters for representative wetland plant functional groups

    Science.gov (United States)

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in
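The measured LAI and light extinction coefficient k enter process-based plant growth models through the Beer-Lambert law for canopy light interception. The example values below are illustrative, not measurements from this study.

```python
# Beer-Lambert canopy light interception: f = 1 - exp(-k * LAI).
import math

def fraction_intercepted(lai, k):
    """Fraction of incoming light intercepted by a canopy (Beer-Lambert law)."""
    return 1.0 - math.exp(-k * lai)

for lai in (1.0, 3.0, 6.0):          # illustrative LAI values
    k = 0.5                          # illustrative extinction coefficient
    print(f"LAI={lai:.0f}, k={k}: intercepted fraction = "
          f"{fraction_intercepted(lai, k):.2f}")
```

This is why the paired LAI and k estimates reported per functional group are the key inputs for simulating light capture, and hence growth, of each wetland plant type.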

  20. Development of a Representative Mouse Model with Nonalcoholic Steatohepatitis.

    Science.gov (United States)

    Verbeek, Jef; Jacobs, Ans; Spincemaille, Pieter; Cassiman, David

    2016-06-01

    Non-alcoholic fatty liver disease (NAFLD) is the most prevalent liver disease in the Western world. It represents a disease spectrum ranging from isolated steatosis to non-alcoholic steatohepatitis (NASH). In particular, NASH can evolve to fibrosis, cirrhosis, hepatocellular carcinoma, and liver failure. The development of novel treatment strategies is hampered by the lack of representative NASH mouse models. Here, we describe a NASH mouse model that is based on feeding non-genetically manipulated C57BL/6J mice a 'Western style' high-fat/high-sucrose diet (HF-HSD). HF-HSD leads to early obesity, insulin resistance, and hypercholesterolemia. After 12 weeks of HF-HSD, all mice exhibit the complete spectrum of features of NASH, including steatosis, hepatocyte ballooning, and lobular inflammation, together with fibrosis in the majority of mice. Hence, this model closely mimics the human disease. Implementation of this mouse model will lead to a standardized setup for the evaluation of (i) underlying mechanisms that contribute to the progression of NAFLD to NASH, and (ii) therapeutic interventions for NASH. Copyright © 2016 John Wiley & Sons, Inc.

  1. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity does not by itself guarantee that it will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard (S) stimuli differing in luminance were presented regularly (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended to the sequential regularity in luminance, not when they ignored the stimuli or merely attended to the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  2. Network model of top-down influences on local gain and contextual interactions in visual cortex.

    Science.gov (United States)

    Piëch, Valentin; Li, Wu; Reeke, George N; Gilbert, Charles D

    2013-10-22

    The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours. It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.
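    As a much-reduced illustration of the mechanism described, not the paper's network of biophysically realistic neurons, a two-unit rate model in which a top-down gain g scales only the effective strength of the intrinsic horizontal input shows contextual facilitation growing with g:

```python
import math

def contour_response(ff, g, w=0.4, iters=300):
    """Steady-state response of a rate unit receiving feedforward drive
    `ff` plus horizontal input from a reciprocally connected neighbor.
    The hypothetical top-down gain `g` multiplies only the intrinsic
    (horizontal) term, leaving the feedforward drive unchanged."""
    r1 = r2 = 0.0
    for _ in range(iters):
        r1 = math.tanh(ff + g * w * r2)
        r2 = math.tanh(ff + g * w * r1)
    return r1
```

    With ff = 0.5, raising g from 0 to 2 monotonically increases the contour unit's response, a toy analogue of task-dependent facilitation of contour-related responses.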

  3. Satisfaction with life scale in a representative sample of Spanish adults: validation and normative data.

    Science.gov (United States)

    Vázquez, Carmelo; Duque, Almudena; Hervás, Gonzalo

    2013-01-01

    The Satisfaction with Life Scale (SWLS) is a widely used measure of life satisfaction. This paper aims to test its psychometric properties, factor structure, and distribution of scores across age, gender, education, and employment status. For this purpose, a representative sample of the Spanish population (N = 2,964) was used. Although analyses showed no significant differences across age or gender, participants with a higher education level and those who held an occupation were more satisfied with their lives. Confirmatory factor analysis revealed a unifactorial structure, with significant correlations between the SWLS and subjective happiness and social support. The internal consistency of the scale was .88. Thus, our results indicate that the Spanish version of the SWLS is a valid and reliable measure of life satisfaction within the Spanish context.
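    The internal consistency figure (.88) is Cronbach's alpha. A self-contained sketch of the computation on simulated data (not the SWLS sample):

```python
import random

def cronbach_alpha(items):
    """Cronbach's alpha for `items`, a list of k per-item score lists
    (one score per respondent in each list)."""
    k = len(items)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Respondent totals across all items.
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Five simulated items driven by one latent trait plus noise.
random.seed(0)
latent = [random.gauss(0, 1) for _ in range(500)]
items = [[t + random.gauss(0, 0.5) for t in latent] for _ in range(5)]
print(round(cronbach_alpha(items), 2))  # high, as expected for one latent trait
```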

  4. Explicitly represented polygon wall boundary model for the explicit MPS method

    Science.gov (United States)

    Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori

    2015-05-01

    This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, for treating arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, a mesh-free particle method for strong-form partial differential equations. The ERP model expresses wall boundaries as polygons, which are represented explicitly without a distance function. The polygons are formulated so that, for viscous fluids and at lower computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, with results obtained by other models, and with experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the ERP model within the E-MPS method.
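    In two dimensions, the core geometric operation behind a polygon wall treatment of this kind, finding the nearest wall point without a precomputed distance field, reduces to a clamped projection onto each wall segment. A sketch of the idea, not the E-MPS implementation:

```python
def closest_point_on_segment(p, a, b):
    """Closest point to particle position `p` on the wall segment from
    `a` to `b` (2-D tuples): project p onto the segment's supporting
    line, then clamp the parameter to [0, 1]."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx ** 2 + aby ** 2)
    t = max(0.0, min(1.0, t))  # clamp to the segment endpoints
    return (ax + t * abx, ay + t * aby)
```

    Repeating this over a polygon's segments and taking the minimum distance gives the wall-proximity information a particle needs, directly from the polygon geometry.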

  5. Towards integrated modelling of soil organic carbon cycling at landscape scale

    Science.gov (United States)

    Viaud, V.

    2009-04-01

    Soil organic carbon (SOC) is recognized as a key factor in the chemical, biological and physical quality of soil. Numerous models of soil organic matter turnover have been developed since the 1930s, most of them dedicated to plot-scale applications. More recently, they have been applied at national scales to establish the inventories of carbon stocks required by the Kyoto Protocol. However, only a few studies consider the intermediate landscape scale, where the spatio-temporal pattern of land management practices, its interactions with the physical environment and its impacts on SOC dynamics can be investigated to provide guidelines for sustainable management of soils in agricultural areas. Modelling SOC cycling at this scale requires access to accurate, spatially explicit input data on soils (SOC content, bulk density, depth, texture) and land use (land cover, farm practices), and the combination of both data sets in a relevant integrated landscape representation. The purpose of this paper is to present a first approach to modelling SOC evolution in a small catchment, addressing more specifically the impact of the way the landscape is represented on SOC stocks in the catchment. This study was based on the field map, the soil survey, the crop rotations and the land management practices of an actual 10-km² agricultural catchment located in Brittany (France). The RothC model was used to drive soil organic matter dynamics. A landscape representation in the form of a systematic regular grid, where driving properties vary continuously in space, was compared to a representation in which the landscape is subdivided into a set of homogeneous geographical units. This preliminary work made it possible to identify future needs for improving integrated soil-landscape modelling in agricultural areas.
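    For reference, RothC-style turnover is first-order decay of a few conceptual pools. The sketch below uses the published default rate constants but omits RothC's rate-modifying factors for temperature, moisture and soil cover (here collapsed into a single illustrative `modifier`), as well as the partitioning of plant inputs between pools:

```python
import math

# Decomposable/resistant plant material, microbial biomass, humified
# organic matter; decomposition rate constants in 1/yr (RothC defaults).
RATES = {"DPM": 10.0, "RPM": 0.3, "BIO": 0.66, "HUM": 0.02}

def step_month(stocks, modifier=1.0):
    """One month of first-order decay of SOC pool stocks (t C/ha);
    `modifier` stands in for the combined climate/cover rate factor."""
    return {pool: c * math.exp(-RATES[pool] * modifier / 12.0)
            for pool, c in stocks.items()}
```

    Running `step_month` in a loop per field, with pool stocks and modifiers varying by soil unit and crop rotation, is essentially what a spatially explicit landscape application of such a model amounts to.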

  6. Proportion-corrected scaled voxel models for Japanese children and their application to the numerical dosimetry of specific absorption rate for frequencies from 30 MHz to 3 GHz

    International Nuclear Information System (INIS)

    Nagaoka, Tomoaki; Watanabe, Soichi; Kunieda, Etsuo

    2008-01-01

    The development of high-resolution anatomical voxel models of children is difficult given, inter alia, the ethical limitations on subjecting children to medical imaging. We instead used an existing voxel model of a Japanese adult and three-dimensional deformation to develop three voxel models that match the average body proportions of Japanese children at 3, 5 and 7 years old. The adult model was deformed to match the proportions of a child by using the measured dimensions of various body parts of children at 3, 5 and 7 years old and a free-form deformation technique. The three developed models represent average-size Japanese children of the respective ages. They consist of cubic voxels (2 mm on each side) and are segmented into 51 tissues and organs. We calculated the whole-body-averaged specific absorption rates (WBA-SARs) and tissue-averaged SARs for the child models for exposures to plane waves from 30 MHz to 3 GHz; these results were then compared with those for scaled-down adult models. We also determined the incident electric-field strength required to produce an exposure equivalent to the ICNIRP basic restriction for general public exposure, i.e., a WBA-SAR of 0.08 W kg⁻¹.
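    The whole-body-averaged SAR in such dosimetry is simply the total absorbed power divided by total body mass, whereas a tissue-averaged SAR divides each tissue's absorbed power by that tissue's mass. A trivial sketch with made-up numbers:

```python
def wba_sar(tissue_masses_kg, tissue_powers_w):
    """Whole-body-averaged SAR (W/kg): total absorbed power over total
    mass, summed across all segmented tissues/organs."""
    return sum(tissue_powers_w) / sum(tissue_masses_kg)
```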

  7. Have East Asian stock markets calmed down? Evidence from a regime-switching model

    NARCIS (Netherlands)

    Chaudhuri, K.R.; Klaassen, F.

    2001-01-01

    The 1997-98 East Asian crisis was accompanied by high volatility of East Asian stock returns. This paper examines whether the volatility has already come down to the level of the years before the crisis. We use a regime-switching model to account for possible structural change in the unconditional
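    The kind of two-state Markov regime-switching volatility model used in such studies can be sketched as follows (parameters illustrative, not the paper's estimates):

```python
import random

def simulate_regime_switching(n, p_stay=0.98, sig=(0.01, 0.04), seed=1):
    """Simulate returns r_t = sigma(s_t) * e_t, where s_t is a
    persistent two-state (calm/crisis) Markov chain with probability
    `p_stay` of remaining in the current regime each period."""
    random.seed(seed)
    state, rets, states = 0, [], []
    for _ in range(n):
        if random.random() > p_stay:  # regime switch
            state = 1 - state
        rets.append(random.gauss(0.0, sig[state]))
        states.append(state)
    return rets, states
```

    Estimation then asks which regime (and which unconditional variance) best describes each period of the observed return series; testing whether post-crisis data are back in the low-volatility regime is the question the paper addresses.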

  8. Drift-Scale THC Seepage Model

    International Nuclear Information System (INIS)

    C.R. Bryan

    2005-01-01

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. 
The DST THC submodel uses a drift-scale

  9. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Multi-Scale Analysis for Characterizing Near-Field Constituent Concentrations in the Context of a Macro-Scale Semi-Lagrangian Numerical Model

    Science.gov (United States)

    Yearsley, J. R.

    2017-12-01

    The semi-Lagrangian numerical scheme employed by RBM, a model for simulating time-dependent, one-dimensional water quality constituents in advection-dominated rivers, is highly scalable both in time and space. Although the model has been used at length scales of 150 meters and time scales of three hours, the majority of applications have been at length scales of 1/16th degree latitude/longitude (about 5 km) or greater and time scales of one day. Applications of the method at these scales have proven successful for characterizing the impacts of climate change on water temperatures in global rivers and the vulnerability of thermoelectric power plants to changes in cooling water temperatures in large river systems. However, local effects can be very important in terms of ecosystem impacts, particularly in the case of developing mixing zones for wastewater discharges with pollutant loadings limited by regulations imposed under the Federal Water Pollution Control Act (FWPCA). Mixing zone analyses have usually been decoupled from large-scale watershed influences by developing scenarios that represent critical streamflow and weather conditions. By taking advantage of the particle-tracking characteristics of the numerical scheme, RBM can provide results at any point in time within the model domain. We develop a proof of concept for locations in the river network where local impacts such as mixing zones may be important. Simulated results from the semi-Lagrangian numerical scheme are treated as input to a finite-difference model of the two-dimensional diffusion equation for water quality constituents such as water temperature or toxic substances. Simulations will provide time-dependent, two-dimensional constituent concentrations in the near field in response to long-term basin-wide processes. These results could provide decision support to water quality managers for evaluating mixing zone characteristics.
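    A minimal explicit finite-difference step for the near-field 2-D diffusion component might look like the following (our sketch, not the RBM code; boundary cells are simply held fixed, e.g. at the far-field concentration supplied by the 1-D model):

```python
def diffuse_step(c, D, dx, dt):
    """One explicit (FTCS) step of dc/dt = D * (d2c/dx2 + d2c/dy2) on a
    uniform grid `c` (list of rows). Stable for D*dt/dx**2 <= 0.25.
    Boundary cells are not updated, acting as fixed-value boundaries."""
    ny, nx = len(c), len(c[0])
    new = [row[:] for row in c]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            lap = (c[j][i + 1] + c[j][i - 1] +
                   c[j + 1][i] + c[j - 1][i] - 4.0 * c[j][i])
            new[j][i] = c[j][i] + D * dt / dx ** 2 * lap
    return new
```

    Stepping this grid forward with boundary values taken from the semi-Lagrangian solution is the essence of the proposed one-way coupling.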

  11. The use of soil moisture - remote sensing products for large-scale groundwater modeling and assessment

    NARCIS (Netherlands)

    Sutanudjaja, E.H.

    2012-01-01

    In this thesis, the possibilities of using spaceborne remote sensing for large-scale groundwater modeling are explored. We focus on a soil moisture product called European Remote Sensing Soil Water Index (ERS SWI, Wagner et al., 1999) - representing the upper profile soil moisture. As a test-bed, we

  12. Theory of two-photon interactions with broadband down-converted light and entangled photons

    International Nuclear Information System (INIS)

    Dayan, Barak

    2007-01-01

    When two-photon interactions are induced by down-converted light with a bandwidth that exceeds the pump bandwidth, they can exhibit a behavior that is pulselike temporally, yet spectrally narrow. At low photon fluxes this behavior reflects the time and energy entanglement between the down-converted photons. However, two-photon interactions such as two-photon absorption (TPA) and sum-frequency generation (SFG) can exhibit such a behavior even at high power levels, as long as the final state (i.e., the atomic level in TPA, or the generated light in SFG) is narrow-band enough. This behavior does not depend on the squeezing properties of the light, is insensitive to linear losses, and has potential applications. In this paper we describe this behavior analytically for traveling-wave down-conversion with continuous or pulsed pumping, in both the high- and low-power regimes. For this we derive a quantum-mechanical expression for the down-converted amplitude generated by an arbitrary pump, and formulate operators that represent various two-photon interactions induced by broadband light. This model is in excellent agreement with experimental results of TPA and SFG with high-power down-converted light and with entangled photons [Dayan et al., Phys. Rev. Lett. 93, 023005 (2004); 94, 043602 (2005); Pe'er et al., ibid. 94, 073601 (2005)].

  13. Fermion loops in the effective potential of N = 1 supergravity, with application to no-scale models

    International Nuclear Information System (INIS)

    Burton, J.W.

    1990-01-01

    Powerful and quite general arguments suggest that N = 1 supergravity, and in particular the superstring-inspired no-scale models, may describe the physics of the four-dimensional vacuum at energy densities below the Planck scale. These models are not renormalizable, since they arise as effective theories after the large masses have been integrated out of the fundamental theory; thus, they have divergences in their loop amplitudes that must be regulated by imposing a cutoff. Before physics at experimental energies can be extracted from these models, the true vacuum state or states must be identified: at tree level, the ground states of the effective theories are highly degenerate. Radiative corrections at the one-loop level have been shown to break the degeneracy sufficiently to identify the states of vanishing vacuum energy. As the concluding step in a program to calculate these corrections within a self-consistent cutoff prescription, all fermionic one-loop divergent corrections to the scalar effective potential are evaluated. (The corresponding bosonic contributions have been found elsewhere.) The total effective scalar Lagrange density for N = 1 supergravity is written down, and comments are made about cancellations between the fermionic and bosonic loops. Finally, the result is specialized to a toy no-scale model with a single generation of matter fields, and prospects for eventual phenomenological constraints on theories of this type are briefly discussed. 48 refs
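    For orientation, one-loop divergent corrections of the kind computed here are conventionally organized as a supertrace over the field-dependent mass matrix. A generic cutoff form (a schematic sketch, with coefficient conventions that vary between papers, not the result derived in this work) is

    V_1 \simeq \frac{1}{64\pi^2}\,\mathrm{Str}\!\left[ c_0\,\Lambda^4 \mathbf{1} + 2\,\Lambda^2 \mathcal{M}^2 + \mathcal{M}^4\!\left( \ln\frac{\mathcal{M}^2}{\Lambda^2} - \frac{1}{2} \right) \right],

    where c_0 is a field-independent constant, Λ is the cutoff, and Str weights bosonic and fermionic degrees of freedom with opposite signs. In spontaneously broken supergravity Str 1 and Str M² need not vanish, which is why the fermionic loop contributions must be tracked separately from the bosonic ones before cancellations between them can be identified.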

  14. Effects of head down tilt on episcleral venous pressure in a rabbit model.

    Science.gov (United States)

    Lavery, W J; Kiel, J W

    2013-06-01

    In humans, changing from upright to supine elicits an approximately 10 mmHg increase in cephalic venous pressure caused by the hydrostatic column effect, but episcleral venous pressure (EVP) and intraocular pressure (IOP) rise by only a few mmHg. The dissociation between the small increases in IOP and EVP and the larger increase in cephalic venous pressure suggests a regulatory mechanism controlling EVP. The aim of the present study was to determine whether the rabbit model is suitable for studying the effects of postural changes on EVP despite its short hydrostatic column. In anesthetized rabbits (n = 43), we measured arterial pressure (AP), IOP, and orbital venous pressure (OVP) by direct cannulation; carotid blood flow (BFcar) by transit-time ultrasound; heart rate (HR) by digital cardiotachometer; and EVP with a servonull micropressure system. The goal of the protocol was to obtain measurement of supine EVP for ≈10 min, followed by ≈10 min of EVP measurement with the rabbit in a head down tilt. The data were analyzed by paired t-tests and the results reported as the mean ± standard error of the mean. In a separate group of animals (n = 35), aqueous flow was measured by fluorophotometry. This protocol entailed measurement of aqueous flow in the supine position for ≈60 min, followed by ≈60 min of aqueous flow measurement with the rabbit in a head down tilt. From supine to head down tilt, AP and BFcar were unchanged, while IOP increased by 2.3 ± 0.4 mmHg. The preparation permits measurement of the pressures and systemic parameters likely involved in the EVP responses to posture change. The present results indicate directionally similar EVP and IOP responses to tilt as occur in humans and, as in humans, the responses are smaller than would be expected from the change in hydrostatic column height. Also, as in humans, the model reveals no change in aqueous flow during head down tilt. We conclude the rabbit model is appropriate for studying the mechanisms responsible for the relative
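    The hydrostatic column effect invoked above is just ρgh converted to mmHg. A quick sketch (blood density taken as roughly 1060 kg/m³):

```python
def hydrostatic_pressure_mmHg(height_m, rho=1060.0, g=9.81):
    """Pressure change (mmHg) from a hydrostatic column of blood of the
    given height; 1 mmHg = 133.322 Pa. A ~10 mmHg cephalic rise
    corresponds to a column of roughly 13 cm."""
    return rho * g * height_m / 133.322
```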

  15. Coupling scales for modelling heavy metal vaporization from municipal solid waste incineration in a fluid bed by CFD.

    Science.gov (United States)

    Soria, José; Gauthier, Daniel; Flamant, Gilles; Rodriguez, Rosa; Mazza, Germán

    2015-09-01

    Municipal Solid Waste Incineration (MSWI) in a fluidized bed is a very interesting technology, mainly due to its high combustion efficiency, great flexibility in treating several types of waste fuels, and reduction in pollutants emitted with the flue gas. However, there is great concern about the fate of heavy metals (HM) contained in MSW and their environmental impact. In this study, a coupled two-scale CFD model was developed for MSWI in a bubbling fluidized bed. It presents an original scheme that combines a single-particle model and a global fluidized bed model in order to represent HM vaporization during MSW combustion. Two of the most representative HM (Cd and Pb), with bed temperatures ranging between 923 and 1073 K, have been considered. This new approach uses ANSYS FLUENT 14.0 as the modelling platform for the simulations, along with a complete set of self-developed user-defined functions (UDFs). The simulation results are compared to the experimental data obtained previously by the research group in a lab-scale fluid bed incinerator. The comparison indicates that the proposed CFD model predicts the evolution of the HM release well for the bed temperatures analyzed. It shows that both bed temperature and bed dynamics influence the HM vaporization rate. It can be concluded that CFD is a rigorous tool that provides valuable information about HM vaporization and that the original two-scale simulation scheme adopted allows a better representation of the actual particle behavior in a fluid bed incinerator. Copyright © 2015 Elsevier Ltd. All rights reserved.
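    The bed-temperature sensitivity of the vaporization rate is commonly represented with an Arrhenius-type rate law. A simple stand-in for the temperature dependence the CFD model captures (A and Ea are illustrative placeholders, not fitted Cd or Pb values):

```python
import math

def vaporization_rate(T_bed, A=1.0e3, Ea=120e3):
    """Arrhenius-type first-order vaporization rate constant (1/s):
    k = A * exp(-Ea / (R*T)), with T_bed in kelvin, Ea in J/mol."""
    R = 8.314  # J/(mol K)
    return A * math.exp(-Ea / (R * T_bed))
```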

  16. Computational Fluid Dynamics Study on the Effects of RATO Timing on the Scale Model Acoustic Test

    Science.gov (United States)

    Nielsen, Tanner; Williams, B.; West, Jeff

    2015-01-01

    The Scale Model Acoustic Test (SMAT) is a 5% scale test of the Space Launch System (SLS), which is currently being designed at Marshall Space Flight Center (MSFC). The purpose of this test is to characterize and understand a variety of acoustic phenomena that occur during the early portions of lift off, one being the overpressure environment that develops shortly after booster ignition. The SLS lift off configuration consists of four RS-25 liquid engines on the core stage, with two solid boosters connected to each side. Past experience with scale model testing at MSFC (in ER42) has shown that there is a delay in the ignition of the Rocket Assisted Take Off (RATO) motor, which is used as the 5% scale analog of the solid boosters, after the signal to ignite is given. This delay can range from 0 to 16.5 ms. While a delay this small may be insignificant in the case of the full scale SLS, it can significantly alter the data obtained during the SMAT due to the much smaller geometry. The speed of sound of the air and combustion gas constituents is not scaled, and therefore the SMAT pressure waves propagate at approximately the same speed as at full scale. However, the SMAT geometry is much smaller, allowing the pressure waves to move down the exhaust duct, through the trench, and impact the vehicle model much faster than occurs at full scale. To better understand the effect of RATO timing simultaneity on the SMAT ignition overpressure (IOP) test data, a computational fluid dynamics (CFD) analysis was performed using the Loci/CHEM CFD software program. Five different timing offsets, based on RATO ignition delay statistics, were simulated. A variety of results and comparisons are given, assessing the overall effect of RATO timing simultaneity on the SMAT overpressure environment.
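    The scaling argument above can be made concrete with simple arithmetic: geometry is at 5% scale while the speed of sound is unscaled, so wave arrival times shrink by the geometric factor while the ignition-delay scatter does not. The acoustic path length below is a made-up placeholder, but the logic is general:

```python
scale = 0.05
c_sound = 340.0            # m/s, ambient air (hot combustion gas is faster)
full_scale_path = 100.0    # m, hypothetical duct-plus-trench acoustic path
model_path = full_scale_path * scale

t_full = full_scale_path / c_sound   # wave arrival time, full scale
t_model = model_path / c_sound       # wave arrival time, 5% model
delay = 0.0165                       # s, maximum observed ignition scatter

print(delay / t_full)    # small compared to 1: negligible at full scale
print(delay / t_model)   # order one: significant for the model test
```

    The same millisecond-scale scatter is thus an order-one fraction of the model's wave transit time, which is why the timing offsets had to be simulated explicitly.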

  17. Scale modelling in LMFBR safety

    International Nuclear Information System (INIS)

    Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.

    1979-01-01

    This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratios of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of material for the model is one with the same density and stress-strain relationship as the prototype material at the operating temperature. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain-rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used, to assure that heat transfer between high-temperature loading sources and the coolant simulant, and non-equilibrium effects in two-phase sources, are accounted for. For the geometry and flow conditions of interest, the
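    The similarity parameters described above imply fixed scale factors for a replica model built from materials of the same density and stiffness as the prototype. A hedged sketch from dimensional analysis (the factor list is illustrative, not the paper's full parameter set):

```python
def replica_scaling(length_ratio):
    """Model/prototype scale factors for a replica model of the same
    material (equal density and modulus). With stress and velocity
    preserved, time scales with length and strain rate inversely."""
    lam = length_ratio
    return {
        "length": lam,
        "stress_pressure": 1.0,    # same material, same characteristic pressure
        "velocity": 1.0,           # v ~ sqrt(stress / density)
        "time": lam,               # t ~ length / velocity
        "strain_rate": 1.0 / lam,  # higher in the model: why rate sensitivity is hard to scale
        "energy": lam ** 3,        # E ~ pressure * volume
    }
```

    For a 1/10-scale model, strain rates are ten times those of the prototype, which is exactly the strain-rate-sensitivity difficulty the review notes.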

  18. Representing the environment 3.0. Maps, models, networks.

    Directory of Open Access Journals (Sweden)

    Letizia Bollini

    2014-05-01

    Full Text Available Web 3.0 is changing the world we live in and the way we perceive the anthropized environment, creating a stratification of levels of experience mediated by devices. If the urban landscape is designed, shaped and planned space, there is also a social landscape that overwrites the territory with values, shared representations and images, and narratives of personal and collective history. Mobile technology introduces an additional parameter, a kind of non-place that allows the here and the elsewhere to coexist in a sort of digital landscape. Maps, mental models and the system of social networks thus become the way to present, be represented and represent oneself in a kind of ideal core sample of the co-present levels of physical, cognitive and collective space.

  19. Global scale groundwater flow model

    Science.gov (United States)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. The aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated the calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.
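    The steady-state head computation at the heart of such a model discretizes T∇²h + R = 0 on a grid and solves for heads. A toy Gauss-Seidel relaxation on a uniform grid with fixed-head boundaries (a sketch of the numerics, not MODFLOW itself; all values illustrative):

```python
def steady_heads(nx, ny, T, recharge, h_bc, iters=5000):
    """Gauss-Seidel relaxation for steady 2-D groundwater flow,
    T * laplacian(h) + R = 0, with uniform transmissivity T (m2/d),
    recharge R (m/d), unit cell size and fixed boundary head h_bc."""
    h = [[h_bc] * nx for _ in range(ny)]
    for _ in range(iters):
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                h[j][i] = 0.25 * (h[j][i + 1] + h[j][i - 1] +
                                  h[j + 1][i] + h[j - 1][i] + recharge / T)
    return h
```

    With positive recharge the water table mounds toward the domain centre, the simplest analogue of heads rising away from fixed-head surface-water boundaries.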

  20. A top-down bottom-up modeling approach to climate change policy analysis

    International Nuclear Information System (INIS)

    Tuladhar, Sugandha D.; Yuan, Mei; Bernstein, Paul; Montgomery, W. David; Smith, Anne

    2009-01-01

    This paper analyzes macroeconomic impacts of U.S. climate change policies for three different emissions pathways using a top-down bottom-up integrated model. The integrated model couples a technology-rich, bottom-up model of the U.S. electricity sector with a fully dynamic, forward-looking general equilibrium model of the U.S. economy. Our model provides a unique and consistent modeling framework for climate change analysis. Because of the model's detail and flexibility, we use it to examine additional scenarios to analyze many of the major uncertainties surrounding the implementation and impact of climate change policies - the role of command-and-control measures, loss in flexibility mechanisms such as banking, limits on low-emitting technology, and availability of offsets. The results consistently demonstrate that those policies that combine market-oriented abatement incentives with full flexibility are the most cost-effective. (author)
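    The coupling of a bottom-up sector model to a top-down economy model can be caricatured as a fixed-point iteration between a supply curve and a demand response. A toy sketch of that consistency loop (all coefficients are illustrative, not the paper's model):

```python
def couple(iters=100):
    """Iterate a 'bottom-up' price rule (marginal cost rising with
    demand) against a 'top-down' demand rule (demand falling with
    price) until the two scales agree; returns (price, demand)."""
    demand = 100.0
    for _ in range(iters):
        price = 20.0 + 0.1 * demand    # bottom-up: supply/cost curve
        demand = 120.0 - 0.5 * price   # top-down: demand response
    return price, demand
```

    Converging to a point where both relations hold simultaneously is the sense in which the integrated model provides a "consistent modeling framework" across the electricity sector and the wider economy.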

  1. Site scale groundwater flow in Haestholmen

    International Nuclear Information System (INIS)

    Loefman, J.

    1999-05-01

    Groundwater flow modelling on the site scale has been an essential part of the site investigation work carried out at different locations since 1986. The objective of the modelling has been to provide results that characterise the groundwater flow conditions deep in the bedrock. The main result quantities can be used for evaluation of the investigation sites and of the preconditions for safe final disposal of spent nuclear fuel. This study presents the groundwater flow modelling at Haestholmen, and it comprises the transient flow analysis taking into account the effects of density variations and the repository as well as the post-glacial land uplift. The analysis is performed by means of numerical finite element simulation of coupled and transient groundwater flow and solute transport, carried out up to 10000 years into the future. This work also provides the results for the site-specific data needs of the block-scale groundwater flow modelling at Haestholmen. Conceptually, the fractured bedrock is divided into hydraulic units: the planar fracture zones and the remaining part of the bedrock. The equivalent-continuum (EC) model is applied so that each hydraulic unit is treated as a homogeneous and isotropic continuum with representative average characteristics. All the fracture zones are modelled explicitly and represented by two-dimensional finite elements. A site-specific simulation model for groundwater flow and solute transport is developed on the basis of the latest hydrogeological and hydrogeochemical field investigations at Haestholmen. The present topography, together with a mathematical model describing the land uplift in the Haestholmen area, is employed as a boundary condition at the surface of the model. The overall flow pattern is mostly controlled by the local variations in the topography and by the highly transmissive fracture zones. Near the surface the flow spreads out offshore and to the lower areas of topography in all directions away from

  2. A Global Data Analysis for Representing Sediment and Particulate Organic Carbon Yield in Earth System Models

    Science.gov (United States)

    Tan, Zeli; Leung, L. Ruby; Li, Hongyi; Tesfa, Teklu; Vanmaercke, Matthias; Poesen, Jean; Zhang, Xuesong; Lu, Hui; Hartmann, Jens

    2017-12-01

    Although sediment yield (SY) from water erosion is ubiquitous and its environmental consequences are well recognized, its impacts on the global carbon cycle remain largely uncertain. This knowledge gap is partly due to the lack of soil erosion modeling in Earth System Models (ESMs), which are important tools used to understand the global carbon cycle and explore its changes. This study analyzed sediment and particulate organic carbon yield (CY) data from 1,081 and 38 small catchments (0.1-200 km2), respectively, in different environments across the globe. Using multiple statistical analysis techniques, we explored environmental factors and hydrological processes important for SY and CY modeling in ESMs. Our results show clear correlations of high SY with traditional agriculture, seismicity and heavy storms, as well as strong correlations between SY and annual peak runoff. These highlight the potential limitation of SY models that represent only interrill and rill erosion, because shallow overland flow and rill flow have, owing to their hydraulic geometry, too little transport capacity to produce high SY. Further, our results suggest that SY modeling in ESMs should be implemented at the event scale to capture the catastrophic mass transport during episodic events. Several environmental factors such as seismicity and land management that are often not considered in current catchment-scale SY models can be important in controlling global SY. Our analyses show that SY is likely the primary control on CY in small catchments, and a statistically significant empirical relationship is established to calculate SY and CY jointly in ESMs.
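    The abstract's closing claim, that SY can be used to calculate CY through an empirical relationship, can be sketched as a log-log regression. The functional form and every number below are illustrative assumptions for the sketch, not the study's fitted relation or its data:

```python
import numpy as np

def fit_loglog(sy, cy):
    """Fit log10(CY) = a + b*log10(SY) by ordinary least squares.

    A power-law (log-log linear) form is a common choice for
    sediment/carbon yield relations; the abstract does not give the
    actual functional form, so this is an illustrative stand-in.
    """
    x = np.log10(np.asarray(sy, dtype=float))
    y = np.log10(np.asarray(cy, dtype=float))
    b, a = np.polyfit(x, y, 1)          # slope, intercept
    r2 = np.corrcoef(x, y)[0, 1] ** 2   # goodness of fit
    return a, b, r2

def predict_cy(sy, a, b):
    """Predict particulate organic carbon yield from sediment yield."""
    return 10.0 ** (a + b * np.log10(sy))

# Synthetic demonstration data (t km-2 yr-1), NOT the study's catchments.
rng = np.random.default_rng(0)
sy = 10 ** rng.uniform(0, 4, 38)                      # 38 catchments, as in the study
cy = 0.02 * sy ** 0.9 * 10 ** rng.normal(0, 0.1, 38)  # hypothetical power law + scatter
a, b, r2 = fit_loglog(sy, cy)
```

    In an ESM the fitted pair (a, b) would let CY be diagnosed wherever SY is simulated, which is the joint-calculation idea the abstract describes.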

  3. A Global Data Analysis for Representing Sediment and Particulate Organic Carbon Yield in Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    Tan, Zeli [Pacific Northwest National Laboratory, Richland WA USA; Leung, L. Ruby [Pacific Northwest National Laboratory, Richland WA USA; Li, Hongyi [Montana State University, Bozeman MT USA; Tesfa, Teklu [Pacific Northwest National Laboratory, Richland WA USA; Vanmaercke, Matthias [Département de Géographie, Université de Liège, Liege Belgium; Poesen, Jean [Department of Earth and Environmental Sciences, Division of Geography, KU Leuven, Leuven Belgium; Zhang, Xuesong [Pacific Northwest National Laboratory, Richland WA USA; Lu, Hui [Ministry of Education Key Laboratory for Earth System Modeling, Department of Earth System Science, Tsinghua University, Beijing China; Hartmann, Jens [Institute for Geology, Center for Earth System Research and Sustainability, Universität Hamburg, Hamburg Germany

    2017-12-01

    Although sediment yield (SY) from water erosion is ubiquitous and its environmental consequences are well recognized, its impacts on the global carbon cycle remain largely uncertain. This knowledge gap is partly due to the lack of soil erosion modeling in Earth System Models (ESMs), which are important tools used to understand the global carbon cycle and explore its changes. This study analyzed sediment and particulate organic carbon yield (CY) data from 1,081 and 38 small catchments (0.1-200 km2), respectively, in different environments across the globe. Using multiple statistical analysis techniques, we explored environmental factors and hydrological processes important for SY and CY modeling in ESMs. Our results show clear correlations of high SY with traditional agriculture, seismicity and heavy storms, as well as strong correlations between SY and annual peak runoff. These highlight the potential limitation of SY models that represent only interrill and rill erosion, because shallow overland flow and rill flow have, owing to their hydraulic geometry, too little transport capacity to produce high SY. Further, our results suggest that SY modeling in ESMs should be implemented at the event scale to capture the catastrophic mass transport during episodic events. Several environmental factors such as seismicity and land management that are often not considered in current catchment-scale SY models can be important in controlling global SY. Our analyses show that SY is likely the primary control on CY in small catchments, and a statistically significant empirical relationship is established to calculate SY and CY jointly in ESMs.

  4. Extended consolidation of scaling laws of potentials covering over the representative tandem-mirror operations in GAMMA 10

    International Nuclear Information System (INIS)

    Cho, T.

    2002-01-01

    (i) A verification of our novel proposal of extended consolidation of the two major theories of Cohen's potential formation and Pastukhov's potential effectiveness is carried out by the use of a novel experimental mode with central ECH. The validity of the proposal provides a roadmap for bridging and combining the two present representative modes in GAMMA 10 for upgrading to hot-ion plasmas with high potentials. (ii) A novel efficient scaling of ion-confining potential formation due to plug ECH with barrier ECH is constructed as the extension of the IAEA 2000 scaling with plug ECH alone. The combination of the physics scaling of (i) with the externally controllable power scaling of (ii) provides a scalable way for future tandem-mirror research. The importance of the validity of the present consolidation is highlighted by the possibility of the extended capability inherent in Pastukhov's prediction of requiring 30 kV potentials for a fusion Q of unity with an application of Cohen's potential formation method. (author)

  5. Towards anatomic scale agent-based modeling with a massively parallel spatially explicit general-purpose model of enteric tissue (SEGMEnT_HPC).

    Science.gov (United States)

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are exemplars of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.

  6. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

    Science.gov (United States)

    Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

    2002-01-01

    Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best-fitting hierarchical linear model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age as the predictor of the growth trajectory.
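    The two-level structure described above (a growth line per individual, with person-level predictors of initial status and growth) can be sketched with simulated data. Every coefficient, sample size detail, and distribution below is invented for illustration and does not reproduce the paper's estimates; the two-stage fit is a simple stand-in for full hierarchical linear modeling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated longitudinal design loosely mirroring the study: 31 individuals,
# comprehension scores at six annual waves.
n, waves = 31, 6
entry_age = rng.uniform(5, 20, n)
stm = rng.normal(0, 1, n)                      # short-term memory composite
true_int = 40 + 1.2 * entry_age + 3.0 * stm + rng.normal(0, 2, n)
true_slp = 4.0 - 0.15 * entry_age + rng.normal(0, 0.5, n)

years = np.arange(waves)
scores = true_int[:, None] + true_slp[:, None] * years + rng.normal(0, 1.5, (n, waves))

# Level 1: one growth line per individual.
fits = np.array([np.polyfit(years, s, 1) for s in scores])  # columns: slope, intercept
est_slope, est_intercept = fits[:, 0], fits[:, 1]

# Level 2: regress the individual growth parameters on person-level predictors.
X = np.column_stack([np.ones(n), entry_age, stm])
beta_int, *_ = np.linalg.lstsq(X, est_intercept, rcond=None)   # initial status model
beta_slp, *_ = np.linalg.lstsq(X[:, :2], est_slope, rcond=None)  # growth model (age only)
```

    The level-2 coefficients recover the simulated effects of age and short-term memory on initial status, and of age on growth, which is the structure the abstract reports for comprehension.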

  7. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    Science.gov (United States)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models due to their computational efficiency and their ability to resolve the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties used to formulate simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART against remotely sensed soil moisture observations with spatially based model evaluation metrics.

  8. Modeling the intersections of Food, Energy, and Water in climate-vulnerable Ethiopia with an application to small-scale irrigation

    Science.gov (United States)

    Zhang, Y.; Sankaranarayanan, S.; Zaitchik, B. F.; Siddiqui, S.

    2017-12-01

    Africa is home to some of the most climate vulnerable populations in the world. Energy and agricultural development have diverse impacts on the region's food security and economic well-being from the household to the national level, particularly considering climate variability and change. Our ultimate goal is to understand coupled Food-Energy-Water (FEW) dynamics across spatial scales in order to quantify the sensitivity of critical human outcomes to FEW development strategies in Ethiopia. We are developing bottom-up and top-down multi-scale models, spanning local, sub-national and national scales to capture the FEW linkages across communities and climatic adaptation zones. The focus of this presentation is the sub-national scale multi-player micro-economic (MME) partial-equilibrium model with coupled food and energy sectors for Ethiopia. With fixed large-scale economic, demographic, and resource factors from the national scale computable general equilibrium (CGE) model and inferences of behavior parameters from the local scale agent-based model (ABM), the MME studies how shocks such as drought (crop failure) and the development of resilience technologies would influence the FEW system at a sub-national scale. The MME model is based on aggregating individual optimization problems for relevant players. It includes production, storage, and consumption of food and energy at spatially disaggregated zones, and transportation in between with endogenously modeled infrastructure. The aggregated players for each zone have different roles such as crop producers, storage managers, and distributors, who make decisions according to their own but interdependent objective functions. The food and energy supply chain across zones is therefore captured. Ethiopia is dominated by rain-fed agriculture with only 2% irrigated farmland. Small-scale irrigation has been promoted as a resilience technology that could potentially play a critical role in food security and economic well-being in

  9. The plastic rotation effect in an isotropic gradient plasticity model for applications at the meso scale

    NARCIS (Netherlands)

    Poh, Leong Hien; Peerlings, R.H.J.

    2016-01-01

    Although formulated to represent a large system of polycrystals at the macroscopic level, isotropic gradient plasticity models have routinely been adopted at the meso scale. For such purposes, it is crucial to incorporate the plastic rotation effect in order to obtain a reasonable approximation of

  10. On scaling of human body models

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2007-10-01

    Full Text Available The human body is not unique: individuals differ in anthropometry and mechanical characteristics, which means that dividing the human population into categories like the 5th-, 50th- and 95th-percentile is, from the application point of view, not enough. On the other hand, developing a particular human body model for each of us is not possible. That is why scaling and morphing algorithms have started to be developed. The current work describes the development of a tool for scaling of human models. The idea is to have one (or a couple of) standard model(s) as a base and to create other models from these basic models. One has to choose adequate anthropometric and biomechanical parameters that describe the given group of humans to be scaled and morphed.
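    The base-model idea can be illustrated with a deliberately minimal one-length-scale version. The reference values, parameter names, and scaling rules below are assumptions of this sketch; real morphing tools use many more anthropometric parameters than stature and mass:

```python
# Hypothetical 50th-percentile reference model (values are illustrative only).
REF = {"stature_m": 1.75, "mass_kg": 77.0, "femur_len_m": 0.45}

def scale_model(ref, stature_m, mass_kg):
    """Scale a reference model to a target subject.

    Lengths scale with the stature ratio, masses with the mass ratio,
    and inertia-like quantities as mass * length^2 under these
    assumptions. A sketch of the scaling idea, not a morphing tool.
    """
    lam = stature_m / ref["stature_m"]   # global length scale
    mu = mass_kg / ref["mass_kg"]        # global mass scale
    return {
        "stature_m": ref["stature_m"] * lam,
        "mass_kg": ref["mass_kg"] * mu,
        "femur_len_m": ref["femur_len_m"] * lam,
        "inertia_scale": mu * lam ** 2,  # multiplier for reference inertias
    }

# Example: derive a small-female model from the mid-size reference.
small_female = scale_model(REF, stature_m=1.52, mass_kg=49.0)
```

    Morphing goes beyond this by letting different body regions carry different scale factors, which is why the abstract stresses choosing an adequate set of anthropometric and biomechanical parameters.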

  11. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.
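    The headline finding, that area-averaged rainfall tracks the number of storms more than their intensity, can be illustrated with a toy stochastic model of a grid box; the distributions and parameters below are invented for the sketch and are not fitted to the Darwin observations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy grid-box rainfall: rain = (number of storms) x (per-storm intensity).
# A gamma-mixed Poisson gives the storm count realistic over-dispersion,
# while intensity varies comparatively little, mimicking the report's result.
n_steps = 5000
n_storms = rng.poisson(lam=rng.gamma(2.0, 2.0, n_steps))       # stochastic count
intensity = rng.lognormal(mean=0.0, sigma=0.3, size=n_steps)   # per-storm rain rate
rain = n_storms * intensity

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_count = corr(rain, n_storms)       # storm number vs area-mean rain
r_intensity = corr(rain, intensity)  # storm intensity vs area-mean rain
```

    With the count far more variable than the intensity, the count correlation dominates, which is the property that motivates parametrising the stochastic area (number) of convective clouds rather than their strength.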

  12. Drift-Scale THC Seepage Model

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Bryan

    2005-02-17

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration" (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, "Model Validation for the DS THC Seepage Model," of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for "Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms" (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPs not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPs in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, "Models". This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral

  13. Developing Multi-Level Institutions from Top-Down Ancestors

    Directory of Open Access Journals (Sweden)

    Martha Dowsley

    2007-11-01

    Full Text Available The academic literature contains numerous examples of the failures of both top-down and bottom-up common pool resource management frameworks. Many authors agree that management regimes instead need to utilize a multi-level governance approach to meet diverse objectives in management. However, many currently operating systems do not have that history. This paper explores the conversion of ancestral top-down regimes to complex systems involving multiple scales, levels and objectives through the management of the polar bear (Ursus maritimus) in its five range countries. The less successful polar bear management systems continue to struggle with the challenges of developing institutions with the capacity to learn and change, addressing multiple objectives while recognizing the conservation backbone to management, and matching the institutional scale with biophysical, economic and social scales. The comparatively successful institutions incorporate these features, but reveal on-going problems with vertical links that are partially dealt with through the creation of links to other groups.

  14. An Efficient Upscaling Process Based on a Unified Fine-scale Multi-Physics Model for Flow Simulation in Naturally Fracture Carbonate Karst Reservoirs

    KAUST Repository

    Bi, Linfeng

    2009-01-01

    The main challenges in modeling fluid flow through naturally-fractured carbonate karst reservoirs are how to address various flow physics in complex geological architectures due to the presence of vugs and caves which are connected via fracture networks at multiple scales. In this paper, we present a unified multi-physics model that adapts to the complex flow regime through naturally-fractured carbonate karst reservoirs. This approach generalizes the Stokes-Brinkman model (Popov et al. 2007). The fracture networks provide the essential connection between the caves in carbonate karst reservoirs. It is thus very important to resolve the flow in the fracture network and the interaction between fractures and caves to better understand the complex flow behavior. The idea is to use the Stokes-Brinkman model to represent flow through rock matrix, void caves as well as intermediate flows in very high permeability regions, and to use an idea similar to the discrete fracture network model to represent flow in the fracture network. Consequently, various numerical solution strategies can be efficiently applied to greatly improve the computational efficiency in flow simulations. We have applied this unified multi-physics model as a fine-scale flow solver in scale-up computations. Both local and global scale-up are considered. It is found that global scale-up is much more accurate than local scale-up. Global scale-up requires the solution of global flow problems on a fine grid, which generally is computationally expensive. The proposed model has the ability to deal with a large number of fractures and caves, which facilitates the application of the Stokes-Brinkman model in global scale-up computation. The proposed model flexibly adapts to the different flow physics in naturally-fractured carbonate karst reservoirs in a simple and effective way. It certainly extends modeling and predicting capability in efficient development of this important type of reservoir.
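    The appeal of the Brinkman term, one equation covering Darcy-like matrix flow and Stokes-like free flow through a single permeability field, can be sketched in one dimension. This toy finite-difference solver is an assumption-laden stand-in for the paper's 3-D Stokes-Brinkman formulation, with made-up permeabilities and geometry:

```python
import numpy as np

def brinkman_1d(n=201, L=1.0, mu=1.0, dpdx=-1.0,
                k_matrix=1e-6, k_free=1e6):
    """Solve the 1-D Brinkman equation  mu*u'' - (mu/k(x))*u = dp/dx
    with no-slip walls, on a domain whose middle third is a
    high-permeability 'vug' zone and whose outer thirds are rock matrix.

    In the matrix the permeability term dominates (Darcy-like trickle);
    in the free zone the viscous term dominates (Stokes-like profile).
    """
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    k = np.where((x > L / 3) & (x < 2 * L / 3), k_free, k_matrix)

    A = np.zeros((n, n))
    b = np.full(n, dpdx)
    for i in range(1, n - 1):
        A[i, i - 1] = mu / h ** 2
        A[i, i] = -2 * mu / h ** 2 - mu / k[i]
        A[i, i + 1] = mu / h ** 2
    A[0, 0] = A[-1, -1] = 1.0   # no-slip boundaries
    b[0] = b[-1] = 0.0
    return x, np.linalg.solve(A, b)

x, u = brinkman_1d()
```

    The same code handles both regimes because only k(x) changes between them, which is the property that makes the unified model convenient as a fine-scale solver in scale-up computations.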

  15. Representative Structural Element - A New Paradigm for Multi-Scale Structural Modeling

    Science.gov (United States)

    2016-07-05

    Prof. Yu was invited to give a seminar with the same title as this project at AFRL WPAFB (Nov. 4, 2013); the host was Dr. Steve Clay of the Aerospace System Directorate. Prof. Yu also had frequent interaction with Dr. Clay regarding damage modeling of composite laminates. A viscosity parameter is adopted to yield close predictions of the specimen responses. The load rate and the specimen mass density are properly selected to

  16. Evaluation of a plot-scale methane emission model using eddy covariance observations and footprint modelling

    Directory of Open Access Journals (Sweden)

    A. Budishchev

    2014-09-01

    Full Text Available Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from a low spatial representativeness and a poor temporal resolution. Also, during a chamber-flux measurement the air within a chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on the area-weighted averages of modelled fluxes, and by comparing those to the eddy covariance (EC) flux. This technique is rather inaccurate, as the area of upscaling might be different from the EC tower footprint, thereby introducing significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r2 = 0.7). In contrast, using the area-weighted average method yielded a low (r2 = 0.14) correlation with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.
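    The footprint-weighted average at the core of the approach is simple to state in code. The landscape, flux values, and footprint weights below are hypothetical, chosen only to show how the two aggregation methods can diverge when a strong emitter sits in the footprint:

```python
import numpy as np

def footprint_weighted_flux(modelled_flux, footprint_weight):
    """Aggregate plot-scale modelled CH4 fluxes to the EC tower scale.

    modelled_flux    : per-grid-cell flux from the plot-scale model
    footprint_weight : per-cell contribution of that cell to the EC
                       measurement (from a footprint model; non-negative)

    Each cell is weighted by how much it contributes to what the tower
    'sees', instead of by its area share -- the key difference from the
    area-weighted method criticised in the abstract.
    """
    w = np.asarray(footprint_weight, dtype=float)
    f = np.asarray(modelled_flux, dtype=float)
    return float(np.sum(w * f) / np.sum(w))

# Toy landscape: two vegetation classes with very different emissions.
flux = np.array([10.0, 10.0, 100.0])        # nmol m-2 s-1 per cell
area_share = np.array([0.45, 0.45, 0.10])   # cell fractions of the upscaling area
footprint = np.array([0.10, 0.10, 0.80])    # wind puts the hotspot upwind

area_avg = float(np.sum(area_share * flux))
fp_avg = footprint_weighted_flux(flux, footprint)
```

    Here the area-weighted average stays near the background value while the footprint-weighted average is dominated by the upwind hotspot, mirroring the mismatch the study reports for heterogeneous vegetation.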

  17. Assessing Religious Orientations: Replication and Validation of the Commitment-Reflectivity Circumplex (CRC) Model

    Directory of Open Access Journals (Sweden)

    Steven L. Isaak

    2017-09-01

    Full Text Available The Commitment-Reflectivity Circumplex (CRC) model is a structural model of religious orientation that was designed to help organize and clarify measurement of foundational aspects of religiousness. The current study successfully replicated the CRC model using multidimensional scaling, and further evaluated the reliability, structure, and validity of its measures in both a university student sample (Study 1) and a nationally representative sample (Study 2). All 10 subscales of the Circumplex Religious Orientation Inventory (CROI) demonstrated good reliability across both samples. A two-week test-retest of the CROI showed that the subscales are stable over time. A confirmatory factor analysis of the CROI in the representative adult sample demonstrated good model fit. Finally, the CROI's validity was examined in relation to the Intrinsic, Extrinsic and Quest measures. Overall, the CROI appears to clarify much of the ambiguity inherent in the established scales by breaking down what were very broad orientations into very specific suborientations. The results suggest that the CRC model is applicable for diverse populations of adults. In addition, the CROI appears to be construct valid with good structural and psychometric properties across all 10 subscales.
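    The multidimensional scaling step used to replicate the circumplex structure can be sketched with classical (Torgerson) MDS. The ten idealized scales below are synthetic points placed evenly on a circle, not CROI data; the point is only that MDS recovers a circumplex configuration from pairwise dissimilarities:

```python
import numpy as np

def classical_mds(d, dims=2):
    """Classical (Torgerson) multidimensional scaling.

    d: (n, n) matrix of pairwise dissimilarities. Returns an (n, dims)
    configuration whose pairwise distances approximate d.
    """
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dims]  # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Ten idealized 'orientation' scales placed evenly on a circumplex.
angles = np.linspace(0, 2 * np.pi, 10, endpoint=False)
truth = np.column_stack([np.cos(angles), np.sin(angles)])
dist = np.linalg.norm(truth[:, None] - truth[None, :], axis=-1)

coords = classical_mds(dist)
recovered = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
```

    With real scale intercorrelations one would first convert correlations to dissimilarities and then inspect whether the recovered two-dimensional configuration forms the hypothesized circular ordering.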

  18. Overfishing and nutrient pollution interact with temperature to disrupt coral reefs down to microbial scales.

    Science.gov (United States)

    Zaneveld, Jesse R; Burkepile, Deron E; Shantz, Andrew A; Pritchard, Catharine E; McMinds, Ryan; Payet, Jérôme P; Welsh, Rory; Correa, Adrienne M S; Lemoine, Nathan P; Rosales, Stephanie; Fuchs, Corinne; Maynard, Jeffrey A; Thurber, Rebecca Vega

    2016-06-07

    Losses of corals worldwide emphasize the need to understand what drives reef decline. Stressors such as overfishing and nutrient pollution may reduce resilience of coral reefs by increasing coral-algal competition and reducing coral recruitment, growth and survivorship. Such effects may themselves develop via several mechanisms, including disruption of coral microbiomes. Here we report the results of a 3-year field experiment simulating overfishing and nutrient pollution. These stressors increase turf and macroalgal cover, destabilizing microbiomes, elevating putative pathogen loads, increasing disease more than twofold and increasing mortality up to eightfold. Above-average temperatures exacerbate these effects, further disrupting microbiomes of unhealthy corals and concentrating 80% of mortality in the warmest seasons. Surprisingly, nutrients also increase bacterial opportunism and mortality in corals bitten by parrotfish, turning normal trophic interactions deadly for corals. Thus, overfishing and nutrient pollution impact reefs down to microbial scales, killing corals by sensitizing them to predation, above-average temperatures and bacterial opportunism.

  19. Scaling up depot medroxyprogesterone acetate (DMPA): a systematic literature review illustrating the AIDED model.

    Science.gov (United States)

    Curry, Leslie; Taylor, Lauren; Pallas, Sarah Wood; Cherlin, Emily; Pérez-Escamilla, Rafael; Bradley, Elizabeth H

    2013-08-02

    Use of depot medroxyprogesterone acetate (DMPA), often known by the brand name Depo-Provera, has increased globally, particularly in multiple low- and middle-income countries (LMICs). As a reproductive health technology that has scaled up in diverse contexts, DMPA is an exemplar product innovation with which to illustrate the utility of the AIDED model for scaling up family health innovations. We conducted a systematic review of the enabling factors and barriers to scaling up DMPA use in LMICs. We searched 11 electronic databases for academic literature published through January 2013 (n = 284 articles), and grey literature from major health organizations. We applied exclusion criteria to identify relevant articles from peer-reviewed (n = 10) and grey literature (n = 9), extracting data on scale up of DMPA in 13 countries. We then mapped the resulting factors to the five AIDED model components: ASSESS, INNOVATE, DEVELOP, ENGAGE, and DEVOLVE. The final sample of sources included studies representing variation in geographies and methodologies. We identified 15 enabling factors and 10 barriers to dissemination, diffusion, scale up, and/or sustainability of DMPA use. The greatest number of factors were mapped to the ASSESS, DEVELOP, and ENGAGE components. Findings offer early empirical support for the AIDED model, and provide insights into scale up of DMPA that may be relevant for other family planning product innovations.

  20. Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model

    Energy Technology Data Exchange (ETDEWEB)

    T. Miller

    2004-11-15

    The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the "Saturated Zone Site-Scale Flow Model" (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale

  1. Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model

    International Nuclear Information System (INIS)

    Miller, T.

    2004-01-01

    The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the "Saturated Zone Site-Scale Flow Model" (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale SZ flow model, the HFM

  2. Multi-scale viscosity model of turbulence for fully-developed channel flows

    International Nuclear Information System (INIS)

    Kriventsev, V.; Yamaguchi, A.; Ninokata, H.

    2001-01-01

The full text follows. The Multi-Scale Viscosity (MSV) model is proposed for estimation of the Reynolds stresses in turbulent fully-developed flow in a straight channel of an arbitrary shape. We assume that flow in an "ideal" channel is always stable, i.e. laminar, and that turbulence is a developing process of external perturbations caused by wall roughness and other factors. We also assume that real flows are always affected by perturbations of every scale lower than the size of the channel, and that turbulence is generated in the form of an internal, or "turbulent", viscosity increase that preserves the stability of the "disturbed" flow. The main idea of MSV can be expressed in the following phenomenological rule: a local deformation of axial velocity can generate turbulence with an intensity that keeps the value of the local turbulent Reynolds number below some critical value. Here, the local turbulent Reynolds number is defined as the product of the axial velocity deformation for a given scale and the generic length of this scale, divided by the accumulated value of laminar and turbulent viscosity of lower scales. In MSV, the only empirical parameter is the critical Reynolds number, which is estimated to be around 100. It corresponds to the largest scale, which is the hydraulic diameter of the channel, and therefore represents the regular Reynolds number. Thus, the value Re=100 corresponds to conditions in which turbulent flow can appear in the case of a "significant" (comparable with the size of the channel) velocity disturbance in the boundary and/or initial conditions for velocity. Of course, most real flows in channels with relatively smooth walls remain laminar at this small Reynolds number because of the absence of such "significant" perturbations. The MSV model has been applied to fully-developed turbulent flows in straight channels such as a circular tube and an annular channel. Friction factors and velocity profiles predicted with MSV are in very good agreement with numerous experimental data.
Position of
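As a reading aid, the phenomenological rule can be sketched in code; the function, the scale lengths and the deformation values below are hypothetical illustrations, not taken from the paper:

```python
RE_CRIT = 100.0  # the model's single empirical parameter

def msv_viscosity(scales, delta_u, nu_laminar):
    """Accumulate turbulent viscosity scale by scale so that the local
    turbulent Reynolds number never exceeds RE_CRIT.

    scales: generic lengths, smallest first (hypothetical values);
    delta_u: axial-velocity deformation at each scale."""
    nu = nu_laminar  # accumulated laminar + turbulent viscosity
    for length, du in zip(scales, delta_u):
        if du * length / nu > RE_CRIT:
            # raise viscosity just enough to restore stability
            nu = du * length / RE_CRIT
    return nu
```

With hypothetical inputs such as `msv_viscosity([0.001, 0.01, 0.1], [0.5, 1.0, 2.0], 1e-6)`, the accumulated viscosity grows until the Reynolds number at the largest scale sits at the critical value.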

  3. Discrete Element Method simulations of standing jumps in granular flows down inclines

    Directory of Open Access Journals (Sweden)

    Méjean Ségolène

    2017-01-01

Full Text Available This paper describes a numerical set-up which uses the Discrete Element Method to produce standing jumps in flows of dry granular materials down a slope in two dimensions. The grain-scale force interactions are modeled by a visco-elastic normal force and an elastic tangential force with a Coulomb threshold. We will show how it is possible to reproduce all the shapes of the jumps observed in a previous laboratory study: diffuse versus steep jumps and compressible versus incompressible jumps. Moreover, we will discuss the additional measurements that can be made thanks to discrete element modelling.
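The contact law summarized above (a visco-elastic normal force plus an elastic tangential force capped by a Coulomb threshold) can be sketched as follows; all parameter names and magnitudes are hypothetical:

```python
def contact_forces(overlap, rel_vn, tang_disp, kn, cn, kt, mu):
    """Grain-scale DEM contact law of the kind described in the
    abstract: spring-dashpot (visco-elastic) normal force and an
    elastic tangential force limited by Coulomb friction.
    Parameter names and magnitudes are hypothetical."""
    fn = max(0.0, kn * overlap - cn * rel_vn)  # no tensile contact force
    ft = kt * tang_disp
    limit = mu * fn  # Coulomb sliding threshold
    if abs(ft) > limit:
        ft = limit if ft > 0 else -limit  # grain slides at the cap
    return fn, ft
```

In a full DEM time step these forces would be summed over all contacts of a grain before integrating its motion.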

  4. Considering the spatial-scale factor when modelling sustainable land management.

    Science.gov (United States)

    Bouma, Johan

    2015-04-01

Considering the spatial-scale factor when modelling sustainable land management. J. Bouma, Em. prof. soil science, Wageningen University, Netherlands. Modelling soil-plant processes is a necessity when exploring future effects of climate change and innovative soil management on agricultural productivity. Soil data are needed to run models, and traditional soil maps and the associated databases (based on various soil taxonomies) have widely been applied to provide such data obtained at "representative" points in the field. Pedotransfer functions (PTF) are used to feed simulation models, statistically relating soil survey data (obtained at a given point in the landscape) to physical parameters for simulation, thus providing a link with soil functionality. Soil science has a basic problem: its object of study is invisible. Only point data are obtained by augering or in pits; only occasionally do roadcuts provide a better view. Extrapolating point data to areas is essential for all applications and presents a basic problem for soil science, because mapping units on soil maps, named for a given soil type, may also contain other soil types, and quantitative information about the composition of soil map units is usually not available. For detailed work at farm level (1:5,000-1:10,000), an alternative procedure is proposed: on-site soil observations are made in a grid pattern with spacings based on a geostatistical analysis. Multi-year simulations are made for each point of the functional properties that are relevant for the case being studied, such as the moisture supply capacity, nitrate leaching etc., under standardized boundary conditions to allow comparisons. Functional spatial units are derived next by aggregating functional point data. These units, which have successfully functioned as the basis for precision agriculture, do not necessarily correspond with taxonomic units, but when they do, the taxonomic names should be noted. At lower
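Point-based functional simulations like those described above rest on a simple daily soil water balance. A minimal single-day sketch (the stock-limited AET rule and the coefficients are assumptions for illustration, not the author's calibrated model):

```python
def daily_water_balance(stock, precip, dew, pet, capacity, drain_frac=0.05):
    """One day of a simplified bucket water balance: precipitation
    (rain plus snowmelt) and dew are positive inputs; AET is taken as
    a stock-limited fraction of PET; saturation excess runs off and a
    fixed fraction of the remaining stock drains (all coefficients
    hypothetical)."""
    stock += precip + dew                      # positive inputs
    aet = min(pet * stock / capacity, stock)   # actual evapotranspiration
    stock -= aet
    runoff = max(0.0, stock - capacity)        # saturation excess
    stock -= runoff
    drainage = drain_frac * stock              # deep percolation
    stock -= drainage
    return stock, aet, runoff, drainage
```

Iterating such a step over a multi-year forcing series yields the per-point functional properties (moisture supply, leaching) that are then aggregated into functional spatial units.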

  5. Downscaling modelling system for multi-scale air quality forecasting

    Science.gov (United States)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside the modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales, with nesting of higher resolution models into larger scale, lower resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. First is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with the atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. a k-ε linear eddy-viscosity model, a k-ε non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding interpolation conserving the mass. For the boundaries a

  6. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Science.gov (United States)

    Bányai, Fanni; Zsila, Ágnes; Király, Orsolya; Maraz, Aniko; Elekes, Zsuzsanna; Griffiths, Mark D; Andreassen, Cecilie Schou; Demetrovics, Zsolt

    2017-01-01

Despite social media use being one of the most popular activities among adolescents, prevalence estimates of (problematic) social media use among teenage samples are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group and reported low self-esteem, a high level of depression symptoms, and elevated social media use. Results also demonstrated that the BSMAS has appropriate psychometric properties. It is concluded that adolescents at risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  7. Meso-scale modelling of the heat conductivity effect on the shock response of a porous material

    Science.gov (United States)

    Resnyansky, A. D.

    2017-06-01

Understanding the deformation mechanisms of porous materials under shock compression is important for tailoring material properties during the shock manufacturing of advanced materials from substrate powders and for studying the response of porous materials under shock loading. The numerical set-up of the present work considers a set of solid particles separated by air, representing a volume of porous material. Condensed material in the meso-scale set-up is simulated with a viscoelastic rate-sensitive material model with heat conduction formulated from the principles of irreversible thermodynamics. The model is implemented in the CTH shock physics code. The meso-scale CTH simulation of the shock loading of the representative volume reveals the mechanism of pore collapse and shows in detail the transition from a high porosity case typical of an abnormal Hugoniot response to a moderate porosity case typical of a conventional Hugoniot response. Results of the analysis agree with previous analytical considerations and support hypotheses used in the two-phase approach.

  8. Constructing reservoir-scale 3D geomechanical FE-models. A refined workflow for model generation and calculation

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, K.; Henk, A. [Technische Univ. Darmstadt (Germany). Inst. fuer Angewandte Geowissenschaften

    2013-08-01

The tectonic stress field strongly affects the optimal exploitation of conventional and unconventional hydrocarbon reservoirs. Amongst others, wellbore stability, orientation of hydraulically induced fractures and - particularly in fractured reservoirs - permeability anisotropies depend on the magnitudes and orientations of the recent stresses. Geomechanical reservoir models can provide unique insights into the tectonic stress field, revealing the local perturbations resulting from faults and lithological changes. In order to provide robust predictions, such numerical models are based on the finite element (FE) method and account for the complexities of real reservoirs with respect to subsurface geometry, inhomogeneous material distribution and nonlinear rock mechanical behavior. We present a refined workflow for geomechanical reservoir modeling which allows for an easier set-up of the model geometry, high resolution submodels and faster calculation times due to element savings in the load frame. Transferring the reservoir geometry from the geological subsurface model, e.g., a Petrel® project, to the FE model represents a special challenge as the faults are discontinuities in the numerical model and no direct interface exists between the two software packages used. Point clouds displaying faults and lithostratigraphic horizons can be used for geometry transfer but this labor-intensive approach is not feasible for complex field-scale models with numerous faults. Instead, so-called Coons patches based on horizon lines, i.e. the intersection lines between horizons and faults, are well suited to re-generate the various surfaces in the FE software while maintaining their topology. High-resolution submodels of individual fault blocks can be incorporated into the field-scale model. This makes it possible to consider both a locally refined mechanical stratigraphy and the impact of the large-scale fault pattern. A pressure load on top of the model represents the

  9. Modeling Fluid’s Dynamics with Master Equations in Ultrametric Spaces Representing the Treelike Structure of Capillary Networks

    Directory of Open Access Journals (Sweden)

    Andrei Khrennikov

    2016-07-01

Full Text Available We present a new conceptual approach for modeling of fluid flows in random porous media based on explicit exploration of the treelike geometry of complex capillary networks. Such patterns can be represented mathematically as ultrametric spaces and the dynamics of fluids by ultrametric diffusion. The images of p-adic fields, extracted from the real multiscale rock samples and from some reference images, are depicted. In this model the porous background is treated as the environment contributing to the coefficients of evolutionary equations. For the simplest trees, these equations are essentially less complicated than those with fractional differential operators which are commonly applied in geological studies looking for some fractional analogs to conventional Euclidean space but with anomalous scaling and diffusion properties. It is possible to solve the former equation analytically and, in particular, to find stationary solutions. The main aim of this paper is to attract the attention of researchers working on modeling of geological processes to the novel ultrametric approach and to show some examples from the petroleum reservoir static and dynamic characterization, able to integrate the p-adic approach with multifractals, thermodynamics and scaling. We also present a non-mathematician-friendly review of trees and ultrametric spaces and pseudo-differential operators on such spaces.
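A small sketch of an ultrametric master operator on the leaves of a binary tree, with jump rates decaying with ultrametric distance; the kernel and its decay base are hypothetical stand-ins for the paper's p-adic diffusion coefficients:

```python
import numpy as np

def ultrametric_distance(i, j):
    """Leaves of a binary tree are indexed 0..2**depth - 1; the
    ultrametric distance is the level of the lowest common ancestor,
    i.e. the position of the highest differing bit."""
    return (i ^ j).bit_length()

def master_operator(depth, a=2.0):
    """Rate matrix of an ultrametric random walk: jump rates decay
    with ultrametric distance as a**(-d) (a hypothetical kernel).
    Rows sum to zero, so probability is conserved."""
    n = 2 ** depth
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i, j] = a ** (-ultrametric_distance(i, j))
        W[i, i] = -W[i].sum()  # conservation on the diagonal
    return W
```

Evolving a concentration vector with `expm(t * W)` then gives the treelike analogue of diffusion between capillary clusters.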

  10. Top-down proteomics reveals a unique protein S-thiolation switch in Salmonella Typhimurium in response to infection-like conditions

    Energy Technology Data Exchange (ETDEWEB)

    Ansong, Charles; Wu, Si; Meng, Da; Liu, Xiaowen; Brewer, Heather M.; Kaiser, Brooke LD; Nakayasu, Ernesto S.; Cort, John R.; Pevzner, Pavel A.; Smith, Richard D.; Heffron, Fred; Adkins, Joshua N.; Pasa-Tolic, Ljiljana

    2013-06-18

Characterization of the mature protein complement in cells is crucial for a better understanding of cellular processes on a systems-wide scale. Bottom-up proteomic approaches often lead to loss of critical information about an endogenous protein’s actual state due to post-translational modifications (PTMs) and other processes. Top-down approaches that involve analysis of the intact protein can address this concern but present significant analytical challenges related to the separation quality needed, measurement sensitivity, and speed, which result in low throughput and limited coverage. Here we used single-dimension ultra-high-pressure liquid chromatography mass spectrometry to investigate the comprehensive ‘intact’ proteome of the Gram-negative bacterial pathogen Salmonella Typhimurium. Top-down proteomics analysis revealed 563 unique proteins, including 1665 proteoforms generated by PTMs, representing the largest microbial top-down dataset reported to date. Our analysis not only confirmed several previously recognized aspects of Salmonella biology and bacterial PTMs in general, but also revealed several novel biological insights. Of particular interest was the differential utilization of the protein S-thiolation forms S-glutathionylation and S-cysteinylation in response to infection-like conditions versus basal conditions, which was corroborated by changes in the corresponding biosynthetic pathways. This differential utilization highlights underlying metabolic mechanisms that modulate changes in cellular signaling, and represents, to our knowledge, the first report of S-cysteinylation in Gram-negative bacteria. The demonstrated utility of our simple proteome-wide intact protein level measurement strategy for gaining biological insight should promote broader adoption and applications of top-down proteomics approaches.

  11. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Youngsuk, E-mail: ysbang00@fnctech.com [FNC Technology, Co. Ltd., Yongin-si (Korea, Republic of); Abdel-Khalik, Hany S., E-mail: abdelkhalik@purdue.edu [Purdue University, West Lafayette, IN (United States); Jessee, Matthew A., E-mail: jesseema@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Mertyurek, Ugur, E-mail: mertyurek@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2015-12-15

Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
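The non-intrusive, random-sampling idea can be sketched generically: run the full model on random inputs, collect the output snapshots, and keep the dominant singular directions. This is a standard snapshot-SVD construction, not the paper's exact algorithm:

```python
import numpy as np

def reduced_basis(model, n_inputs, n_samples, rank, seed=0):
    """Non-intrusive reduced order modeling sketch: execute the full
    model on random input samples and keep the dominant left singular
    vectors of the snapshot matrix as a low-dimensional subspace."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_inputs, n_samples))
    Y = np.column_stack([model(x) for x in X.T])   # snapshot matrix
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    return U[:, :rank]   # basis capturing the dominant variations
```

Projecting any new output onto the basis and back reconstructs it whenever the model's effective rank does not exceed `rank`, which is what makes repeated reduced executions cheap.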

  12. Hybrid reduced order modeling for assembly calculations

    International Nuclear Information System (INIS)

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur

    2015-01-01

Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.

  13. Scaling analysis for a Savannah River reactor scaled model integral system

    International Nuclear Information System (INIS)

    Boucher, T.J.; Larson, T.K.; McCreery, G.E.; Anderson, J.L.

    1990-11-01

The Savannah River Laboratory has requested that the Idaho National Engineering Laboratory perform an analysis to help define, examine, and assess potential concepts for the design of a scaled integral hydraulics test facility representative of the current Savannah River Plant reactor design. In this report the thermal-hydraulic phenomena of importance to reactor safety during the design basis loss-of-coolant accident (based on the knowledge and experience of the authors and the results of the joint INEL/TPG/SRL phenomena identification and ranking effort) were examined and identified. Established scaling methodologies were used to develop potential concepts for integral hydraulic testing facilities. Analysis is conducted to examine the scaling of various phenomena in each of the selected concepts. Results generally support that a one-fourth (1/4) linear scale visual facility capable of operating at pressures up to 350 kPa (51 psia) and temperatures up to 330 K (134 °F) will scale most hydraulic phenomena reasonably well. However, additional research will be necessary to determine the most appropriate method of simulating several of the reactor components, since the scaling methodology allows for several approaches which may only be assessed via appropriate research. 34 refs., 20 figs., 14 tabs
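For orientation, here is how operating conditions for a reduced-scale facility follow from one common similarity criterion. Froude similarity is assumed purely for illustration; the report derives its own scaling relations, which may differ:

```python
import math

def froude_scaled(length_ratio, v_proto, t_proto, q_proto):
    """Scale factors for a reduced-scale hydraulic facility under
    Froude similarity (an assumed criterion, for illustration only):
    velocity and time scale as sqrt(L), volumetric flow as L**2.5."""
    v_model = v_proto * math.sqrt(length_ratio)
    t_model = t_proto * math.sqrt(length_ratio)
    q_model = q_proto * length_ratio ** 2.5
    return v_model, t_model, q_model
```

For a one-fourth linear scale, model velocities and transient times are halved relative to the prototype, while volumetric flows shrink by a factor of 32.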

  14. Land surface evapotranspiration modelling at the regional scale

    Science.gov (United States)

    Raffelli, Giulia; Ferraris, Stefano; Canone, Davide; Previati, Maurizio; Gisolo, Davide; Provenzale, Antonello

    2017-04-01

Climate change has relevant implications for the environment, water resources and human life in general. The observed increase in mean air temperature, in addition to a more frequent occurrence of extreme events such as droughts, may have a severe effect on the hydrological cycle. Besides climate change, land use changes are assumed to be another relevant component of global change in terms of impacts on terrestrial ecosystems: socio-economic changes have led to conversions between meadows and pastures and in most cases to a complete abandonment of grasslands. Water is subject to different physical processes, among which evapotranspiration (ET) is one of the most significant. In fact, ET plays a key role in estimating crop growth, water demand and irrigation water management, so estimating values of ET can be crucial for water resource planning, irrigation requirements and agricultural production. Potential evapotranspiration (PET) is the amount of evaporation that occurs when a sufficient water source is available. It can be estimated knowing only temperatures (mean, maximum and minimum) and solar radiation. Actual evapotranspiration (AET) is instead the real quantity of water consumed by soil and vegetation; it is obtained as a fraction of PET. The aim of this work was to apply a simplified hydrological model to calculate AET for the province of Turin (Italy) in order to assess the water content and estimate the groundwater recharge at a regional scale. The soil is seen as a bucket (FAO56 model, Allen et al., 1998) made of different layers, which interact with water and vegetation. The water balance is given by precipitation (both rain and snow) and dew as positive inputs, while AET, runoff and drainage represent the rates of water escaping from the soil. The difference between inputs and outputs is the water stock. Model data inputs are: soil characteristics (percentage of clay, silt, sand, rocks and organic matter); soil depth; the wilting point (i.e.
the

  15. Looking for a relevant potential evapotranspiration model at the watershed scale

    Science.gov (United States)

    Oudin, L.; Hervieu, F.; Michel, C.; Perrin, C.; Anctil, F.; Andréassian, V.

    2003-04-01

In this paper, we try to identify the most relevant approach to calculate Potential Evapotranspiration (PET) for use in a daily watershed model, to answer the following question: "how can we use commonly available atmospheric parameters to represent the evaporative demand at the catchment scale?". Hydrologists generally see the Penman model as the ideal model, owing to its good agreement with lysimeter measurements and its physically based formulation. However, in real-world engineering situations, where meteorological stations are scarce, hydrologists are often constrained to use other PET formulae with fewer data requirements and/or long-term averages of PET values (the rationale being that PET is an inherently conservative variable). We chose to test 28 commonly used PET models coupled with 4 different daily watershed models. For each test, we compare both PET input options: actual data and long-term average data. The comparison is made in terms of streamflow simulation efficiency, over a large sample of 308 watersheds. The watersheds are located in France, Australia and the United States of America and represent varied climates. Strikingly, we find no systematic improvement of the watershed model efficiencies when using actual PET series instead of long-term averages. This suggests either that watershed models may not conveniently use the climatic information contained in PET values or that the formulae are only awkward indicators of the real PET which watershed models need.
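The abstract does not name the efficiency criterion; the Nash-Sutcliffe efficiency is a common choice for streamflow simulation and serves here as an assumed illustration of how the two PET input options would be compared:

```python
def nash_sutcliffe(obs, sim):
    """Streamflow simulation efficiency: 1 minus the ratio of the model
    error variance to the variance of the observations. A value of 1 is
    a perfect fit; 0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var
```

Computing this score once with actual PET series and once with long-term averages, over each watershed, is the kind of paired comparison the study reports.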

  16. X and Y scaling

    International Nuclear Information System (INIS)

    West, G.B.

    1988-01-01

Although much of the intuition for interpreting the high energy data as scattering from structureless constituents came from nuclear physics (and to a lesser extent atomic physics), virtually no data existed for nuclear targets in the non-relativistic regime until relatively recently. It is therefore not so surprising that, in spite of the fact that the basic nuclear physics has been well understood for a very long time, the corresponding non-relativistic scaling law was not written down until after the relativistic one, relevant to particle physics, had been explored. Of course, to the extent that these scaling laws simply reflect quasi-elastic scattering of the probe from the constituents, they contain little new physics once the nature of the constituents is known and understood. On the other hand, deviations from scaling represent corrections to the impulse approximation and can reflect important dynamical and coherent features of the system. Furthermore, as will be discussed in detail here, the scaling curve itself represents the single particle momentum distribution of constituents inside the target. It is therefore prudent to plot the data in terms of a suitable scaling variable since this immediately focuses attention on the dominant physics. Extraneous physics, such as Rutherford scattering in the case of electrons, or magnetic scattering in the case of thermal neutrons, is factored out, and the use of a scaling variable (such as y) automatically takes into account the fact that the target is a bound state of well-defined constituents. In this talk I shall concentrate almost entirely on non-relativistic systems. Although the formalism applies equally well to both electron scattering from nuclei and thermal neutron scattering from liquids, I shall, because of my background, usually be thinking of the former. On the other hand I shall completely ignore spin considerations so, ironically, the results actually apply more to the latter case!
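For the non-relativistic case, the standard connection between the scaling function and the single-particle momentum distribution can be written as follows (textbook y-scaling relations for an isotropic momentum distribution, included for orientation rather than taken from the talk itself):

```latex
% F(y) is the scaling function extracted from the measured response after
% the probe-constituent cross section (Rutherford or magnetic) is divided out.
F(y) = 2\pi \int_{|y|}^{\infty} n(k)\, k \, \mathrm{d}k ,
\qquad
n(k) = -\left.\frac{1}{2\pi y}\,\frac{\mathrm{d}F(y)}{\mathrm{d}y}\right|_{y=k}
```

The second relation makes explicit the statement above that the scaling curve itself encodes the momentum distribution of the constituents.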

  17. The autistic phenotype in Down syndrome: differences in adaptive behaviour versus Down syndrome alone and autistic disorder alone.

    Science.gov (United States)

    Dressler, Anastasia; Perelli, Valentina; Bozza, Margherita; Bargagna, Stefania

    2011-01-01

The autistic phenotype in Down syndrome (DS) is marked by a characteristic pattern of stereotypies, anxiety and social withdrawal. Our aim was to study adaptive behaviour in DS with and without autistic comorbidity using the Vineland Adaptive Behaviour Scales (VABS), the Childhood Autism Rating Scales (CARS) and the DSM-IV-TR criteria. We assessed 24 individuals and established three groups: Down syndrome (DS), DS and autistic disorder (DS-AD), and autistic disorder (AD). The DS and DS-AD groups showed statistically significant shared strengths on the VABS (in receptive and domestic skills). The DS and DS-AD subjects also showed similar strengths on the CARS (in imitation and relating), differing significantly from the AD group. The profile of adaptive functioning and symptoms in DS-AD seemed to be more similar to that found in DS than to the profile emerging in AD. We suggest that the comorbidity of autistic symptoms in DS hampered the acquisition of adaptive skills more than did the presence of DS alone.

  18. Phosphotyrosine-based-phosphoproteomics scaled-down to biopsy level for analysis of individual tumor biology and treatment selection.

    Science.gov (United States)

    Labots, Mariette; van der Mijn, Johannes C; Beekhof, Robin; Piersma, Sander R; de Goeij-de Haas, Richard R; Pham, Thang V; Knol, Jaco C; Dekker, Henk; van Grieken, Nicole C T; Verheul, Henk M W; Jiménez, Connie R

    2017-06-06

Mass spectrometry-based phosphoproteomics of cancer cell and tissue lysates provides insight into aberrantly activated signaling pathways and potential drug targets. For improved understanding of individual patient's tumor biology and to allow selection of tyrosine kinase inhibitors in individual patients, phosphoproteomics of small clinical samples should be feasible and reproducible. We aimed to scale down a pTyr-phosphopeptide enrichment protocol to biopsy-level protein input and assess reproducibility and applicability to tumor needle biopsies. To this end, phosphopeptide immunoprecipitation using anti-phosphotyrosine beads was performed using 10, 5 and 1 mg protein input from lysates of colorectal cancer (CRC) cell line HCT116. Multiple needle biopsies from 7 human CRC resection specimens were analyzed at the 1 mg level. The total number of phosphopeptides captured and detected by LC-MS/MS ranged from 681 at 10 mg input to 471 at 1 mg HCT116 protein. ID-reproducibility ranged from 60.5% at 10 mg to 43.9% at 1 mg. Per 1 mg-level biopsy sample, >200 phosphopeptides were identified with 57% ID-reproducibility between paired tumor biopsies. Unsupervised analysis clustered biopsies from individual patients together and revealed known and potential therapeutic targets. This study demonstrates the feasibility of label-free pTyr-phosphoproteomics at the tumor biopsy level based on reproducible analyses using 1 mg of protein input. The considerable number of identified phosphopeptides at this level is attributed to an effective down-scaled immuno-affinity protocol as well as to the application of ID propagation in the data processing and analysis steps. Unsupervised cluster analysis reveals patient-specific profiles. Together, these findings pave the way for clinical trials in which pTyr-phosphoproteomics will be performed on pre- and on-treatment biopsies. Such studies will improve our understanding of individual tumor biology and may enable future p

  19. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    Science.gov (United States)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited area model simulation. The limited area model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high resolution two-layer quasi-geostrophic model driven by a low resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the first set of simulations. In the two sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
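The divergence-rate measurement can be sketched on any one-dimensional map; the renormalization scheme below and the logistic-map test are generic illustrations, not the authors' quasi-geostrophic setup:

```python
import math

def lyapunov_estimate(step, x0, eps=1e-8, n_steps=2000):
    """Estimate the largest Lyapunov exponent as the mean logarithmic
    rate of divergence of two initially close trajectories, with the
    separation renormalized back to eps after every step."""
    x, y = x0, x0 + eps
    total = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = abs(y - x)
        total += math.log(d / eps)          # one-step stretching rate
        y = x + (eps if y >= x else -eps)   # renormalize the separation
    return total / n_steps
```

For the logistic map at r = 4 the exact exponent is ln 2 ≈ 0.69, which the estimate approaches as the number of steps grows; a positive value signals the error growth described in the abstract.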

  20. A strategy for representing the effects of convective momentum transport in multiscale models: Evaluation using a new superparameterized version of the Weather Research and Forecast model (SP-WRF)

    Science.gov (United States)

    Tulich, S. N.

    2015-06-01

This paper describes a general method for the treatment of convective momentum transport (CMT) in large-scale dynamical solvers that use a cyclic, two-dimensional (2-D) cloud-resolving model (CRM) as a "superparameterization" of convective-system-scale processes. The approach is similar in concept to traditional parameterizations of CMT, but with the distinction that both the scalar transport and the diagnostic pressure gradient force are calculated using information provided by the 2-D CRM. No assumptions are therefore made concerning the role of convection-induced pressure gradient forces in producing up- or down-gradient CMT. The proposed method is evaluated using a new superparameterized version of the Weather Research and Forecast model (SP-WRF) that is described herein for the first time. Results show that the net effect of the formulation is to modestly reduce the overall strength of the large-scale circulation, via "cumulus friction." This statement holds true for idealized simulations of two types of mesoscale convective systems, a squall line and a tropical cyclone, in addition to real-world global simulations of seasonal (1 June to 31 August) climate. In the case of the latter, inclusion of the formulation is found to improve the depiction of key synoptic modes of tropical wave variability, in addition to some aspects of the simulated time-mean climate. The choice of CRM orientation is also found to have an important effect on the simulated time-mean climate, apparently due to changes in the explicit representation of widespread shallow convective regions.

  1. Blueprints of the no-scale multiverse at the LHC

    International Nuclear Information System (INIS)

    Li Tianjun; Maxin, James A.; Nanopoulos, Dimitri V.; Walker, Joel W.

    2011-01-01

We present a contemporary perspective on the String Landscape and the Multiverse of plausible string, M- and F-theory vacua. In contrast to traditional statistical classifications and capitulation to the anthropic principle, we seek only to demonstrate the existence of a nonzero probability for a universe matching our own observed physics within the solution ensemble. We argue for the importance of No-Scale Supergravity as an essential common underpinning for the spontaneous emergence of a cosmologically flat universe from the quantum 'nothingness'. Concretely, we continue to probe the phenomenology of a specific model which is testable at the LHC and Tevatron. Dubbed No-Scale F-SU(5), it represents the intersection of the Flipped SU(5) Grand Unified Theory (GUT) with extra TeV-Scale vectorlike multiplets derived out of F-theory, and the dynamics of No-Scale Supergravity, which in turn imply a very restricted set of high-energy boundary conditions. By secondarily minimizing the minimum of the scalar Higgs potential, we dynamically determine the ratio tan β ≅ 15-20 of up- to down-type Higgs vacuum expectation values (VEVs), the universal gaugino boundary mass M1/2 ≅ 450 GeV, and, consequently, also the total magnitude of the GUT-scale Higgs VEVs, while constraining the low-energy standard model gauge couplings. In particular, this local minimum minimorum lies within the previously described "golden strip," satisfying all current experimental constraints. We emphasize, however, that the overarching goal is not to establish why our own particular universe possesses any number of specific characteristics, but rather to tease out what generic principles might govern the superset of all possible universes.

  2. Accounting for Unresolved Spatial Variability in Large Scale Models: Development and Evaluation of a Statistical Cloud Parameterization with Prognostic Higher Order Moments

    Energy Technology Data Exchange (ETDEWEB)

    Robert Pincus

    2011-05-17

This project focused on the variability of clouds that is present across a wide range of scales ranging from the synoptic to the millimeter. In particular, there is substantial variability in cloud properties at scales smaller than the grid spacing of models used to make climate projections (GCMs) and weather forecasts. These models represent clouds and other small-scale processes with parameterizations that describe how those processes respond to and feed back on the large-scale state of the atmosphere.

  3. [Parenting Stress in Mothers of Children with Down Syndrome in Preschool Age].

    Science.gov (United States)

    Sarimski, Klaus

    2017-11-01

Research suggests that parenting stress is elevated in parents of children with intellectual disabilities. However, data are inconsistent as to whether this holds true for parents of children with Down syndrome. As part of the Heidelberg Down syndrome study, 52 mothers of children with Down syndrome (mean age: 5 years) completed the German adaptation of the Parenting Stress Index. The results show significantly elevated stress scores on scales measuring demanding and less acceptable behavior of the children (child characteristics). Scores on scales measuring parent characteristics do not differ significantly from the norms. Global stress scores are associated with the degree of behavioral problems (SDQ) and adaptive competence (VABS-II). A regression analysis points to optimism as a dispositional trait of the mother that makes a significant contribution to the prediction of parenting stress scores. The implications for early intervention are discussed.

  4. A Two-Factor Model Better Explains Heterogeneity in Negative Symptoms: Evidence from the Positive and Negative Syndrome Scale.

    Science.gov (United States)

    Jang, Seon-Kyeong; Choi, Hye-Im; Park, Soohyun; Jaekal, Eunju; Lee, Ga-Young; Cho, Young Il; Choi, Kee-Hong

    2016-01-01

    Acknowledging separable factors underlying negative symptoms may lead to better understanding and treatment of negative symptoms in individuals with schizophrenia. The current study aimed to test whether the negative symptoms factor (NSF) of the Positive and Negative Syndrome Scale (PANSS) would be better represented by expressive and experiential deficit factors, rather than by a single factor model, using confirmatory factor analysis (CFA). Two hundred and twenty individuals with schizophrenia spectrum disorders completed the PANSS; subsamples additionally completed the Brief Negative Symptom Scale (BNSS) and the Motivation and Pleasure Scale-Self-Report (MAP-SR). CFA results indicated that the two-factor model fit the data better than the one-factor model; however, latent variables were closely correlated. The two-factor model's fit was significantly improved by accounting for correlated residuals between N2 (emotional withdrawal) and N6 (lack of spontaneity and flow of conversation), and between N4 (passive social withdrawal) and G16 (active social avoidance), possibly reflecting common method variance. The two NSF factors exhibited differential patterns of correlation with subdomains of the BNSS and MAP-SR. These results suggest that the PANSS NSF would be better represented by a two-factor model than by a single-factor one, and support the two-factor model's adequate criterion-related validity. Common method variance among several items may be a potential source of measurement error under a two-factor model of the PANSS NSF.

  5. Top-down beta rhythms support selective attention via interlaminar interaction: a model.

    Directory of Open Access Journals (Sweden)

    Jung H Lee

Cortical rhythms have been thought to play crucial roles in our cognitive abilities. Rhythmic activity in the beta frequency band, around 20 Hz, has been reported in recent studies that focused on neural correlates of attention, indicating that top-down beta rhythms, generated in higher cognitive areas and delivered to earlier sensory areas, can support attentional gain modulation. To elucidate functional roles of beta rhythms and underlying mechanisms, we built a computational model of sensory cortical areas. Our simulation results show that top-down beta rhythms can activate ascending synaptic projections from L5 to L4 and L2/3, responsible for biased competition in superficial layers. In the simulation, slow-inhibitory interneurons are shown to resonate to the 20 Hz input and modulate the activity in superficial layers in an attention-related manner. The predicted critical roles of these cells in attentional gain provide a potential mechanism by which cholinergic drive can support selective attention.

  6. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases.

  7. Replication of Non-Trivial Directional Motion in Multi-Scales Observed by the Runs Test

    Science.gov (United States)

    Yura, Yoshihiro; Ohnishi, Takaaki; Yamada, Kenta; Takayasu, Hideki; Takayasu, Misako

Non-trivial autocorrelation in the up-down statistics of financial market price fluctuations is revealed by a multi-scale runs test (Wald-Wolfowitz test). We apply two models, a stochastic price model and a dealer model, to understand this property. In both approaches we successfully reproduce the non-stationary directional price motions consistent with the runs test by tuning parameters in the models. We find that two types of dealers exist in the markets: a short-time-scale trend-follower and an extended-time-scale contrarian, who are active in different time periods.
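At a single scale, the runs test used here reduces to counting runs of consecutive same-sign price moves and comparing the count with its expectation under randomness. A minimal sketch (the price series is illustrative, not the paper's data):

```python
import math

def runs_test(signs):
    """Wald-Wolfowitz runs test on a sequence of +1/-1 price moves.
    Returns (number_of_runs, z_score). A significantly negative z means
    fewer runs than chance (trending / trend-following behaviour); a
    positive z means more runs (mean-reverting / contrarian behaviour)."""
    n1 = signs.count(1)
    n2 = signs.count(-1)
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n = n1 + n2
    mean = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1.0))
    return runs, (runs - mean) / math.sqrt(var)

# Up-down signs: sign of each successive price change
prices = [100, 101, 102, 103, 102, 101, 100, 101, 102, 103, 104]
signs = [1 if b > a else -1 for a, b in zip(prices, prices[1:])]
runs, z = runs_test(signs)  # long same-sign stretches -> few runs, z < 0
```

Applying the same statistic to moves aggregated over different time horizons gives the multi-scale version referred to in the abstract.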

  8. A constitutive model for representing coupled creep, fracture, and healing in rock salt

    International Nuclear Information System (INIS)

    Chan, K.S.; Bodner, S.R.; Munson, D.E.; Fossum, A.F.

    1996-01-01

The development of a constitutive model for representing inelastic flow due to coupled creep, damage, and healing in rock salt is presented in this paper. This model, referred to as the Multimechanism Deformation Coupled Fracture model, has been formulated by considering individual mechanisms that include dislocation creep, shear damage, tensile damage, and damage healing. Applications of the model to representing the inelastic flow and fracture behavior of WIPP salt subjected to creep, quasi-static loading, and damage healing conditions are illustrated with comparisons of model calculations against experimental creep curves, stress-strain curves, strain recovery curves, time-to-rupture data, and fracture mechanism maps.

  9. Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    G. Zyvoloski

    2003-01-01

The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure AP-SIII.10Q, "Models". This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the "Site-Scale Saturated Zone Transport" model report, MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County well lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, a process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca Mountain using FEHM V 2.20 are being

  10. Using Top-down and Bottom-up Costing Approaches in LMICs: The Case for Using Both to Assess the Incremental Costs of New Technologies at Scale.

    Science.gov (United States)

    Cunnama, Lucy; Sinanovic, Edina; Ramma, Lebogang; Foster, Nicola; Berrie, Leigh; Stevens, Wendy; Molapo, Sebaka; Marokane, Puleng; McCarthy, Kerrigan; Churchyard, Gavin; Vassall, Anna

    2016-02-01

Estimating the incremental costs of scaling-up novel technologies in low-income and middle-income countries is a methodologically challenging and substantial empirical undertaking, in the absence of routine cost data collection. We demonstrate a best practice pragmatic approach to estimate the incremental costs of new technologies in low-income and middle-income countries, using the example of costing the scale-up of Xpert Mycobacterium tuberculosis (MTB)/resistance to rifampicin (RIF) in South Africa. We estimate costs, by applying two distinct approaches of bottom-up and top-down costing, together with an assessment of processes and capacity. The unit costs measured using the different methods of bottom-up and top-down costing, respectively, are $US16.9 and $US33.5 for Xpert MTB/RIF, and $US6.3 and $US8.5 for microscopy. The incremental cost of Xpert MTB/RIF is estimated to be between $US14.7 and $US17.7. While the average cost of Xpert MTB/RIF was higher than previous studies using standard methods, the incremental cost of Xpert MTB/RIF was found to be lower. Costs estimates are highly dependent on the method used, so an approach, which clearly identifies resource-use data collected from a bottom-up or top-down perspective, together with capacity measurement, is recommended as a pragmatic approach to capture true incremental cost where routine cost data are scarce. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.
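The gap between the two approaches comes from overheads and unused capacity, which top-down allocation spreads over every test but ingredient-level measurement misses. A schematic calculation (all figures hypothetical, not the study's data):

```python
def bottom_up_unit_cost(ingredients):
    """Sum the resources actually consumed per test (staff time,
    reagents, equipment depreciation), each already costed per test."""
    return sum(ingredients.values())

def top_down_unit_cost(total_expenditure, tests_performed):
    """Divide total programme expenditure by output volume, so overheads
    and idle capacity are allocated across every test."""
    return total_expenditure / tests_performed

# Hypothetical diagnostic assay
ingredients = {"cartridge": 10.0, "staff": 3.0, "equipment": 2.0}
bu = bottom_up_unit_cost(ingredients)       # per-test resources only
td = top_down_unit_cost(250_000.0, 10_000)  # includes overheads
capacity_gap = td - bu                      # overheads + idle capacity
```

Reporting both numbers, plus the capacity gap between them, is the essence of the combined approach the authors recommend.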

  11. Spastic quadriplegia in Down syndrome with congenital duodenal stenosis/atresia.

    Science.gov (United States)

    Kurosawa, Kenji; Enomoto, Keisuke; Tominaga, Makiko; Furuya, Noritaka; Sameshima, Kiyoko; Iai, Mizue; Take, Hiroshi; Shinkai, Masato; Ishikawa, Hiroshi; Yamanaka, Michiko; Matsui, Kiyoshi; Masuno, Mitsuo

    2012-06-01

    Down syndrome is an autosomal chromosome disorder, characterized by intellectual disability and muscle hypotonia. Muscle hypotonia is observed from neonates to adulthood in Down syndrome patients, but muscle hypertonicity is extremely unusual in this syndrome. During a study period of nine years, we found three patients with severe spastic quadriplegia among 20 cases with Down syndrome and congenital duodenal stenosis/atresia (3/20). However, we could find no patient with spastic quadriplegia among 644 cases with Down syndrome without congenital duodenal stenosis/atresia during the same period (0/644, P quadriplegia among 17 patients with congenital duodenal stenosis/atresia without Down syndrome admitted during the same period to use as a control group (0/17, P quadriplegia in patients with Down syndrome. Long-term survival is improving, and the large majority of people with Down syndrome are expected to live well into adult life. Management and further study for the various problems, representing a low prevalence but serious and specific to patients with Down syndrome, are required to improve their quality of life. © 2012 The Authors. Congenital Anomalies © 2012 Japanese Teratology Society.

  12. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interaction processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  13. Modelling of fractured reservoirs. Case of multi-scale media; Modelisation des reservoirs fractures. Cas des milieux multi-echelles

    Energy Technology Data Exchange (ETDEWEB)

    Henn, N.

    2000-12-13

    Some of the most productive oil and gas reservoirs are found in formations crossed by multi-scale fractures/faults. Among them, conductive faults may closely control reservoir performance. However, their modelling encounters numerical and physical difficulties linked with (a) the necessity to keep an explicit representation of faults through small-size grid blocks, (b) the modelling of multiphase flow exchanges between the fault and the neighbouring medium. In this thesis, we propose a physically-representative and numerically efficient modelling approach in order to incorporate sub-vertical conductive faults in single and dual-porosity simulators. To validate our approach and demonstrate its efficiency, simulation results of multiphase displacements in representative field sector models are presented. (author)

  14. Investigating host-pathogen behavior and their interaction using genome-scale metabolic network models.

    Science.gov (United States)

    Sadhukhan, Priyanka P; Raghunathan, Anu

    2014-01-01

Genome-scale metabolic modeling methods represent one way to compute whole-cell function starting from the genome sequence of an organism, and contribute towards understanding and predicting the genotype-phenotype relationship. About 80 models spanning all the kingdoms of life from archaea to eukaryotes have been built to date and used to interrogate cell phenotype under varying conditions. These models have been used not only to understand the flux distribution in evolutionarily conserved pathways like glycolysis and the Krebs cycle but also in applications ranging from value-added product formation in Escherichia coli to predicting inborn errors of Homo sapiens metabolism. This chapter describes a protocol that delineates the process of genome-scale metabolic modeling for analysing host-pathogen behavior and interaction using flux balance analysis (FBA). The steps discussed in the process include (1) reconstruction of a metabolic network from the genome sequence, (2) its representation in a precise mathematical framework, (3) its translation to a model, and (4) the analysis using linear algebra and optimization. The methods for biological interpretation of computed cell phenotypes in the context of individual host and pathogen models and their integration are also discussed.
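Steps (2)-(4) reduce to a linear program: maximize a biomass flux subject to the steady-state mass balance S·v = 0 and flux bounds. For a linear pathway the optimum can be read off without a solver, as in this toy sketch (hypothetical metabolites and reactions, not a real reconstruction):

```python
def fba_linear_chain(upper_bounds):
    """Toy flux balance analysis for a linear pathway
    uptake -> conversion -> biomass. Steady state (S·v = 0) forces each
    internal metabolite's production to equal its consumption, so all
    fluxes along the chain are equal; maximizing the biomass flux then
    amounts to taking the tightest upper bound in the chain."""
    v_opt = min(upper_bounds)
    return [v_opt] * len(upper_bounds)

# Stoichiometric matrix S: rows = internal metabolites A, B;
# columns = reactions (uptake of A, conversion A -> B, biomass drain of B)
S = [
    [1, -1,  0],   # A: produced by uptake, consumed by conversion
    [0,  1, -1],   # B: produced by conversion, consumed by biomass drain
]
upper = [10.0, 8.0, 100.0]   # flux upper bound for each reaction
v = fba_linear_chain(upper)  # optimal flux distribution

# Verify the steady-state constraint S·v = 0 for every metabolite
residual = [sum(S[i][j] * v[j] for j in range(3)) for i in range(2)]
```

Real reconstructions have thousands of reactions and require a general LP solver (e.g. as wrapped by FBA toolkits), but the constraint structure is exactly the one shown here.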

  15. Remote sensing based evapotranspiration and runoff modeling of agricultural, forest and urban flux sites in Denmark: From field to macro-scale

    DEFF Research Database (Denmark)

    Bøgh, E.; Poulsen, R.N.; Butts, M.

    2009-01-01

    representing agricultural, forest and urban land surfaces in physically based hydrological modeling makes it possible to reproduce much of the observed variability (48–73%) in stream flow (Q − Qb) when data and modeling is applied at an effective spatial resolution capable of representing land surface...... variability in eddy covariance latent heat fluxes. The “effective” spatial resolution needed to adopt local-scale model parameters for spatial-deterministic hydrological modeling was assessed using a high-spatial resolution (30 m) variogram analysis of the NDVI. The use of the NDVI variogram to evaluate land...

  16. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    International Nuclear Information System (INIS)

    Y.S. Wu

    2005-01-01

This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration" (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas

  17. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration" (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  18. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Directory of Open Access Journals (Sweden)

    Fanni Bányai

Despite social media use being one of the most popular activities among adolescents, prevalence estimates of problematic social media use among teenage samples are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, and reported low self-esteem, a high level of depression symptoms, and elevated social media use. Results also demonstrated that the BSMAS has appropriate psychometric properties. It is concluded that adolescents at risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  19. The use of TOUGH2 for the LBL/USGS 3-dimensional site-scale model of Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

Bodvarsson, G.; Chen, G.; Haukwa, C. [Lawrence Berkeley Laboratory, CA (United States)]; and others

    1995-03-01

The three-dimensional site-scale numerical model of the unsaturated zone at Yucca Mountain is under continuous development and calibration through a collaborative effort between Lawrence Berkeley Laboratory (LBL) and the United States Geological Survey (USGS). The site-scale model covers an area of about 30 km² and is bounded by major fault zones to the west (Solitario Canyon Fault), east (Bow Ridge Fault) and perhaps to the north by an unconfirmed fault (Yucca Wash Fault). The model consists of about 5,000 grid blocks (elements) with nearly 20,000 connections between them; the grid was designed to represent the most prevalent geological and hydro-geological features of the site including major faults, and the layering and bedding of the hydro-geological units. Further information about the three-dimensional site-scale model is given by Wittwer et al. and Bodvarsson et al.

  20. Prey vulnerability limits top-down control and alters reciprocal feedbacks in a subsidized model food web.

    Directory of Open Access Journals (Sweden)

    William I Atlas

Resource subsidies increase the productivity of recipient food webs and can affect ecosystem dynamics. Subsidies of prey often support elevated predator biomass, which may intensify top-down control and reduce the flow of reciprocal subsidies into adjacent ecosystems. However, top-down control in subsidized food webs may be limited if primary consumers possess morphological or behavioral traits that limit vulnerability to predation. In forested streams, terrestrial prey support high predator biomass, creating the potential for strong top-down control; however, armored primary consumers often dominate the invertebrate assemblage. Using empirically based simulation models, we tested the response of stream food webs to variations in subsidy magnitude, prey vulnerability, and the presence of two top predators. While terrestrial prey inputs increased predator biomass (+12%), the presence of armored primary consumers inhibited top-down control, and diverted most aquatic energy (∼75%) into the riparian forest through aquatic insect emergence. Food webs without armored invertebrates experienced strong trophic cascades, resulting in higher algal (∼50%) and detrital (∼1600%) biomass, and reduced insect emergence (-90%). These results suggest prey vulnerability can mediate food web responses to subsidies, and that top-down control can be arrested even when predator-invulnerable consumers are uncommon (20%), regardless of the level of subsidy.

  1. Improvement of blow down model for LEAP code

    International Nuclear Information System (INIS)

    Itooka, Satoshi; Fujimata, Kazuhiro

    2003-03-01

At the Japan Nuclear Cycle Development Institute, the analysis method for overheated tube rupture was improved for sodium-water reaction accidents in the steam generator of a fast breeder reactor, and the heat transfer conditions in the tube were evaluated based on studies of critical heat flux (CHF) and post-CHF heat transfer correlations in light water reactors. In this study, the blow down model of the LEAP code was improved taking into consideration the above-mentioned evaluation of heat transfer conditions. The improvements to the LEAP code were the following items: the addition of critical heat flux (CHF) correlations (the formula of Katto and the formula of Tong); the addition of post-CHF heat transfer correlations (the formula of Condie-Bengston IV and the formula of Groeneveld 5.9); the extension of the physical properties of water and steam to the critical conditions of water; the expansion of the total number of sections and the improvement of the input form; and the addition of a function to control the valve setting by a PID control model. Calculations and verification were performed with the improved LEAP code in order to confirm the code functions. (author)
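The valve-setting control mentioned in the abstract follows the standard discrete PID form u = kp·e + ki·∫e dt + kd·de/dt. A generic sketch (gains and plant are illustrative, not the LEAP implementation):

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        # no derivative kick on the very first sample
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple integrating plant (valve position x, dx/dt = u)
# toward a setpoint of 1.0
pid = PID(kp=1.0, ki=0.2, kd=0.05, dt=0.1)
x = 0.0
for _ in range(200):
    x += pid.step(1.0, x) * 0.1
```

The integral term removes steady-state offset and the derivative term damps the approach; in a code like LEAP the "measured" quantity would be the controlled process variable and the output the valve position command.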

  2. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required by the nature of the materials and by the phenomena involved under irradiation. We will then present the multiple facets of the multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  3. Two scale damage model and related numerical issues for thermo-mechanical high cycle fatigue

    International Nuclear Information System (INIS)

    Desmorat, R.; Kane, A.; Seyedi, M.; Sermage, J.P.

    2007-01-01

    On the idea that fatigue damage is localized at the microscopic scale, a scale smaller than the mesoscopic one of the Representative Volume Element (RVE), a three-dimensional two-scale damage model has been proposed for High Cycle Fatigue applications. It is extended here to aniso-thermal cases and then to thermo-mechanical fatigue. The modeling consists of the micro-mechanical analysis of a weak micro-inclusion, subjected to plasticity and damage, embedded in an elastic meso-element (the RVE of continuum mechanics). The consideration of plasticity coupled with damage equations at the micro-scale, together with the Eshelby-Kroner localization law, makes it possible to compute the value of microscopic damage up to failure for any kind of loading, 1D or 3D, cyclic or random, isothermal or aniso-thermal, mechanical, thermal or thermo-mechanical. A robust numerical scheme is proposed in order to make the computations fast. A post-processor for damage and fatigue (DAMAGE-2005) has been developed; it applies to complex thermo-mechanical loadings. Examples of the representation by the two-scale damage model of physical phenomena related to High Cycle Fatigue are given, such as the mean stress effect and the non-linear accumulation of damage. Examples of thermal and thermo-mechanical fatigue, as well as complex applications to a real-size test structure subjected to thermo-mechanical fatigue, are detailed. (authors)
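
    The Eshelby-Kroner localization law invoked above relates the microscopic stress in the weak inclusion to the mesoscopic fields. In Lemaitre-type two-scale damage models it is commonly written as follows (a reference form; the exact expression and coefficients used by the authors may differ):

$$\sigma^{\mu}_{ij} \;=\; \Sigma_{ij} \;-\; 2G\,(1-\beta)\left(\varepsilon^{p\mu}_{ij} - E^{p}_{ij}\right),
\qquad
\beta \;=\; \frac{2\,(4-5\nu)}{15\,(1-\nu)},$$

    where \(\Sigma_{ij}\) and \(E^{p}_{ij}\) are the mesoscopic stress and plastic strain, \(\sigma^{\mu}_{ij}\) and \(\varepsilon^{p\mu}_{ij}\) their microscopic counterparts, \(G\) the shear modulus and \(\nu\) the Poisson ratio; damage then accumulates at the micro-scale from the micro-plastic strain so obtained.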

  4. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    Science.gov (United States)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

    High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was run at 0.25° grid spacing, and the ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El Niño-Southern Oscillation variability were well simulated compared to standard-resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and tropical cyclones. Associated single-component runs and standard-resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, cost 250 thousand processor-hours per simulated year, and achieved about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."

  5. Aerosol-cloud interactions in a multi-scale modeling framework

    Science.gov (United States)

    Lin, G.; Ghan, S. J.

    2017-12-01

    Atmospheric aerosols play an important role in changing the Earth's climate by scattering/absorbing solar and terrestrial radiation and by interacting with clouds. However, quantification of the aerosol effects remains one of the most uncertain aspects of current and future climate projections. Much of the uncertainty results from the multi-scale nature of aerosol-cloud interactions, which is very challenging to represent in traditional global climate models (GCMs). In contrast, the multi-scale modeling framework (MMF) provides a viable solution, which explicitly resolves clouds and precipitation with a cloud-resolving model (CRM) embedded in each GCM grid column. In the MMF version of the Community Atmosphere Model version 5 (CAM5), aerosol processes are treated with a parameterization called the Explicit Clouds Parameterized Pollutants (ECPP). It uses the cloud/precipitation statistics derived from the CRM to treat the cloud processing of aerosols on the GCM grid. However, this approach resolves clouds on the CRM grid but aerosols on the GCM grid, which is inconsistent with the reality that cloud-aerosol interactions occur on the cloud scale. To overcome this limitation, we propose here a new aerosol treatment in the MMF, Explicit Clouds Explicit Aerosols (ECEP), in which we resolve both clouds and aerosols explicitly on the CRM grid. We first applied the MMF with ECPP to the Accelerated Climate Modeling for Energy (ACME) model to obtain an MMF version of ACME. We also developed an alternative version of ACME-MMF with ECEP. Based on these two models, we have conducted two simulations: one with ECPP and the other with ECEP. Preliminary results showed that the ECEP simulations tend to predict higher aerosol concentrations than the ECPP simulations, because of more efficient vertical transport from the surface to the upper atmosphere but less efficient wet removal. 
We also found that the cloud droplet number concentrations are also different between the

  6. Predictive Maturity of Multi-Scale Simulation Models for Fuel Performance

    International Nuclear Information System (INIS)

    Atamturktur, Sez; Unal, Cetin; Hemez, Francois; Williams, Brian; Tome, Carlos

    2015-01-01

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide decision makers in the allocation of Nuclear Energy's resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this

  7. Predictive Maturity of Multi-Scale Simulation Models for Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Atamturktur, Sez [Clemson Univ., SC (United States); Unal, Cetin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-16

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide decision makers in the allocation of Nuclear Energy’s resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this

  8. Aging rather than aneuploidy affects monoamine neurotransmitters in brain regions of Down syndrome mouse models

    NARCIS (Netherlands)

    Dekker, Alain D; Vermeiren, Yannick; Albac, Christelle; Lana-Elola, Eva; Watson-Scales, Sheona; Gibbins, Dorota; Aerts, Tony; Van Dam, Debby; Fisher, Elizabeth M C; Tybulewicz, Victor L J; Potier, Marie-Claude; De Deyn, Peter P

    Altered concentrations of monoamine neurotransmitters and metabolites have been repeatedly found in people with Down syndrome (DS, trisomy 21). Because of the limited availability of human post-mortem tissue, DS mouse models are of great interest to study these changes and the underlying

  9. Modelling controlled VDE's and ramp-down scenarios in ITER

    Science.gov (United States)

    Lodestro, L. L.; Kolesnikov, R. A.; Meyer, W. H.; Pearlstein, L. D.; Humphreys, D. A.; Walker, M. L.

    2011-10-01

    Following the design reviews of recent years, the ITER poloidal-field coil-set design, including in-vessel coils (VS3), and the divertor configuration have settled down. The divertor and its material composition (the latter has not been finalized) affect the development of fiducial equilibria and scenarios together with the coils through constraints on strike-point locations and limits on the PF and control systems. Previously we have reported on our studies simulating controlled vertical events in ITER with the JCT 2001 controller to which we added a PID VS3 circuit. In this paper we report and compare controlled VDE results using an optimized integrated VS and shape controller in the updated configuration. We also present our recent simulations of alternate ramp-down scenarios, looking at the effects of ramp-down time and shape strategies, using these controllers. This work performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344.

  10. Watershed System Model: The Essentials to Model Complex Human-Nature System at the River Basin Scale

    Science.gov (United States)

    Li, Xin; Cheng, Guodong; Lin, Hui; Cai, Ximing; Fang, Miao; Ge, Yingchun; Hu, Xiaoli; Chen, Min; Li, Weiyue

    2018-03-01

    Watershed system models are urgently needed to understand complex watershed systems and to support integrated river basin management. Early watershed modeling efforts focused on the representation of hydrologic processes, while the next-generation watershed models should represent the coevolution of the water-land-air-plant-human nexus in a watershed and provide capability of decision-making support. We propose a new modeling framework and discuss the know-how approach to incorporate emerging knowledge into integrated models through data exchange interfaces. We argue that the modeling environment is a useful tool to enable effective model integration, as well as create domain-specific models of river basin systems. The grand challenges in developing next-generation watershed system models include but are not limited to providing an overarching framework for linking natural and social sciences, building a scientifically based decision support system, quantifying and controlling uncertainties, and taking advantage of new technologies and new findings in the various disciplines of watershed science. The eventual goal is to build transdisciplinary, scientifically sound, and scale-explicit watershed system models that are to be codesigned by multidisciplinary communities.

  11. Meso-Scale Modeling of Spall in a Heterogeneous Two-Phase Material

    Energy Technology Data Exchange (ETDEWEB)

    Springer, Harry Keo [Univ. of California, Davis, CA (United States)

    2008-07-11

    The influence of the heterogeneous second-phase particle structure and applied loading conditions on the ductile spall response of a model two-phase material was investigated. Quantitative metallography, three-dimensional (3D) meso-scale simulations (MSS), and small-scale spall experiments provided the foundation for this study. Nodular ductile iron (NDI) was selected as the model two-phase material for this study because it contains a large and readily identifiable second-phase particle population. Second-phase particles serve as the primary void nucleation sites in NDI and are, therefore, central to its ductile spall response. A mathematical model was developed for the NDI second-phase volume fraction that accounted for the non-uniform particle size and spacing distributions within the framework of a length-scale dependent Gaussian probability distribution function (PDF). This model was based on novel multiscale sampling measurements. A methodology was also developed for the computer generation of representative particle structures based on their mathematical description, enabling 3D MSS. MSS were used to investigate the effects of second-phase particle volume fraction and particle size, loading conditions, and the physical domain size of the simulation on the ductile spall response of a model two-phase material. MSS results reinforce existing model predictions, where the spall strength metric (SSM) decreases logarithmically with increasing particle volume fraction. While SSM predictions are nearly independent of applied load conditions at lower loading rates, which is consistent with previous studies, loading dependencies are observed at higher loading rates. There is also a logarithmic decrease in SSM with increasing (initial) void size. A model was developed to account for the effects of loading rate, particle size, matrix sound speed, and, in the NDI-specific case, the probabilistic particle volume fraction model. 
Small-scale spall experiments were designed

  12. Tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2007-01-01

    We study how rare events happen in the standard two-node tandem Jackson queue and in a generalization, the so-called slow-down network, see [2]. In the latter model the service rate of the first server depends on the number of jobs in the second queue: the first server slows down if the amount of

  13. Development of a Watershed-Scale Long-Term Hydrologic Impact Assessment Model with the Asymptotic Curve Number Regression Equation

    Directory of Open Access Journals (Sweden)

    Jichul Ryu

    2016-04-01

    In this study, 52 asymptotic Curve Number (CN) regression equations were developed for combinations of representative land covers and hydrologic soil groups. In addition, to overcome the limitations of the original Long-term Hydrologic Impact Assessment (L-THIA) model when it is applied to larger watersheds, a watershed-scale L-THIA Asymptotic CN (ACN) regression equation model (watershed-scale L-THIA ACN model) was developed by integrating the asymptotic CN regressions with various modules for direct runoff, baseflow and channel routing. The watershed-scale L-THIA ACN model was applied to four watersheds in South Korea to evaluate the accuracy of its streamflow predictions. The coefficient of determination (R²) and Nash–Sutcliffe Efficiency (NSE) values for observed versus simulated streamflows over intervals of eight days were greater than 0.6 for all four watersheds. The watershed-scale L-THIA ACN model, including the asymptotic CN regression equation method, can simulate long-term streamflow sufficiently well with the ten parameters added for the characterization of streamflow.
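
    As a rough illustration of the ingredients named above, the standard SCS curve number runoff equation can be combined with a rainfall-dependent CN of Hawkins' asymptotic form, CN(P) = CN_inf + (100 - CN_inf)·exp(-kP). The parameter values below are hypothetical and are not the regressions fitted in the study:

```python
import math

def asymptotic_cn(p_mm, cn_inf, k):
    """Hawkins-style asymptotic curve number: CN tends to cn_inf as rainfall grows."""
    return cn_inf + (100.0 - cn_inf) * math.exp(-k * p_mm)

def scs_runoff(p_mm, cn):
    """SCS-CN direct runoff (mm) with the usual initial abstraction Ia = 0.2*S."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: 80 mm storm on a land-cover/soil combination with
# hypothetical fitted parameters cn_inf = 70, k = 0.05 per mm
cn = asymptotic_cn(80.0, cn_inf=70.0, k=0.05)
q = scs_runoff(80.0, cn)
```

    Small storms below the initial abstraction produce zero direct runoff, while the asymptotic CN decays from 100 toward its fitted limit as event rainfall increases.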

  14. Scaling of ion implanted Si:P single electron devices

    International Nuclear Information System (INIS)

    Escott, C C; Hudson, F E; Chan, V C; Petersson, K D; Clark, R G; Dzurak, A S

    2007-01-01

    We present a modelling study on the scaling prospects for phosphorus in silicon (Si:P) single electron devices using readily available commercial and free-to-use software. The devices comprise phosphorus ion implanted, metallically doped (n+) dots (size range 50-500 nm) with source and drain reservoirs. Modelling results are compared to measurements on fabricated devices and discussed in the context of scaling down to few-electron structures. Given current fabrication constraints, we find that devices with 70-75 donors per dot should be realizable. We comment on methods for further reducing this number.

  15. Scaling of ion implanted Si:P single electron devices

    Energy Technology Data Exchange (ETDEWEB)

    Escott, C C [Centre for Quantum Computer Technology, School of Electrical Engineering and Telecommunications, UNSW, Sydney, NSW 2052 (Australia); Hudson, F E [Centre for Quantum Computer Technology, School of Electrical Engineering and Telecommunications, UNSW, Sydney, NSW 2052 (Australia); Chan, V C [Centre for Quantum Computer Technology, School of Electrical Engineering and Telecommunications, UNSW, Sydney, NSW 2052 (Australia); Petersson, K D [Centre for Quantum Computer Technology, School of Electrical Engineering and Telecommunications, UNSW, Sydney, NSW 2052 (Australia); Clark, R G [Centre for Quantum Computer Technology, School of Physics, UNSW, Sydney, 2052 (Australia); Dzurak, A S [Centre for Quantum Computer Technology, School of Electrical Engineering and Telecommunications, UNSW, Sydney, NSW 2052 (Australia)

    2007-06-13

    We present a modelling study on the scaling prospects for phosphorus in silicon (Si:P) single electron devices using readily available commercial and free-to-use software. The devices comprise phosphorus ion implanted, metallically doped (n+) dots (size range 50-500 nm) with source and drain reservoirs. Modelling results are compared to measurements on fabricated devices and discussed in the context of scaling down to few-electron structures. Given current fabrication constraints, we find that devices with 70-75 donors per dot should be realizable. We comment on methods for further reducing this number.

  16. Down Syndrome = Sindrome de Down.

    Science.gov (United States)

    Pueschel, S. M.; Glasgow, R. E.

    Presented both in English and Spanish, the brochure is primarily concerned with biological and developmental characteristics of the person with Down's syndrome. An emphasis is on the valuable humanizing influence these individuals have on society. Brief sections in the document discuss the delayed developmental aspects of Down's syndrome; the…

  17. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL

    2017-05-01

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next-generation supercomputing systems, because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when

  18. The Behavioral and Psychological Symptoms of Dementia in Down Syndrome (BPSD-DS) Scale : Comprehensive Assessment of Psychopathology in Down Syndrome

    NARCIS (Netherlands)

    Dekker, Alain D; Sacco, Silvia; Carfi, Angelo; Benejam, Bessy; Vermeiren, Yannick; Beugelsdijk, Gonny; Schippers, Mieke; Hassefras, Lyanne; Eleveld, José; Grefelman, Sharina; Fopma, Roelie; Bomer-Veenboer, Monique; Boti, Mariángeles; Oosterling, G Danielle E; Scholten, Esther; Tollenaere, Marleen; Checkley, Laura; Strydom, André; Van Goethem, Gert; Onder, Graziano; Blesa, Rafael; Zu Eulenburg, Christine; Coppus, Antonia M W; Rebillat, Anne-Sophie; Fortea, Juan; De Deyn, Peter P

    2018-01-01

    People with Down syndrome (DS) are prone to develop Alzheimer's disease (AD). Behavioral and psychological symptoms of dementia (BPSD) are core features, but have not been comprehensively evaluated in DS. In a European multidisciplinary study, the novel Behavioral and Psychological Symptoms of

  19. On the random cascading model study of anomalous scaling in multiparticle production with continuously diminishing scale

    International Nuclear Information System (INIS)

    Liu Lianshou; Zhang Yang; Wu Yuanfang

    1996-01-01

    The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently used have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which gives good scaling properties also for a continuously varying scale. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)
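
    For reference, the scaled factorial moments whose anomalous scaling is studied here are conventionally defined (in the horizontally averaged form; the precise normalization used by the authors may differ) as

$$F_q(M) \;=\; \frac{1}{M}\sum_{m=1}^{M}\frac{\langle n_m(n_m-1)\cdots(n_m-q+1)\rangle}{\langle n_m\rangle^{q}},
\qquad
F_q(M)\;\propto\;M^{\varphi_q}\quad (M\to\infty),$$

    where the phase-space interval \(\Delta\) is divided into \(M\) cells of size \(\delta = \Delta/M\) and \(n_m\) is the multiplicity in cell \(m\); anomalous scaling means non-zero intermittency exponents \(\varphi_q\) as the cell size diminishes.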

  20. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    Science.gov (United States)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantages of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.
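
    The plane-fitting step in the second phase can be sketched with an ordinary least-squares fit of a plane to a point group; this is a generic implementation under assumed inputs, not the authors' algorithm, and real pipelines would typically prefer a robust variant such as RANSAC:

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3D points via normal equations."""
    # Accumulate the entries of the 3x3 system A^T A p = A^T z
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1.0
        sxz += x * z; syz += y * z; sz += z
    m = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    # Gaussian elimination with partial pivoting on the augmented matrix
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            for c in range(i, 4):
                m[r][c] -= f * m[i][c]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        p[i] = (m[i][3] - sum(m[i][c] * p[c] for c in range(i + 1, 3))) / m[i][i]
    return tuple(p)  # (a, b, c)

# Points lying exactly on z = 2x - y + 3 should recover (2, -1, 3)
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)]
a, b, c = fit_plane(pts)
```

    The explicit z = ax + by + c form fails for near-vertical planes; a production implementation would fit the general form via an eigen-decomposition of the point covariance instead.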

  1. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    DEFF Research Database (Denmark)

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.

    2016-01-01

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the "generalist" (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions … of these sectors for the general stress response sigma factor sigma(S). Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally …
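
    Coarse-grained allocation constraints of the kind described, an upper bound on the protein mass each proteome sector can spend, can be illustrated with a toy calculation. The sector names, budgets and costs below are invented for illustration and are not values from the study:

```python
def sector_limited_growth(budgets, costs):
    """Growth-rate ceiling implied by proteome-sector allocation constraints.

    Each sector i imposes costs[i] * mu <= budgets[i]: the protein mass the
    sector must express per unit growth rate cannot exceed the mass budget
    allocated to it, so mu <= budgets[i] / costs[i] for every sector.
    """
    return min(b / c for b, c in zip(budgets, costs))

# Hypothetical sectors: metabolism, translation, stress response
budgets = [0.30, 0.25, 0.10]   # g protein / gDW allocated to each sector
costs   = [0.50, 0.60, 0.40]   # g protein / gDW required per unit growth rate
mu_max = sector_limited_growth(budgets, costs)  # ceiling set by the tightest sector
```

    In a full ME model these bounds enter a genome-scale optimization rather than a simple minimum, but the binding-sector logic is the same.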

  2. Revisiting non-degenerate parametric down-conversion

    Indian Academy of Sciences (India)

    The non-degenerate parametric down-conversion process is studied by recasting the time evolution equations for the basic operators in an equivalent … We consider a model of the non-degenerate parametric down-conversion process composed of two coupled … the factors e^{-iω_a t} and e^{iω_b t} have been left out in writing down the final results in ref. [4], even though these …

  3. Analysis of Mental Processes Represented in Models of Artificial Consciousness

    Directory of Open Access Journals (Sweden)

    Luana Folchini da Costa

    2013-12-01

    The concept of Artificial Consciousness has been used in the engineering field as an evolution of Artificial Intelligence. However, consciousness is a complex subject and is often used without formalism. As its main contribution, this work proposes an analysis of four recent models of artificial consciousness published in the engineering literature. The mental processes represented by these models are highlighted and correlated with the theoretical perspective of cognitive psychology. Finally, considerations about consciousness in such models are discussed.

  4. Multi-scale modeling for sustainable chemical production

    DEFF Research Database (Denmark)

    Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus

    2013-01-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process …

  5. Evaluating 20th Century precipitation characteristics between multi-scale atmospheric models with different land-atmosphere coupling

    Science.gov (United States)

    Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.

    2016-12-01

    Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models (GCMs) while remaining computationally viable for climate-length time scales. The multi-scale modeling framework (MMF) represents a shift away from the large horizontal grid spacing of traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model where precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are significant in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM; the multi-scale `Super-Parameterized' CESM, where large-scale parameterizations have been replaced with a 2D cloud-permitting model; and a multi-instance land version of the SP-CESM, where each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed Sea Surface Temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on the detection and evaluation of extreme precipitation events.

  6. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments

    Science.gov (United States)

    Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.

    2015-12-01

    The lack of understanding of the Earth's geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model this uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represents this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represents the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential to capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply a multiple-point statistics (MPS) algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce fairly accurately the natural variability of the experimental depositional system. Furthermore, the selected training images provide
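    Selecting mutually dissimilar training images, as described above, can be sketched with a greedy max-min rule: repeatedly add the snapshot farthest from those already selected. The Euclidean distance below is a placeholder for the study's pattern-based dissimilarity measure, and the selection rule is an illustrative assumption, not the paper's exact procedure.

```python
import math

def distance(img_a, img_b):
    # Stand-in dissimilarity: Euclidean distance between flattened images.
    # The study uses a pattern-based dissimilarity; this is a placeholder.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(img_a, img_b)))

def select_training_images(images, k):
    """Greedy max-min selection: start from the first snapshot, then
    repeatedly add the image farthest from the current selection."""
    selected = [0]
    while len(selected) < k:
        best, best_d = None, -1.0
        for i in range(len(images)):
            if i in selected:
                continue
            d = min(distance(images[i], images[j]) for j in selected)
            if d > best_d:
                best, best_d = i, d
        selected.append(best)
    return selected

# Five toy "snapshots" as 2-component feature vectors.
snapshots = [[0, 0], [0, 1], [5, 5], [6, 5], [0, 9]]
print(select_training_images(snapshots, 3))
```

    The two clustered snapshots ([5, 5] and [6, 5]) are nearly redundant, so the selection keeps only one of them, mirroring the goal of a minimal prior set.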

  7. Site scale groundwater flow in Olkiluoto

    International Nuclear Information System (INIS)

    Loefman, J.

    1999-03-01

    Groundwater flow modelling on the site scale has been an essential part of site investigation work carried out at different locations since 1986. The objective of the modelling has been to provide results that characterise the groundwater flow conditions deep in the bedrock. The main result quantities can be used for evaluation of the investigation sites and of the preconditions for safe final disposal of spent nuclear fuel. This study represents the latest modelling effort at Olkiluoto (Finland), and it comprises the transient flow analysis taking into account the effects of density variations and the repository as well as the post-glacial land uplift. The analysis is performed by means of numerical finite element simulation of coupled and transient groundwater flow and solute transport carried out up to 10000 years into the future. This work also provides the results for the site-specific data needs of the block scale groundwater flow modelling at Olkiluoto. Conceptually the fractured bedrock is divided into hydraulic units: the planar fracture zones and the remaining part of the bedrock. The equivalent-continuum (EC) model is applied so that each hydraulic unit is treated as a homogeneous and isotropic continuum with representative average characteristics. All the fracture zones are modelled explicitly and represented by two-dimensional finite elements. A site-specific simulation model for groundwater flow and solute transport is developed on the basis of the latest hydrogeological and hydrogeochemical field investigations at Olkiluoto. The present groundwater table and topography together with a mathematical model describing the land uplift at the Olkiluoto area are employed as a boundary condition at the surface of the model. The overall flow pattern is mostly controlled by the local variations in the topography. Below the island of Olkiluoto the flow direction is mostly downwards, while near the shoreline and below the sea water flows horizontally and

  8. Site scale groundwater flow in Haestholmen

    Energy Technology Data Exchange (ETDEWEB)

    Loefman, J. [VTT Energy, Espoo (Finland)

    1999-05-01

    Groundwater flow modelling on the site scale has been an essential part of site investigation work carried out at different locations since 1986. The objective of the modelling has been to provide results that characterise the groundwater flow conditions deep in the bedrock. The main result quantities can be used for evaluation of the investigation sites and of the preconditions for safe final disposal of spent nuclear fuel. This study represents the groundwater flow modelling at Haestholmen, and it comprises the transient flow analysis taking into account the effects of density variations and the repository as well as the post-glacial land uplift. The analysis is performed by means of numerical finite element simulation of coupled and transient groundwater flow and solute transport carried out up to 10000 years into the future. This work also provides the results for the site-specific data needs of the block scale groundwater flow modelling at Haestholmen. Conceptually the fractured bedrock is divided into hydraulic units: the planar fracture zones and the remaining part of the bedrock. The equivalent-continuum (EC) model is applied so that each hydraulic unit is treated as a homogeneous and isotropic continuum with representative average characteristics. All the fracture zones are modelled explicitly and represented by two-dimensional finite elements. A site-specific simulation model for groundwater flow and solute transport is developed on the basis of the latest hydrogeological and hydrogeochemical field investigations at Haestholmen. The present topography together with a mathematical model describing the land uplift at the Haestholmen area are employed as a boundary condition at the surface of the model. The overall flow pattern is mostly controlled by the local variations in the topography and by the highly transmissive fracture zones. Near the surface the flow spreads out to offshore and to the lower areas of topography in all directions away from

  10. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

    This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency and phase). We can then transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation of the seismic signal, while the other two parameters are obtained by the higher-order statistics (HOS) fourth-order cumulant matching method. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulate the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent the HOS as a polynomial function of second-order statistics, improving the anti-noise performance and accuracy. In addition, the proposed method works well for short time series.
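    The constant-phase rotation step can be sketched as follows. A signal and its quadrature (Hilbert transform) are combined as w_phi = cos(phi)·w − sin(phi)·H[w], and candidate rotations are scanned against a selection criterion. The kurtosis-like criterion below is a common choice in phase estimation but is an assumption here; the paper's own criterion may differ.

```python
import math

def rotate(real_part, quad_part, phi):
    # Constant-phase rotation: w_phi = cos(phi)*w - sin(phi)*H[w],
    # where H[w] is the Hilbert transform (quadrature) of w.
    return [math.cos(phi) * r - math.sin(phi) * q
            for r, q in zip(real_part, quad_part)]

def estimate_phase(real_part, quad_part, steps=360):
    """Scan candidate rotations and keep the one whose rotated signal
    has the largest kurtosis-like sharpness (an illustrative HOS
    criterion, not necessarily the paper's exact one)."""
    def kurt(x):
        m2 = sum(v * v for v in x) / len(x)
        m4 = sum(v ** 4 for v in x) / len(x)
        return m4 / (m2 * m2)
    best_phi, best_k = 0.0, float("-inf")
    for i in range(steps):
        phi = -math.pi + 2 * math.pi * i / steps
        k = kurt(rotate(real_part, quad_part, phi))
        if k > best_k:
            best_phi, best_k = phi, k
    return best_phi
```

    A zero rotation returns the input unchanged, and the scan simply returns the angle in [-pi, pi) that maximizes the chosen statistic.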

  11. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frew, Bethany A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cole, Wesley J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Richards, James [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-01

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events, or which require assumptions about the load and resource distributions that may not match the actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high-load and net-load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailment enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailment by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of challenges
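    An hour-by-hour characterization of this kind can be sketched as below. The top-N net-load-hours convention for capacity value and all names are illustrative assumptions, not ReEDS's actual formulation.

```python
def capacity_value(load, vg_gen, top_n=100):
    """Capacity credit of variable generation (VG), approximated as its
    average output during the top-N net-load hours of the year."""
    net_load = [l - g for l, g in zip(load, vg_gen)]
    top_hours = sorted(range(len(load)), key=lambda h: net_load[h],
                       reverse=True)[:top_n]
    return sum(vg_gen[h] for h in top_hours) / top_n

def curtailment(load, vg_gen, must_run=0.0):
    """Energy curtailed when VG plus must-run generation exceeds load,
    summed over all hours of the series."""
    return sum(max(0.0, g + must_run - l) for l, g in zip(load, vg_gen))
```

    Run over all 8760 hourly values of a year, these two quantities capture the chronological interactions (tail events included) that representative-hour subsets can miss.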

  12. Scaling laws for HTGR core block seismic response

    International Nuclear Information System (INIS)

    Dove, R.C.

    1977-01-01

    This paper discusses the development of scaling laws, physical modeling, and seismic testing of a model designed to represent a High Temperature Gas-Cooled Reactor (HTGR) core consisting of graphite blocks. The establishment of the proper scale relationships for length, time, force, and other parameters is emphasized. Tests to select model materials and the appropriate scales are described. Preliminary results obtained from both model and prototype systems tested under simulated seismic vibration are presented
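    The scale relationships for length, time, force and the other parameters mentioned above follow from dimensional analysis. As an illustration, the sketch below gives the standard same-material replica similitude (equal stress and material properties in model and prototype), which is an assumption for illustration, not necessarily the specific HTGR scaling adopted in the paper.

```python
def replica_scale_factors(length_ratio):
    """Similitude factors (model/prototype) for a same-material replica
    model with equal stress. Since the wave speed c = sqrt(E/rho) is
    unchanged, time scales with length; the rest follows."""
    lam = length_ratio
    return {
        "length": lam,
        "time": lam,            # t ~ L / c, c unchanged
        "frequency": 1 / lam,
        "velocity": 1.0,        # v ~ L / t
        "acceleration": 1 / lam,
        "stress": 1.0,
        "force": lam ** 2,      # F ~ stress * L^2
        "mass": lam ** 3,
    }
```

    Under these assumptions, a quarter-scale model must be excited at four times the prototype frequency and acceleration, which is exactly the kind of constraint that drives the material and scale selection tests described in the abstract.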

  13. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  14. Representative Model of the Learning Process in Virtual Spaces Supported by ICT

    Science.gov (United States)

    Capacho, José

    2014-01-01

    This paper shows the results of research activities for building the representative model of the learning process in virtual spaces (e-Learning). The formal basis of the model is grounded in the analysis of models of learning assessment in virtual spaces, and specifically in Dembo's teaching-learning model, the systemic approach to evaluating…

  15. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation against NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  16. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  17. Simulation of left atrial function using a multi-scale model of the cardiovascular system.

    Directory of Open Access Journals (Sweden)

    Antoine Pironet

    During a full cardiac cycle, the left atrium successively behaves as a reservoir, a conduit and a pump. This complex behavior makes it unrealistic to apply the time-varying elastance theory to characterize the left atrium, first because this theory has known limitations, and second because it is still uncertain whether the load-independence hypothesis holds. In this study, we aim to bypass this uncertainty by relying on another kind of mathematical model of the cardiac chambers. In the present work, we describe both the left atrium and the left ventricle with a multi-scale model. The multi-scale property of this model comes from the fact that the pressure inside a cardiac chamber is derived from a model of sarcomere behavior. Macroscopic model parameters are identified from reference dog hemodynamic data. The multi-scale model of the cardiovascular system including the left atrium is then simulated to show that the physiological roles of the left atrium are correctly reproduced. These include a biphasic pressure wave and a figure-of-eight-shaped pressure-volume loop. We also test the validity of our model in non-basal conditions by reproducing a preload-reduction experiment by inferior vena cava occlusion with the model. We compute the variation of eight indices before and after this experiment and obtain the same variation as experimentally observed for seven of the eight indices. In summary, the multi-scale mathematical model presented in this work is able to correctly account for the three roles of the left atrium and also exhibits a realistic left atrial pressure-volume loop. Furthermore, the model has been previously presented and validated for the left ventricle. This makes it a proper alternative to the time-varying elastance theory if the focus is set on precisely representing the left atrial and left ventricular behaviors.

  18. Extended consolidation of scaling laws of potentials covering over the representative tandem-mirror operations in GAMMA 10

    International Nuclear Information System (INIS)

    Cho, T.; Higaki, H.; Hirata, M.

    2003-01-01

    Scaling laws of potential formation and associated effects are constructed in the GAMMA 10 tandem mirror. A novel proposal for the extended consolidation and generalization of the two major theories of (i) Cohen's strong electron cyclotron heating (ECH) theory for the formation physics of plasma-confining potentials, and (ii) the generalized Pastukhov theory for the effectiveness of the produced potentials on plasma confinement is made through the use of the energy-balance equation. This proposal is then followed by verification against experimental data in two representative operational modes, characterized in terms of (i) a high-potential mode having kV-order plasma-confining potentials, and (ii) a hot-ion mode yielding fusion neutrons with 10-20 keV bulk-ion temperatures. The importance of the validity of the proposed consolidated physics-based scaling is highlighted by the possibility of the extended capability inherent in Pastukhov's prediction of requiring an ion-confining potential (φc) of 30 kV for a fusion Q value of unity on the basis of an application of Cohen's potential formation method. In addition to the above potential physics scaling, an externally controllable parameter scaling including both plug and barrier ECH powers for potential formation is investigated. The combination of (i) the physics scaling of the above-proposed consolidation over potential formation and effects with (ii) the externally controllable practical ECH power scaling provides a scalable way for future tandem-mirror research. Under the assumption of the validity of the extension of the present theoretically well-interpreted scaling, the formation of Pastukhov's predicted φc for confining Q=1 plasmas is scaled to require total plug and barrier ECH powers of 3 MW. (author)

  19. Investigation of the falling water flow with evaporation for the passive containment cooling system and its scaling-down criteria

    Science.gov (United States)

    Li, Cheng; Li, Junming; Li, Le

    2018-02-01

    Falling water evaporation cooling can efficiently suppress the containment operating pressure during a nuclear accident by continually removing the core decay heat to the atmospheric environment. In order to identify the process of large-scale falling-water evaporation cooling, the water flow characteristics of the falling film, film rupture and falling rivulets were deduced on the basis of previous correlation studies. The influences of the contact angle, water temperature and water flow rate on water coverage along the flow direction were then numerically obtained, and the results were compared with the data for the AP1000 and CAP1400 nuclear power plants. From these comparisons, it is concluded that the water coverage fraction of falling water can be enhanced either by reducing the surface contact angle or by increasing the water temperature. The falling water flow with evaporation for the AP1000 containment was then calculated and the behaviour of its water coverage fraction was analyzed. Finally, based on the phenomena identification of falling water flow for AP1000 containment evaporation cooling, the scaling-down was performed and the dimensionless criteria were obtained.
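    Scaling-down of falling-film flows is driven by dimensionless groups. The sketch below shows two standard quantities for such analyses, the film Reynolds number and the coverage fraction; it is a generic illustration, not the paper's actual criteria set.

```python
def film_reynolds(gamma, mu):
    """Film Reynolds number Re = 4*Gamma/mu for a falling liquid film,
    with Gamma the mass flow rate per unit wetted width (kg/m/s) and
    mu the dynamic viscosity (Pa*s). A standard group in falling-film
    similarity analysis."""
    return 4.0 * gamma / mu

def coverage_fraction(wetted_width, total_width):
    """Fraction of the containment surface covered by falling water."""
    return wetted_width / total_width
```

    Matching groups such as these between the full-scale containment and a scaled-down test section is what makes the small experiment representative of the prototype flow regime.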

  20. Magnetic hysteresis at the domain scale of a multi-scale material model for magneto-elastic behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Vanoost, D., E-mail: dries.vanoost@kuleuven-kulak.be [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); Steentjes, S. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany); Peuteman, J. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Electrical Energy and Computer Architecture, Heverlee B-3001 (Belgium); Gielen, G. [KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); De Gersem, H. [KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); TU Darmstadt, Institut für Theorie Elektromagnetischer Felder, Darmstadt D-64289 (Germany); Pissoort, D. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); Hameyer, K. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany)

    2016-09-15

    This paper proposes a multi-scale energy-based material model for poly-crystalline materials. Describing the behaviour of poly-crystalline materials at the three spatial scales of the dominating physical mechanisms allows accounting for the heterogeneity and multi-axiality of the material behaviour. The three spatial scales are the poly-crystalline, grain and domain scales. Together with appropriate scale-transition rules and models for the local magnetic behaviour at each scale, the model is able to describe the magneto-elastic behaviour (magnetostriction and hysteresis) at the macroscale, although the data input is merely a set of physical constants. By introducing a new energy density function that describes the demagnetisation field, the anhysteretic multi-scale energy-based material model is extended to the hysteretic case. The hysteresis behaviour is included at the domain scale according to micro-magnetic domain theory while preserving a valid description of the magneto-elastic coupling. The model is verified using existing measurement data for different mechanical stress levels. - Highlights: • A ferromagnetic hysteretic energy-based multi-scale material model is proposed. • The hysteresis is obtained by a newly proposed hysteresis energy density function. • The model avoids tedious parameter identification.
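    The domain-scale part of an energy-based model amounts to minimizing a magnetic energy density over the domain magnetization orientation. The sketch below uses a schematic Zeeman + uniaxial anisotropy + magneto-elastic energy; both the functional form and the constants are illustrative assumptions, not the paper's exact formulation.

```python
import math

def domain_energy(theta, h_ext, k_aniso, sigma, lam_s):
    """Illustrative energy density of a single domain whose magnetization
    makes angle theta with the applied field: Zeeman term, uniaxial
    anisotropy, and the classical magneto-elastic term
    -(3/2)*lambda_s*sigma*cos^2(theta). Schematic only."""
    zeeman = -h_ext * math.cos(theta)
    aniso = k_aniso * math.sin(theta) ** 2
    magnetoelastic = -1.5 * lam_s * sigma * math.cos(theta) ** 2
    return zeeman + aniso + magnetoelastic

def equilibrium_angle(h_ext, k_aniso, sigma, lam_s, n=2000):
    """Brute-force minimization of the energy density over [0, pi]."""
    best_t, best_e = 0.0, float("inf")
    for i in range(n + 1):
        t = math.pi * i / n
        e = domain_energy(t, h_ext, k_aniso, sigma, lam_s)
        if e < best_e:
            best_t, best_e = t, e
    return best_t
```

    With a pure applied field and no anisotropy or stress, the magnetization aligns with the field (theta = 0); adding the stress term shifts the equilibrium, which is the mechanism behind the magneto-elastic coupling described above.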

  1. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    Science.gov (United States)

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortices triggered by urban buildings well, and the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, the simulation deviations from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields of a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.

  2. Experimental studies of heavy-ion slowing down in matter

    International Nuclear Information System (INIS)

    Geissel, H.; Weick, H.; Scheidenberger, C.; Bimbot, R.; Gardes, D.

    2002-08-01

    Measurements of heavy-ion slowing down in matter differ in many aspects from experiments with light particles like protons and α-particles. An overview of the special experimental requirements, methods, data analysis and interpretation is presented for heavy-ion stopping powers, energy- and angular-straggling and ranges in the energy domain from keV/u up to GeV/u. Characteristic experimental results are presented and compared with theory and semiempirical predictions. New applications are outlined, which represent a challenge to continuously improve the knowledge of heavy-ion slowing down. (orig.)

  3. Nonpointlike-parton model with asymptotic scaling and with scaling violation at moderate Q2 values

    International Nuclear Information System (INIS)

    Chen, C.K.

    1981-01-01

    A nonpointlike-parton model is formulated on the basis of the assumption of energy-independent total cross sections of partons and the current-algebra sum rules. No specific strong-interaction Lagrangian density is introduced in this approach. This model predicts asymptotic scaling for the inelastic structure functions of nucleons on the one hand and scaling violation at moderate Q2 values on the other. The predicted scaling-violation patterns at moderate Q2 values are consistent with the observed patterns. A numerical fit of the F2 functions is performed in order to demonstrate that the predicted scaling-violation patterns of this model at moderate Q2 values fit the data, and to see how the predicted asymptotic scaling behavior sets in at various x values. Explicit analytic forms of the F2 functions are obtained from this fit and are compared in detail with the analytic forms of the F2 functions obtained from the numerical fit of the quantum-chromodynamics (QCD) parton model. This comparison shows that the nonpointlike-parton model fits the data better than the QCD parton model, especially at large and small x values. Nachtmann moments are computed from the F2 functions of this model and are shown to agree well with the data. It is also shown that a two-dimensional plot of the logarithm of one nonsinglet moment versus the logarithm of another is not a good way to distinguish this nonpointlike-parton model from the QCD parton model.

  4. Representing climate, disturbance, and vegetation interactions in landscape models

    Science.gov (United States)

    Robert E. Keane; Donald McKenzie; Donald A. Falk; Erica A.H. Smithwick; Carol Miller; Lara-Karena B. Kellogg

    2015-01-01

    The prospect of rapidly changing climates over the next century calls for methods to predict their effects on myriad, interactive ecosystem processes. Spatially explicit models that simulate ecosystem dynamics at fine (plant, stand) to coarse (regional, global) scales are indispensable tools for meeting this challenge under a variety of possible futures. A special...

  5. Pulsar wind model for the spin-down behavior of intermittent pulsars

    Energy Technology Data Exchange (ETDEWEB)

    Li, L.; Tong, H.; Yan, W. M.; Yuan, J. P.; Wang, N. [Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi, Xinjiang 830011 (China); Xu, R. X., E-mail: tonghao@xao.ac.cn [School of Physics, Peking University, Beijing (China)

    2014-06-10

    Intermittent pulsars are part-time radio pulsars. They have higher spin-down rates in the on (radio-loud) state than in the off (radio-quiet) state. This gives evidence that a particle wind may play an important role in pulsar spin-down. The effect of particle acceleration is included in modeling the rotational energy loss rate of the neutron star. Applying the pulsar wind model to the three intermittent pulsars (PSR B1931+24, PSR J1841–0500, and PSR J1832+0029) allows their magnetic fields and inclination angles to be calculated simultaneously. The theoretical braking indices of the intermittent pulsars are also given. In the pulsar wind model, the density of the particle wind can always be the Goldreich-Julian density. This may ensure that the different on states of intermittent pulsars are stable. The duty cycle of the particle wind can be determined from timing observations. It is consistent with the duty cycle of the on state. Inclination angle and braking index observations of intermittent pulsars may help to test different models of particle acceleration. At present, the inverse-Compton-scattering-induced space-charge-limited flow with field saturation model can be ruled out.
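    The duty-cycle bookkeeping behind this kind of analysis can be sketched in two lines: the long-term spin-down rate is the on/off rates weighted by the fraction of time spent in each state, and inverting that average recovers the duty cycle. This is the generic timing relation, not the paper's full wind model.

```python
def average_spindown(nudot_on, nudot_off, duty_cycle_on):
    """Long-term average spin-down rate of an intermittent pulsar,
    weighting the on- and off-state rates by the fraction of time
    spent in the on (radio-loud) state."""
    return duty_cycle_on * nudot_on + (1 - duty_cycle_on) * nudot_off

def infer_duty_cycle(nudot_avg, nudot_on, nudot_off):
    """Invert the average to recover the on-state duty cycle, the
    quantity compared against the observed on fraction."""
    return (nudot_avg - nudot_off) / (nudot_on - nudot_off)
```

    Consistency between the duty cycle inferred from timing and the directly observed on fraction is the check referred to in the abstract.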

  6. Generation of reservoir models on flexible meshes; Generation de modeles de reservoir sur maillage flexible

    Energy Technology Data Exchange (ETDEWEB)

    Ricard, L.

    2005-12-15

    The high-level geostatistical descriptions of the subsurface are often far too detailed for use in routine flow simulators. To make flow simulations tractable, the number of grid blocks has to be reduced: an approximation, still consistent with the flow description, is necessary. In this work, we place the emphasis on the scaling procedure from the fine-scale model to the multi-scale reservoir model. Two main problems appear: near wells, faults and channels, the volume of the flexible cells may be smaller than that of the fine ones, so we need to solve a down-scaling problem; far from these regions, the volume of the cells is bigger than that of the fine ones, so we need to solve an up-scaling problem. In this work, research has been done in each of three areas: down-scaling, up-scaling and fluid flow simulation. For each of these subjects, a review, some new improvements and a comparative study are proposed. The proposed down-scaling method is built to be compatible with existing data-integration methods. The comparative study shows that empirical methods are not accurate enough to solve the problem. Concerning the up-scaling step, the proposed approach is based on an existing method: the perturbed boundary conditions. An extension to unstructured meshes is developed for the inter-cell permeability tensor. The comparative study shows that numerical methods are not always as accurate as expected and that the empirical model can be sufficient in a lot of cases. A new approach to single-phase fluid flow simulation is developed. This approach can handle full tensorial permeability fields with source or sink terms. (author)
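    The simplest empirical up-scaling referred to above brackets the coarse-block permeability between two classical averages of the fine-cell values. The sketch below shows that bracketing; it is far cruder than the perturbed-boundary-condition method the thesis develops, and the function name is illustrative.

```python
def upscale_permeability(fine_k):
    """Classical bounds for the up-scaled (effective) permeability of a
    coarse block of fine cells: the harmonic mean (flow in series,
    lower bound) and the arithmetic mean (flow in parallel, upper
    bound). The true effective value lies between the two."""
    n = len(fine_k)
    arithmetic = sum(fine_k) / n
    harmonic = n / sum(1.0 / k for k in fine_k)
    return harmonic, arithmetic
```

    A wide gap between the two bounds signals strong heterogeneity inside the block, which is exactly when numerical up-scaling methods earn their cost.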

  7. Convex hull approach for determining rock representative elementary volume for multiple petrophysical parameters using pore-scale imaging and Lattice-Boltzmann modelling

    Science.gov (United States)

    Shah, S. M.; Crawshaw, J. P.; Gray, F.; Yang, J.; Boek, E. S.

    2017-06-01

    In the last decade, the study of fluid flow in porous media has developed considerably due to the combination of X-ray Micro Computed Tomography (micro-CT) and advances in computational methods for solving complex fluid flow equations directly or indirectly on reconstructed three-dimensional pore space images. In this study, we calculate porosity and single phase permeability using micro-CT imaging and Lattice Boltzmann (LB) simulations for 8 different porous media: beadpacks (with bead sizes 50 μm and 350 μm), sandpacks (LV60 and HST95), sandstones (Berea, Clashach and Doddington) and a carbonate (Ketton). Combining the observed porosity and calculated single phase permeability, we shed new light on the existence and size of the Representative Elementary Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging. Our study applies the concept of the 'Convex Hull' to calculate the REV by considering the two main macroscopic petrophysical parameters, porosity and single phase permeability, simultaneously. The shape of the hull can be used to identify strong correlation between the parameters or greatly differing convergence rates. To further enhance computational efficiency, we note that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size so that only a few small simulations are needed to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally, we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
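
The hull-area criterion described above can be sketched in a few lines: for sub-samples of increasing size, collect (porosity, log permeability) points from many sub-volumes and track the area of their 2-D convex hull; the REV is reached once the hull area becomes small. The data here are synthetic stand-ins, not values from the study's micro-CT images.

```python
# Hedged illustration of the convex-hull REV criterion with synthetic data:
# the scatter of (porosity, log10 permeability) across sub-volumes shrinks as
# the sub-sample size grows, so the hull area decays toward the REV.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def hull_area(porosity, log_perm):
    pts = np.column_stack([porosity, log_perm])
    return ConvexHull(pts).volume   # for 2-D input, .volume is the area

areas = []
for size in (25, 50, 100, 200):           # sub-sample edge length (voxels)
    spread = 1.0 / size                    # synthetic: scatter ~ 1/size
    phi = 0.20 + spread * rng.standard_normal(30)    # 30 sub-volumes
    logk = 2.0 + spread * rng.standard_normal(30)
    areas.append(hull_area(phi, logk))
# areas shrink with sub-sample size, mimicking the exponential decay reported
```

Fitting an exponential to a few such areas, as the abstract suggests, then predicts the sample size needed for any target accuracy.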

  8. Coupled climate model simulations of Mediterranean winter cyclones and large-scale flow patterns

    Directory of Open Access Journals (Sweden)

    B. Ziv

    2013-03-01

    Full Text Available The study aims to evaluate the ability of global, coupled climate models to reproduce the synoptic regime of the Mediterranean Basin. The output of simulations of the 9 models included in the IPCC CMIP3 effort is compared to the NCEP-NCAR reanalyzed data for the period 1961–1990. The study examined the spatial distribution of cyclone occurrence, the mean Mediterranean upper- and lower-level troughs, the inter-annual variation and trend in the occurrence of the Mediterranean cyclones, and the main large-scale circulation patterns, represented by rotated EOFs of 500 hPa and sea level pressure. The models successfully reproduce the two maxima in cyclone density in the Mediterranean and their locations, the location of the average upper- and lower-level troughs, the relative inter-annual variation in cyclone occurrences and the structure of the four leading large scale EOFs. The main discrepancy is the models' underestimation of the cyclone density in the Mediterranean, especially in its western part. The models' skill in reproducing the cyclone distribution is found to be correlated with their spatial resolution, especially in the vertical. The current improvement in model spatial resolution suggests that their ability to reproduce the Mediterranean cyclones would be improved as well.

  9. Comments on intermediate-scale models

    International Nuclear Information System (INIS)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-01-01

    Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory. (orig.)

  10. Trickle-down boundary conditions in aeolian dune-field pattern formation

    Science.gov (United States)

    Ewing, R. C.; Kocurek, G.

    2015-12-01

    On the one hand, wind-blown dune-field patterns emerge within the overarching boundary conditions of climate, tectonics and eustasy, implying the presence of these signals in the aeolian geomorphic and stratigraphic record. On the other hand, dune-field patterns are a poster child of self-organization, in which autogenic processes give rise to patterned landscapes despite remarkable differences in the geologic setting (i.e., Earth, Mars and Titan). How important are climate, tectonics and eustasy in aeolian dune-field pattern formation? Here we develop the hypothesis that, in terms of pattern development, dune fields evolve largely independently of the direct influence of 'system-scale' boundary conditions, such as climate, tectonics and eustasy. Rather, these boundary conditions set the stage for smaller-scale, faster-evolving 'event-scale' boundary conditions. This 'trickle-down' effect, in which system-scale boundary conditions indirectly influence the event-scale boundary conditions, provides the uniqueness and richness of dune-field patterned landscapes. The trickle-down effect means that the architecture of the stratigraphic record of dune-field pattern formation archives boundary conditions that are spatially and temporally removed from the overarching geologic setting. In contrast, the presence of an aeolian stratigraphic record itself reflects changes in system-scale boundary conditions that drive accumulation and preservation of aeolian strata.

  11. Towards representing human behavior and decision making in Earth system models - an overview of techniques and approaches

    Science.gov (United States)

    Müller-Hansen, Finn; Schlüter, Maja; Mäs, Michael; Donges, Jonathan F.; Kolb, Jakob J.; Thonicke, Kirsten; Heitzig, Jobst

    2017-11-01

    Today, humans have a critical impact on the Earth system and vice versa, which can generate complex feedback processes between social and ecological dynamics. Integrating human behavior into formal Earth system models (ESMs), however, requires crucial modeling assumptions about actors and their goals, behavioral options, and decision rules, as well as modeling decisions regarding human social interactions and the aggregation of individuals' behavior. Here, we review existing modeling approaches and techniques from various disciplines and schools of thought dealing with human behavior at different levels of decision making. We demonstrate modelers' often vast degrees of freedom but also seek to make modelers aware of the often crucial consequences of seemingly innocent modeling assumptions. After discussing which socioeconomic units are potentially important for ESMs, we compare models of individual decision making that correspond to alternative behavioral theories and that make diverse modeling assumptions about individuals' preferences, beliefs, decision rules, and foresight. We review approaches to model social interaction, covering game theoretic frameworks, models of social influence, and network models. Finally, we discuss approaches to studying how the behavior of individuals, groups, and organizations can aggregate to complex collective phenomena, discussing agent-based, statistical, and representative-agent modeling and economic macro-dynamics. We illustrate the main ingredients of modeling techniques with examples from land-use dynamics as one of the main drivers of environmental change bridging local to global scales.
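
One of the social-interaction model classes reviewed above can be made concrete with a minimal sketch: DeGroot-style social influence, in which each agent repeatedly replaces its opinion with a weighted average of its neighbours' opinions and the group converges to consensus. The weight matrix below is an arbitrary illustrative choice, not a model from the review.

```python
# Minimal sketch of a social-influence model (DeGroot averaging).  The
# row-stochastic influence matrix W and initial opinions are illustrative.
import numpy as np

W = np.array([[0.50, 0.50, 0.00],    # agent 0 listens to itself and agent 1
              [0.25, 0.50, 0.25],    # agent 1 listens to everyone
              [0.00, 0.50, 0.50]])   # agent 2 listens to itself and agent 1
opinions = np.array([0.0, 0.5, 1.0])

for _ in range(200):                 # repeated averaging
    opinions = W @ opinions

spread = opinions.max() - opinions.min()   # -> ~0: consensus reached
```

Because W is irreducible and aperiodic, the opinions converge to a single consensus value weighted by each agent's network influence, which is one simple way individual behavior aggregates to a collective outcome.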

  12. Biodiversity mediates top-down control in eelgrass ecosystems: a global comparative-experimental approach.

    Science.gov (United States)

    Duffy, J Emmett; Reynolds, Pamela L; Boström, Christoffer; Coyer, James A; Cusson, Mathieu; Donadi, Serena; Douglass, James G; Eklöf, Johan S; Engelen, Aschwin H; Eriksson, Britas Klemens; Fredriksen, Stein; Gamfeldt, Lars; Gustafsson, Camilla; Hoarau, Galice; Hori, Masakazu; Hovel, Kevin; Iken, Katrin; Lefcheck, Jonathan S; Moksnes, Per-Olav; Nakaoka, Masahiro; O'Connor, Mary I; Olsen, Jeanine L; Richardson, J Paul; Ruesink, Jennifer L; Sotka, Erik E; Thormar, Jonas; Whalen, Matthew A; Stachowicz, John J

    2015-07-01

    Nutrient pollution and reduced grazing each can stimulate algal blooms as shown by numerous experiments. But because experiments rarely incorporate natural variation in environmental factors and biodiversity, conditions determining the relative strength of bottom-up and top-down forcing remain unresolved. We factorially added nutrients and reduced grazing at 15 sites across the range of the marine foundation species eelgrass (Zostera marina) to quantify how top-down and bottom-up control interact with natural gradients in biodiversity and environmental forcing. Experiments confirmed modest top-down control of algae, whereas fertilisation had no general effect. Unexpectedly, grazer and algal biomass were better predicted by cross-site variation in grazer and eelgrass diversity than by global environmental gradients. Moreover, these large-scale patterns corresponded strikingly with prior small-scale experiments. Our results link global and local evidence that biodiversity and top-down control strongly influence functioning of threatened seagrass ecosystems, and suggest that biodiversity is comparably important to global change stressors. © 2015 John Wiley & Sons Ltd/CNRS.

  13. A top-down approach for the prediction of hardness and toughness of hierarchical materials

    International Nuclear Information System (INIS)

    Carpinteri, Alberto; Paggi, Marco

    2009-01-01

    Many natural and man-made materials exhibit structure over more than one length scale. In this paper, we deal with hierarchical grained composite materials that have recently been designed to achieve superior hardness and toughness as compared to their traditional counterparts. Their nested structure, where meso-grains are recursively composed of smaller and smaller micro-grains at the different scales with a fractal-like topology, is herein studied from a hierarchical perspective. Considering a top-down approach, i.e. from the largest to the smallest scale, we propose a recursive micromechanical model coupled with a generalized fractal mixture rule for the prediction of hardness and toughness of a grained material with n hierarchical levels. A relationship between hardness and toughness is also derived and the analytical predictions are compared with experimental data.

  14. Analysis of top-down and bottom-up North American CO2 and CH4 emissions estimates in the second State of the Carbon Cycle Report

    Science.gov (United States)

    Miller, J. B.; Jacobson, A. R.; Bruhwiler, L.; Michalak, A.; Hayes, D. J.; Vargas, R.

    2017-12-01

    In just ten years since publication of the original State of the Carbon Cycle Report in 2007, global CO2 concentrations have risen by more than 22 ppm to 405 ppm. This represents 18% of the increase over preindustrial levels of 280 ppm. This increase is being driven unequivocally by fossil fuel combustion, with North American emissions comprising roughly 20% of the global total over the past decade. At the global scale, we know by comparing well-known fossil fuel inventories and rates of atmospheric CO2 increase that about half of all emissions are absorbed at Earth's surface. For North America, however, we cannot apply a simple mass balance to determine sources and sinks. Instead, contributions from ecosystems must be estimated using top-down and bottom-up methods. SOCCR-2 estimates North American net CO2 uptake from ecosystems as 577 +/- 433 TgC/yr using bottom-up (inventory) methods and 634 +/- 288 TgC/yr from top-down atmospheric inversions. Although the global terrestrial carbon sink is not precisely known, these values represent possibly 30% of the global values. As with net sink estimates reported in SOCCR, these new top-down and bottom-up estimates are statistically consistent with one another. However, the uncertainties on each of these estimates are now substantially smaller, giving us more confidence about where the truth lies. Atmospheric inversions also yield estimates of interannual variations (IAV) in CO2 and CH4 fluxes. Our syntheses suggest that IAV of ecosystem CO2 fluxes is of order 100 TgC/yr, mainly originating in the conterminous US, with lower variability in boreal and arctic regions. Moreover, this variability is much larger than for inventory-based fluxes reported by the US to the UNFCCC. Unlike CO2, bottom-up CH4 emissions are larger than those derived from large-scale atmospheric data, with the continental discrepancy resulting primarily from differences in arctic and boreal regions. In addition to the current state of the science, we
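
The global mass balance invoked above is simple arithmetic: comparing fossil emissions with the observed atmospheric rise gives the fraction absorbed at Earth's surface. The decadal emissions figure below is an illustrative round number, not a value from the report; only the 22 ppm rise is taken from the abstract.

```python
# Back-of-envelope sketch of the global CO2 mass balance.  The conversion
# factor and decadal emissions total are illustrative round values.
PPM_TO_PGC = 2.124          # ~2.124 PgC of carbon per ppm of atmospheric CO2

emissions_pgc = 100.0       # fossil emissions over a decade (~10 PgC/yr, illustrative)
rise_ppm = 22.0             # observed atmospheric rise quoted in the abstract
rise_pgc = rise_ppm * PPM_TO_PGC

absorbed_fraction = 1.0 - rise_pgc / emissions_pgc
# roughly half of emissions absorbed at the surface, as the abstract notes
```

For a single continent no such closed budget exists, which is why the abstract turns to top-down inversions and bottom-up inventories instead.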

  15. What are the fluxes of greenhouse gases from the greater Los Angeles area as inferred from top-down remote sensing studies?

    Science.gov (United States)

    Hedelius, J.; Wennberg, P. O.; Wunch, D.; Roehl, C. M.; Podolske, J. R.; Hillyard, P.; Iraci, L. T.

    2017-12-01

    Greenhouse gas (GHG) emissions from California's South Coast Air Basin (SoCAB) have been studied extensively using a variety of tower, aircraft, remote sensing, emission inventory, and modeling studies. It is impractical to survey GHG fluxes from all urban areas and hot-spots to the extent the SoCAB has been studied, but it can serve as a test location for scaling methods globally. We use a combination of remote sensing measurements from ground (Total Carbon Column Observing Network, TCCON) and space-based (Orbiting Carbon Observatory-2, OCO-2) sensors in an inversion to obtain the carbon dioxide flux from the SoCAB. We also perform a variety of sensitivity tests to see how the inversion performs using different model parameterizations. Fluxes do not significantly depend on the mixed layer depth, but are sensitive to the model surface layers. Higher top-down than bottom-up fluxes highlight the need for additional work on both approaches: the discrepancy could arise from sampling bias or model bias, or may show that bottom-up values underestimate sources. Lessons learned here may help in scaling up inversions to hundreds of urban systems using space-based observations.

  16. A Multi-Scale Perspective of the Effects of Forest Fragmentation on Birds in Eastern Forests

    Science.gov (United States)

    Frank R. Thompson; Therese M. Donovan; Richard M. DeGraff; John Faaborg; Scott K. Robinson

    2002-01-01

    We propose a model that considers forest fragmentation within a spatial hierarchy that includes regional or biogeographic effects, landscape-level fragmentation effects, and local habitat effects. We hypothesize that effects operate "top down" in that larger scale effects provide constraints or context for smaller scale effects. Bird species' abundance...

  17. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    Science.gov (United States)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks-over-threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks-over-threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in the scaling exponent is quantified. A quantile-based modification of the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on the uncertainty in the scaled parameters and return levels of shorter durations.
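
The peaks-over-threshold step described above can be sketched as follows. This is a standard frequentist GPD fit via SciPy, a stand-in for the Bayesian estimation used in the study, and the precipitation series is synthetic.

```python
# Hedged sketch of a peaks-over-threshold GPD fit on synthetic daily totals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
precip = rng.gamma(shape=0.5, scale=10.0, size=5000)   # synthetic daily totals

# threshold at the 95th percentile of non-zero precipitation
threshold = np.quantile(precip[precip > 0], 0.95)
excesses = precip[precip > threshold] - threshold

# fix loc=0 so only the shape (xi) and scale are estimated
xi, loc, scale = stats.genpareto.fit(excesses, floc=0)

# return level exceeded by 1% of threshold exceedances
return_level = threshold + stats.genpareto.ppf(0.99, xi, loc=0, scale=scale)
```

In the disaggregation setting, one such fit per duration yields the duration-dependent thresholds and parameters that the paper's scaling relationship then links across time scales.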

  18. GaAs/Ge crystals grown on Si substrates patterned down to the micron scale

    International Nuclear Information System (INIS)

    Taboada, A. G.; Kreiliger, T.; Falub, C. V.; Känel, H. von; Meduňa, M.; Salvalaglio, M.; Miglio, L.; Isa, F.; Barthazy Meier, E.; Müller, E.; Isella, G.

    2016-01-01

    Monolithic integration of III-V compounds into high density Si integrated circuits is a key technological challenge for the next generation of optoelectronic devices. In this work, we report on the metal organic vapor phase epitaxy growth of strain-free GaAs crystals on Si substrates patterned down to the micron scale. The differences in thermal expansion coefficient and lattice parameter are adapted by a 2-μm-thick intermediate Ge layer grown by low-energy plasma enhanced chemical vapor deposition. The GaAs crystals evolve during growth towards a pyramidal shape, with lateral facets composed of (111) planes and an apex formed by (137) and (001) surfaces. The influence of the anisotropic GaAs growth kinetics on the final morphology is highlighted by means of scanning and transmission electron microscopy measurements. The effect of the Si pattern geometry, substrate orientation, and crystal aspect ratio on the GaAs structural properties was investigated by means of high resolution X-ray diffraction. The thermal strain relaxation process of GaAs crystals with different aspect ratio is discussed within the framework of linear elasticity theory by Finite Element Method simulations based on realistic geometries extracted from cross-sectional scanning electron microscopy images.

  19. Pushing down the low-mass halo concentration frontier with the Lomonosov cosmological simulations

    Science.gov (United States)

    Pilipenko, Sergey V.; Sánchez-Conde, Miguel A.; Prada, Francisco; Yepes, Gustavo

    2017-12-01

    We introduce the Lomonosov suite of high-resolution N-body cosmological simulations covering a full box of size 32 h^-1 Mpc with low-mass resolution particles (2 × 10^7 h^-1 M⊙) and three zoom-in simulations of overdense, underdense and mean density regions at much higher particle resolution (4 × 10^4 h^-1 M⊙). The main purpose of this simulation suite is to extend the concentration-mass relation of dark matter haloes down to masses below those typically available in large cosmological simulations. The three different density regions available at higher resolution provide a better understanding of the effect of the local environment on halo concentration, known to be potentially important for small simulation boxes and small halo masses. Yet, we find the correction to be small in comparison with the scatter of halo concentrations. We conclude that zoom simulations, despite their limited representativity of the volume of the Universe, can be effectively used for the measurement of halo concentrations at least at the halo masses probed by our simulations. In any case, after a precise characterization of this effect, we develop a robust technique to extrapolate the concentration values found in zoom simulations to larger volumes with greater accuracy. Altogether, Lomonosov provides a measure of the concentration-mass relation in the halo mass range 10^7-10^10 h^-1 M⊙ with superb halo statistics. This work represents a first important step to measure halo concentrations at intermediate, yet vastly unexplored halo mass scales, down to the smallest ones. All Lomonosov data and files are public for the community's use.

  20. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative-trend-based multiple-scale validation method is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by carrying out validation for a reactor model.
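
A minimal sketch of the positive-inference step, not the authors' tool: a qualitative trend (+1, -1, 0) is propagated from a fault origin through the signed edges of an SDG to produce one testing scenario. The toy graph and node names below are invented for illustration.

```python
# Hedged sketch: positive inference on a signed directed graph (SDG).
# A trend at the origin node propagates along edges, multiplied by edge signs.
def propagate(sdg, origin, trend):
    """sdg: {node: [(neighbor, sign), ...]} with sign = +1 or -1."""
    state = {origin: trend}
    frontier = [origin]
    while frontier:
        node = frontier.pop()
        for nbr, sign in sdg.get(node, []):
            if nbr not in state:            # first arrival fixes the trend
                state[nbr] = sign * state[node]
                frontier.append(nbr)
    return state

# toy reactor-like graph: more feed raises level, which raises outflow and
# (in this toy model) suppresses temperature
sdg = {"feed": [("level", +1)],
       "level": [("outflow", +1), ("temperature", -1)]}
scenario = propagate(sdg, "feed", +1)
# scenario: {"feed": 1, "level": 1, "outflow": 1, "temperature": -1}
```

Each such scenario is one qualitative trace against which the simulation model's outputs can be compared at the chosen scale.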

  1. Qualitative and quantitative examination of the performance of regional air quality models representing different modeling approaches

    International Nuclear Information System (INIS)

    Bhumralkar, C.M.; Ludwig, F.L.; Shannon, J.D.; McNaughton, D.

    1985-04-01

    The calculations of three different air quality models were compared with the best available observations. The comparisons were made without calibrating the models to improve agreement with the observations. Model performance was poor for short averaging times (less than 24 hours). Some of the poor performance can be traced to errors in the input meteorological fields, but errors exist at all levels. It should be noted that these models were not originally designed for treating short-term episodes. For short-term episodes, much of the variance in the data can arise from small spatial scale features that tend to be averaged out over longer periods. These small spatial scale features cannot be resolved with the coarse grids that are used for the meteorological and emissions inputs. Thus, it is not surprising that the models performed better for the longer averaging times. The models compared were RTM-II, ENAMAP-2 and ACID. (17 refs., 5 figs., 4 tabs.)

  2. Electrochromic Radiator Coupon Level Testing and Full Scale Thermal Math Modeling for Use on Altair Lunar Lander

    Science.gov (United States)

    Bannon, Erika T.; Bower, Chad E.; Sheth, Rubik; Stephan, Ryan

    2010-01-01

    In order to control system and component temperatures, many spacecraft thermal control systems use a radiator coupled with a pumped fluid loop to reject waste heat from the vehicle. Since heat loads and radiation environments can vary considerably according to mission phase, the thermal control system must be able to vary the heat rejection. The ability to "turn down" the heat rejected from the thermal control system is critically important when designing the system. Electrochromic technology as a radiator coating is being investigated to vary the amount of heat rejected by a radiator. Coupon level tests were performed to test the feasibility of this technology. Furthermore, thermal math models were developed to better understand the turndown ratios required by full scale radiator architectures to handle the various operation scenarios encountered during a mission profile for the Altair Lunar Lander. This paper summarizes results from coupon level tests as well as the thermal math models developed to investigate how electrochromics can be used to increase turndown ratios for a radiator. Data from the various design concepts of radiators and their architectures are outlined. Recommendations are made on which electrochromic radiator concept should be carried further for future thermal vacuum testing.

  3. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
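
The Gaussian-to-binary step described above can be sketched directly: generate a spatially correlated Gaussian field (here by smoothing white noise with an anisotropic Gaussian kernel, a simple stand-in for the paper's covariance model), then threshold it so that a prescribed rain occupation rate of the area is raining.

```python
# Hedged sketch: anisotropic correlated Gaussian field thresholded to a binary
# rain/no-rain mask with a prescribed occupation rate.  Grid size and kernel
# widths are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
noise = rng.standard_normal((200, 200))           # 200 x 200 grid of pixels
field = gaussian_filter(noise, sigma=(15, 5))     # anisotropic correlation

occupation_rate = 0.2                              # fraction of area raining
threshold = np.quantile(field, 1.0 - occupation_rate)
raining = field > threshold                        # binary rain mask

realized_rate = raining.mean()                     # ~0.2 by construction
```

Choosing the threshold as a quantile of the field is what lets the occupation-rate distribution estimated from the ARAMIS radar data be imposed exactly on each large-scale realization.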

  4. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    Science.gov (United States)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model (the logical next step in model development), has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework, and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid-scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered as an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced

  5. Irregular dynamics in up and down cortical states.

    Directory of Open Access Journals (Sweden)

    Jorge F Mejias

    Full Text Available Complex coherent dynamics is present in a wide variety of neural systems. A typical example is the voltage transitions between up and down states observed in cortical areas in the brain. In this work, we study this phenomenon via a biologically motivated stochastic model of up and down transitions. The model is constituted by a simple bistable rate dynamics, where the synaptic current is modulated by short-term synaptic processes which introduce stochasticity and temporal correlations. A complete analysis of our model, both with mean-field approaches and numerical simulations, shows the appearance of complex transitions between high (up) and low (down) neural activity states, driven by the synaptic noise, with permanence times in the up state distributed according to a power law. We show that the experimentally observed large fluctuation in up and down permanence times can be explained as the result of sufficiently noisy dynamical synapses with sufficiently large recovery times. Static synapses cannot account for this behavior, nor can dynamical synapses in the absence of noise.
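
The qualitative phenomenon can be reproduced with a much simpler sketch than the paper's synapse model: a bistable rate variable in a double-well potential driven by noise shows stochastic transitions between a low ("down") and a high ("up") activity state. The parameters below are illustrative, not fitted to cortical data.

```python
# Minimal sketch (not the paper's short-term-synapse model): noise-driven
# transitions of a bistable variable between wells at x = -1 (down) and
# x = +1 (up), integrated with Euler-Maruyama.
import numpy as np

rng = np.random.default_rng(7)

def simulate(steps=20000, dt=0.01, noise=0.8):
    x = np.empty(steps)
    x[0] = -1.0                               # start in the down state
    for t in range(1, steps):
        drift = x[t - 1] - x[t - 1] ** 3      # double-well force, wells at +/-1
        x[t] = (x[t - 1] + dt * drift
                + noise * np.sqrt(dt) * rng.standard_normal())
    return x

trace = simulate()
up_fraction = (trace > 0).mean()              # time spent in the up state
```

With the noise term removed the trajectory simply relaxes into one well and stays there, mirroring the paper's point that noiseless dynamics cannot produce the observed transitions.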

  6. Transdisciplinary application of the cross-scale resilience model

    Science.gov (United States)

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  7. Flexible hydrological modeling - Disaggregation from lumped catchment scale to higher spatial resolutions

    Science.gov (United States)

    Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas

    2015-04-01

    Based on an international literature review on model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. They can be spatially organized in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select/change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where first the lumped conceptual model is calibrated using a systematic, data-based approach, followed by a disaggregation step where the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only a few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters, while keeping the relative differences as obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak, low and cumulative runoff totals and sub-flows (at downstream and internal gauging stations). 
For the distributed models, additional
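The disaggregation step described in this record can be sketched in a few lines: the lumped parameter value is spread over grid cells following a spatial catchment characteristic, with one additional calibration factor scaling the absolute spread while the relative pattern is preserved. The function name, the single scaling factor, and the example values below are hypothetical, not the actual model's parameters.

```python
import numpy as np

def disaggregate_parameter(p_lumped, characteristic, scale_factor=1.0):
    """Distribute a lumped model parameter over grid cells.

    The relative differences between cells follow the spatial catchment
    characteristic (e.g. a soil storage index); a single calibration
    factor scales the absolute spread, and the catchment mean of the
    parameter is preserved.
    """
    c = np.asarray(characteristic, dtype=float)
    rel = c / c.mean()                              # relative spatial pattern
    return p_lumped * (1.0 + scale_factor * (rel - 1.0))

# Example: a lumped storage parameter of 100 spread over four cells.
cells = disaggregate_parameter(100.0, [0.5, 1.0, 1.0, 1.5])
print(cells)  # cell values vary with the characteristic; the mean stays 100
```

Setting `scale_factor=0` recovers the lumped value everywhere, which is one way to keep results consistent when moving between levels of spatial detail.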

  8. Diurnal Transcriptome and Gene Network Represented through Sparse Modeling in Brachypodium distachyon

    Directory of Open Access Journals (Sweden)

    Satoru Koda

    2017-11-01

Full Text Available We report the comprehensive identification of periodic genes and their network inference, based on a gene co-expression analysis and an Auto-Regressive eXogenous (ARX) model with a group smoothly clipped absolute deviation (SCAD) method, using a time-series transcriptome dataset in a model grass, Brachypodium distachyon. To reveal the diurnal changes in the transcriptome of B. distachyon, we performed RNA-seq analysis of its leaves sampled through a diurnal cycle of over 48 h at 4 h intervals using three biological replicates, and identified 3,621 periodic genes through our wavelet analysis. These expression data make it feasible to infer network sparsity based on ARX models. We found that genes involved in biological processes such as transcriptional regulation, protein degradation, post-transcriptional modification and photosynthesis are significantly enriched in the periodic genes, suggesting that these processes might be regulated by the circadian rhythm in B. distachyon. On the basis of the time-series expression patterns of the periodic genes, we constructed a chronological gene co-expression network and identified putative transcription factor-encoding genes that might be involved in the time-specific regulatory transcriptional network. Moreover, we inferred a transcriptional network composed of the periodic genes in B. distachyon, aiming to identify genes associated with other genes through variable selection by grouping time points for each gene. Based on the ARX model with the group SCAD regularization using our time-series expression datasets of the periodic genes, we constructed gene networks and found that the networks exhibit a typical scale-free structure. Our findings demonstrate that the diurnal changes in the transcriptome in B. distachyon leaves have a sparse network structure, suggesting a spatiotemporal gene regulatory network over the cyclic phase transitions in B. distachyon diurnal growth.
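The ARX regression structure at the core of this record can be made concrete with a minimal sketch: a plain least-squares ARX(1) fit on synthetic data. The paper's group SCAD regularization, wavelet screening, and gene-expression data are not reproduced; the coefficients below are invented for illustration.

```python
import numpy as np

# ARX(1): y[t] = a*y[t-1] + b*u[t] + e[t], with u an exogenous input
# (in the paper's setting, the expression of another gene).
rng = np.random.default_rng(0)
n = 500
u = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t] + 0.05 * rng.normal()

# Stack regressors (lagged y, current u) and solve by least squares.
X = np.column_stack([y[:-1], u[1:]])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
a_hat, b_hat = theta
print(a_hat, b_hat)  # close to the true values 0.8 and 0.5
```

In the sparse-network setting, the least-squares step is replaced by a penalized fit (group SCAD) so that most exogenous coefficients are driven exactly to zero, which is what yields the sparse, scale-free structure the abstract reports.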

  9. Uncertainties in modelling and scaling of critical flows and pump model in TRAC-PF1/MOD1

    International Nuclear Information System (INIS)

    Rohatgi, U.S.; Yu, Wen-Shi.

    1987-01-01

    The USNRC has established a Code Scalability, Applicability and Uncertainty (CSAU) evaluation methodology to quantify the uncertainty in the prediction of safety parameters by the best estimate codes. These codes can then be applied to evaluate the Emergency Core Cooling System (ECCS). The TRAC-PF1/MOD1 version was selected as the first code to undergo the CSAU analysis for LBLOCA applications. It was established through this methodology that break flow and pump models are among the top ranked models in the code affecting the peak clad temperature (PCT) prediction for LBLOCA. The break flow model bias or discrepancy and the uncertainty were determined by modelling the test section near the break for 12 Marviken tests. It was observed that the TRAC-PF1/MOD1 code consistently underpredicts the break flow rate and that the prediction improved with increasing pipe length (larger L/D). This is true for both subcooled and two-phase critical flows. A pump model was developed from Westinghouse (1/3 scale) data. The data represent the largest available test pump relevant to Westinghouse PWRs. It was then shown through the analysis of CE and CREARE pump data that larger pumps degrade less and also that pumps degrade less at higher pressures. Since the model developed here is based on the 1/3 scale pump and on low pressure data, it is conservative and will overpredict the degradation when applied to PWRs

  10. Comments on intermediate-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-04-23

Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory.

  11. A RELIABILITY TEST USED FOR THE DEVELOPMENT OF A LOYALTY SCALE

    Directory of Open Access Journals (Sweden)

    Florin-Alexandru LUCA

    2017-06-01

    Full Text Available The development of a loyalty model involves the construction of a proper research instrument. For the loyalty model of the clients for financial services, the pre-testing of the research questionnaire represents a significant stage. This article presents the methodology used in this stage for testing the reliability of a loyalty scale. Firstly, this implies choosing the appropriate scales for each variable included in the suggested research model. Secondly, the internal consistency for each of these scales is measured as an indicator of their reliability. The reliability analysis described represents an essential stage in building a measurement instrument for a loyalty model.
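The standard statistic for the internal-consistency check this record describes is Cronbach's alpha; the abstract does not name the statistic or a threshold, so the example below is a generic illustration with invented scores, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Example: three perfectly consistent items (every respondent gives the
# same answer to all three) yield the maximum alpha of 1.
scores = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
print(cronbach_alpha(scores))  # 1.0
```

In questionnaire pre-testing, alpha is computed per scale (one score matrix per latent variable), and scales falling below a chosen cutoff are revised before the main survey.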

  12. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    Science.gov (United States)

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  14. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  15. Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications

    DEFF Research Database (Denmark)

    Liu, Yubao; Warner, Tom; Liu, Yuewei

    2011-01-01

    This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center f...

  16. Down Syndrome

    Science.gov (United States)

    ... Down syndrome increases as a woman gets older. Down syndrome cannot be cured. Early treatment programs can help improve skills. They may include ... occupational, and/or educational therapy. With support and treatment, many ... Down syndrome live happy, productive lives. NIH: National Institute of ...

  17. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high frequency weather to low frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts the weather is a high frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two-parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts and these show some skill even at decadal scales. We also compare
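The fGn process underlying SLIM is fully specified by its autocovariance, which a short sketch can make concrete. This is illustrative only: SLIM itself forecasts via the innovations method, which is not reproduced here, and the Cholesky simulation shown is a generic (exact but O(n^3)) textbook approach, not the paper's.

```python
import numpy as np

def fgn_autocovariance(H, n):
    """Autocovariance of unit-variance fractional Gaussian noise, lags 0..n-1."""
    k = np.arange(n, dtype=float)
    return 0.5 * (np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H) + (k + 1) ** (2 * H))

# Sanity check: for H = 0.5, fGn reduces to white noise
# (lag-0 variance 1, zero covariance at every positive lag).
gamma = fgn_autocovariance(0.5, 5)
print(gamma)

# Simulating fGn: Cholesky factor of the Toeplitz covariance matrix
# applied to a standard normal vector. For H > 0.5 the slowly decaying
# covariance is the "enormous stochastic memory" the abstract exploits.
H, n = 0.9, 256
g = fgn_autocovariance(H, n)
cov = g[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]
sample = np.linalg.cholesky(cov) @ np.random.default_rng(1).standard_normal(n)
```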

  18. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    Energy Technology Data Exchange (ETDEWEB)

    RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; BOLLEN, JOHAN [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory

    2007-01-30

The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  19. Analysis of chromosome aberration data by hybrid-scale models

    International Nuclear Information System (INIS)

    Indrawati, Iwiq; Kumazawa, Shigeru

    2000-02-01

This paper presents a new methodology for analyzing data of chromosome aberrations, which is useful to understand the characteristics of dose-response relationships and to construct the calibration curves for the biological dosimetry. The hybrid scale of linear and logarithmic scales yields a particular plotting paper in which the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, and these are conveniently called hybrid scale models. One can systematically select the best-fit model among the nine models by examining the conditions for a straight line of data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of the increased number of model parameters. We showed that the hybrid-hybrid model (both variables of dose and response using the hybrid scale) provides the best-fit straight lines to be used as the reliable and readable calibration curves of chromosome aberrations. (author)
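The model-selection idea (pick the axis combination on which the data plot as a straight line) can be sketched in simplified form. The sketch below covers only the four pure linear/log axis combinations, not all nine hybrid-scale cases, and the function name and data are hypothetical.

```python
import numpy as np

def best_scale_model(dose, response):
    """Pick the axis pair on which the points are most nearly collinear,
    using |correlation coefficient| of the transformed data as the score."""
    transforms = {"lin": lambda v: v, "log": lambda v: np.log(v)}
    best = None
    for xname, fx in transforms.items():
        for yname, fy in transforms.items():
            r = abs(np.corrcoef(fx(dose), fy(response))[0, 1])
            if best is None or r > best[0]:
                best = (r, f"{xname}-{yname}")
    return best[1]

# A power-law dose-response is linear on log-log axes.
dose = np.array([0.5, 1.0, 2.0, 4.0])
resp = dose ** 2.0
print(best_scale_model(dose, resp))  # log-log
```

The full hybrid scale additionally joins linear and logarithmic segments on a single axis, which is what allows nine, rather than four, straight-line model types.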

  20. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from...... small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sunk into the voids between the stones on the crest. For low overtopping scale effects......

  1. Modeling and validation of on-road CO2 emissions inventories at the urban regional scale

    International Nuclear Information System (INIS)

    Brondfield, Max N.; Hutyra, Lucy R.; Gately, Conor K.; Raciti, Steve M.; Peterson, Scott A.

    2012-01-01

On-road emissions are a major contributor to rising concentrations of atmospheric greenhouse gases. In this study, we applied a downscaling methodology based on commonly available spatial parameters to model on-road CO₂ emissions at the 1 × 1 km scale for the Boston, MA region and tested our approach with surface-level CO₂ observations. Using two previously constructed emissions inventories with differing spatial patterns and underlying data sources, we developed regression models based on impervious surface area and volume-weighted road density that could be scaled to any resolution. We found that the models accurately reflected the inventories at their original scales (R² = 0.63 for both models) and exhibited a strong relationship with observed CO₂ mixing ratios when downscaled across the region. Moreover, the improved spatial agreement of the models over the original inventories confirmed that either product represents a viable basis for downscaling in other metropolitan regions, even with limited data. - Highlights: ► We model two on-road CO₂ emissions inventories using common spatial parameters. ► Independent CO₂ observations are used to validate the emissions models. ► The downscaled emissions models capture the urban spatial heterogeneity of Boston. ► Emissions estimates show a strong non-linear relationship with observed CO₂. ► Our study is repeatable, even in areas with limited data. - This work presents a new, reproducible methodology for downscaling and validating on-road CO₂ emissions estimates.
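The downscaling regression this record describes can be sketched as a two-predictor linear fit at the coarse scale, then evaluated on finer-resolution predictor values. All coefficients and data below are synthetic stand-ins, not the Boston values.

```python
import numpy as np

# Synthetic coarse-cell data: impervious surface area (ISA, fraction) and
# volume-weighted road density (VWRD), with emissions generated from
# invented "true" coefficients 3.0 and 1.2 plus noise.
rng = np.random.default_rng(2)
isa = rng.uniform(0, 1, 200)
vwrd = rng.uniform(0, 5, 200)
emissions = 3.0 * isa + 1.2 * vwrd + rng.normal(0, 0.1, 200)

# Fit emissions ~ a*ISA + b*VWRD + c by ordinary least squares.
X = np.column_stack([isa, vwrd, np.ones_like(isa)])
coef, *_ = np.linalg.lstsq(X, emissions, rcond=None)

# "Downscale": apply the coarse-scale fit to a fine-grid cell's predictors,
# which are available at any resolution (that is the method's key point).
fine_isa, fine_vwrd = 0.8, 2.0
estimate = coef @ [fine_isa, fine_vwrd, 1.0]
print(coef[:2], estimate)  # roughly [3.0, 1.2] and ~4.8
```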

  2. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

Full Text Available Multi-scale climate modelling over Southern Africa using a variable-resolution global model. FA Engelbrecht, WA Landman, CJ Engelbrecht, S Landman, MM Bopape, B Roux, JL McGregor and M Thatcher. Keywords: multi-scale climate modelling, variable-resolution atmospheric model. Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales. Dynamic...

  3. A Physiologically Based, Multi-Scale Model of Skeletal Muscle Structure and Function

    Science.gov (United States)

    Röhrle, O.; Davidson, J. B.; Pullan, A. J.

    2012-01-01

    Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle’s response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modeling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle’s response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modeling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibers and their grouping. Together with a well-established model of motor-unit recruitment, the electro-physiological behavior of single muscle fibers within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenization. The effect of homogenization has been investigated by varying the number of embedded skeletal muscle fibers and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the tibialis anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modeling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behavior ranging from motor-unit recruitment to force generation and fatigue. PMID:22993509

  4. A physiologically based, multi-scale model of skeletal muscle structure and function

    Directory of Open Access Journals (Sweden)

    Oliver eRöhrle

    2012-09-01

    Full Text Available Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle's response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modelling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle's response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modelling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibres and their grouping. Together with a well-established model of motor unit recruitment, the electro-physiological behaviour of single muscle fibres within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenisation. The effect of homogenisation has been investigated by varying the number of embedded skeletal muscle fibres and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the Tibialis Anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modelling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behaviour ranging from motor unit recruitment to force generation and fatigue.

  5. Adult siblings of individuals with Down syndrome versus with autism: findings from a large-scale US survey.

    Science.gov (United States)

    Hodapp, R M; Urbano, R C

    2007-12-01

    As adults with Down syndrome live increasingly longer lives, their adult siblings will most likely assume caregiving responsibilities. Yet little is known about either the sibling relationship or the general functioning of these adult siblings. Using a national, web-based survey, this study compared adult siblings of individuals with Down syndrome to siblings of individuals with autism in terms of a potential 'Down syndrome advantage' and changes across age of the brother/sister with disabilities. Two groups were examined, siblings of persons with Down syndrome (n = 284) and with autism (n = 176). The Adult Sibling Questionnaire measured the number and length of contacts between siblings and their brothers/sisters with disabilities; the warmth, closeness and positiveness of the sibling relationship; and the sibling's overall levels of perceived health, depression and rewards of being a sibling. Compared with siblings of brothers/sisters with autism, siblings of brothers/sisters with Down syndrome showed closer, warmer sibling relationships, along with slightly better health, lower levels of depressive symptoms and more contacts. Across age groups of the brother/sister with disabilities, both groups showed lessened contacts, with less close sibling relationships occurring when brothers/sisters with disabilities were aged 30-44 years and 45 years and older (in Down syndrome) and 45 years and older (in autism). Within both groups, closer sibling relationships were associated with more frequent and lengthy contacts, brothers/sisters with disabilities who were better at maintaining friendships and had lower levels of behavioural/emotional problems, and siblings who felt themselves more rewarded by being a sibling to a brother/sister with disabilities. In line with earlier work on families of children with disabilities, this study shows an advantage for siblings of adults with Down syndrome, in terms of both sibling relationships and of slightly better health and

  6. Neurophysiological bases of exponential sensory decay and top-down memory retrieval: a model.

    Science.gov (United States)

    Zylberberg, Ariel; Dehaene, Stanislas; Mindlin, Gabriel B; Sigman, Mariano

    2009-01-01

Behavioral observations suggest that multiple sensory elements can be maintained for a short time, forming a perceptual buffer which fades after a few hundred milliseconds. Only a subset of this perceptual buffer can be accessed under top-down control and broadcasted to working memory and consciousness. In turn, single-cell studies in awake-behaving monkeys have identified two distinct waves of response to a sensory stimulus: a first transient response largely determined by stimulus properties and a second wave dependent on behavioral relevance, context and learning. Here we propose a simple biophysical scheme which bridges these observations and establishes concrete predictions for neurophysiological experiments in which the temporal interval between stimulus presentation and top-down allocation is controlled experimentally. Inspired by single-cell observations, the model involves a first transient response and a second stage of amplification and retrieval, which are implemented biophysically by distinct operational modes of the same circuit, regulated by external currents. We explicitly investigated the neuronal dynamics, the memory trace of a presented stimulus and the probability of correct retrieval, when these two stages were bracketed by a temporal gap. The model correctly predicts the dependence of performance on response times in interference experiments, suggesting that sensory buffering does not require a specific dedicated mechanism and establishing a direct link between biophysical manipulations and behavioral observations leading to concrete predictions.

  7. Neurophysiological bases of exponential sensory decay and top-down memory retrieval: a model

    Directory of Open Access Journals (Sweden)

    Ariel Zylberberg

    2009-03-01

Full Text Available Behavioral observations suggest that multiple sensory elements can be maintained for a short time, forming a perceptual buffer which fades after a few hundred milliseconds. Only a subset of this perceptual buffer can be accessed under top-down control and broadcasted to working memory and consciousness. In turn, single-cell studies in awake-behaving monkeys have identified two distinct waves of response to a sensory stimulus: a first transient response largely determined by stimulus properties and a second wave dependent on behavioral relevance, context and learning. Here we propose a simple biophysical scheme which bridges these observations and establishes concrete predictions for neurophysiological experiments in which the temporal interval between stimulus presentation and top-down allocation is controlled experimentally. Inspired by single-cell observations, the model involves a first transient response and a second stage of amplification and retrieval, which are implemented biophysically by distinct operational modes of the same circuit, regulated by external currents. We explicitly investigated the neuronal dynamics, the memory trace of a presented stimulus and the probability of correct retrieval, when these two stages were bracketed by a temporal gap. The model correctly predicts the dependence of performance on response times in interference experiments, suggesting that sensory buffering does not require a specific dedicated mechanism and establishing a direct link between biophysical manipulations and behavioral observations leading to concrete predictions.

  8. Tuning magnetotransport in a compensated semimetal at the atomic scale

    Science.gov (United States)

    Wang, Lin; Gutiérrez-Lezama, Ignacio; Barreteau, Céline; Ubrig, Nicolas; Giannini, Enrico; Morpurgo, Alberto F.

    2015-11-01

    Either in bulk form, or in atomically thin crystals, layered transition metal dichalcogenides continuously reveal new phenomena. The latest example is 1T'-WTe2, a semimetal found to exhibit the largest known magnetoresistance in the bulk, and predicted to become a topological insulator in strained monolayers. Here we show that reducing the thickness through exfoliation enables the electronic properties of WTe2 to be tuned, which allows us to identify the mechanisms responsible for the observed magnetotransport down to the atomic scale. The longitudinal resistance and the unconventional magnetic field dependence of the Hall resistance are reproduced quantitatively by a classical two-band model for crystals as thin as six monolayers, whereas a crossover to an Anderson insulator occurs for thinner crystals. Besides establishing the origin of the magnetoresistance of WTe2, our results represent a complete validation of the classical theory for two-band electron-hole transport, and indicate that atomically thin WTe2 layers remain gapless semimetals.
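The classical two-band model validated in this record has a closed form worth writing out: electron and hole conductivities add in the conductivity tensor, which is then inverted to give the measured resistivities. The carrier density and mobility values below are round illustrative numbers, not fitted WTe2 parameters.

```python
import numpy as np

E = 1.602176634e-19  # elementary charge (C)

def two_band_resistivities(B, n_e, n_h, mu_e, mu_h):
    """Classical two-band rho_xx and rho_xy (SI units: m^-3, m^2/(V s), T)."""
    s_xx = E * (n_e * mu_e / (1 + (mu_e * B) ** 2)
                + n_h * mu_h / (1 + (mu_h * B) ** 2))
    s_xy = E * B * (n_h * mu_h ** 2 / (1 + (mu_h * B) ** 2)
                    - n_e * mu_e ** 2 / (1 + (mu_e * B) ** 2))
    denom = s_xx ** 2 + s_xy ** 2
    return s_xx / denom, s_xy / denom  # invert the 2x2 conductivity tensor

# Perfectly compensated case with equal mobilities: the Hall resistivity
# vanishes and the magnetoresistance grows as (mu*B)^2 without saturating,
# the mechanism behind the huge magnetoresistance of semimetals like WTe2.
n, mu = 1e25, 1.0
r0, _ = two_band_resistivities(0.0, n, n, mu, mu)
rB, hall = two_band_resistivities(9.0, n, n, mu, mu)
print(rB / r0 - 1)  # magnetoresistance (mu*B)^2 = 81 at 9 T
print(hall)         # 0.0 for exact compensation
```

Breaking the compensation (n_e slightly different from n_h) makes rho_xy nonzero and nonlinear in B, which is the unconventional Hall dependence the abstract refers to.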

  9. Biointerface dynamics--Multi scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

The irreversible nature of matrix structural changes around the immobilized cell aggregates caused by cell expansion is considered within the Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within the parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in resistance stress generation, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect various multi-scale modeling approaches on a range of time and space scales which have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing resistance stress generation within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. High-Resolution Assimilation of GRACE Terrestrial Water Storage Observations to Represent Local-Scale Water Table Depths

    Science.gov (United States)

    Stampoulis, D.; Reager, J. T., II; David, C. H.; Famiglietti, J. S.; Andreadis, K.

    2017-12-01

    Despite the numerous advances in hydrologic modeling and improvements in Land Surface Models, an accurate representation of the water table depth (WTD) still does not exist. Data assimilation of observations from the joint NASA/DLR Gravity Recovery and Climate Experiment (GRACE) mission leads to statistically significant improvements in the accuracy of hydrologic models, ultimately resulting in more reliable estimates of water storage. However, the usually shallow groundwater compartment of the models presents a problem for GRACE assimilation techniques, as these satellite observations also account for much deeper aquifers. To improve the accuracy of groundwater estimates and allow the representation of the WTD at fine spatial scales, we implemented a novel approach that enables a large-scale data integration system to assimilate GRACE data. This was achieved by augmenting the Variable Infiltration Capacity (VIC) hydrologic model, which is the core component of the Regional Hydrologic Extremes Assessment System (RHEAS), a high-resolution modeling framework developed at the Jet Propulsion Laboratory (JPL) for hydrologic modeling and data assimilation. The model has insufficient subsurface characterization; therefore, to reproduce groundwater variability not only at shallow depths but also in deep aquifers, and to allow GRACE assimilation, a fourth soil layer of varying depth (∼1000 meters) was added in VIC as the bottom layer. To initialize a water table in the model, we used gridded global WTD data at 1 km resolution, which were spatially aggregated to match the model's resolution. Simulations were then performed to test the augmented model's ability to capture seasonal and inter-annual trends of groundwater. The 4-layer version of VIC was run with and without assimilating GRACE Total Water Storage anomalies (TWSA) over the Central Valley in California. 
This is the first-ever assimilation of GRACE TWSA for the determination of realistic water table depths, at

  11. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small-scale neighbourhood effects in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, and the omitted variables are spatially autocorrelated and correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application to new house price data from Helsinki, Finland, we find support for a spatial Durbin model; we estimate the model and interpret the estimates of the summary measures of impacts. The analysis shows that this model structure makes it possible to capture small-scale neighbourhood effects that are known to exist but for which proper measurement variables are lacking.
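    A spatial Durbin model has the form y = ρWy + Xβ + WXθ + ε, where W is a spatial weight matrix over neighbouring observations. The following is a minimal sketch of its reduced form, with an illustrative ring-contiguity W and made-up parameter values (none of these numbers are from the paper):

```python
import numpy as np

# Spatial Durbin model (SDM) sketch:
#   y = rho * W y + X beta + W X theta + eps
# Reduced form: y = (I - rho W)^{-1} (X beta + W X theta + eps)
rng = np.random.default_rng(0)
n = 5

# Row-standardised ring contiguity matrix W: each unit has two neighbours.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

X = rng.normal(size=(n, 2))        # hypothetical covariates (e.g. area, age)
beta = np.array([1.0, -0.5])       # direct effects (illustrative)
theta = np.array([0.3, 0.1])       # spillover effects of neighbours' X
rho = 0.4                          # spatial autoregressive parameter
eps = rng.normal(scale=0.1, size=n)

# Solve the reduced form for y.
y = np.linalg.inv(np.eye(n) - rho * W) @ (X @ beta + W @ X @ theta + eps)
```

The WXθ term is what lets omitted, spatially autocorrelated neighbourhood factors enter through the neighbours' observed covariates.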

  12. Genome-scale analysis of aberrant DNA methylation in colorectal cancer

    Science.gov (United States)

    Hinoue, Toshinori; Weisenberger, Daniel J.; Lange, Christopher P.E.; Shen, Hui; Byun, Hyang-Min; Van Den Berg, David; Malik, Simeen; Pan, Fei; Noushmehr, Houtan; van Dijk, Cornelis M.; Tollenaar, Rob A.E.M.; Laird, Peter W.

    2012-01-01

    Colorectal cancer (CRC) is a heterogeneous disease in which unique subtypes are characterized by distinct genetic and epigenetic alterations. Here we performed comprehensive genome-scale DNA methylation profiling of 125 colorectal tumors and 29 adjacent normal tissues. We identified four DNA methylation–based subgroups of CRC using model-based cluster analyses. Each subtype shows characteristic genetic and clinical features, indicating that they represent biologically distinct subgroups. A CIMP-high (CIMP-H) subgroup, which exhibits an exceptionally high frequency of cancer-specific DNA hypermethylation, is strongly associated with MLH1 DNA hypermethylation and the BRAFV600E mutation. A CIMP-low (CIMP-L) subgroup is enriched for KRAS mutations and characterized by DNA hypermethylation of a subset of CIMP-H-associated markers rather than a unique group of CpG islands. Non-CIMP tumors are separated into two distinct clusters. One non-CIMP subgroup is distinguished by a significantly higher frequency of TP53 mutations and frequent occurrence in the distal colon, while the tumors that belong to the fourth group exhibit a low frequency of both cancer-specific DNA hypermethylation and gene mutations and are significantly enriched for rectal tumors. Furthermore, we identified 112 genes that were down-regulated more than twofold in CIMP-H tumors together with promoter DNA hypermethylation. These represent ∼7% of genes that acquired promoter DNA methylation in CIMP-H tumors. Intriguingly, 48/112 genes were also transcriptionally down-regulated in non-CIMP subgroups, but this was not attributable to promoter DNA hypermethylation. Together, we identified four distinct DNA methylation subgroups of CRC and provided novel insight regarding the role of CIMP-specific DNA hypermethylation in gene silencing. PMID:21659424

  13. Modeling of urban atmospheric pollution and impact on health

    International Nuclear Information System (INIS)

    Myrto, Valari

    2009-10-01

    The goal of this dissertation is to develop a methodology that provides an improved understanding of the associations between atmospheric contaminant concentrations and health impacts. The propagation of uncertainties from input data to the output concentrations through a chemistry transport model was first studied. The influences of the resolution of the meteorological parameters and of the emissions data were studied separately, and their relative roles were compared. It was found that model results do not improve linearly with the resolution of the emission input: a critical resolution was found beyond which the model error becomes higher and the model breaks down. Based on this first investigation of direct downscaling, further research focused on subgrid-scale modeling. A statistical downscaling approach was adopted for modeling subgrid-scale concentration variability due to heterogeneous surface emissions. Emission fractions released from different types of sources (industry, roads, residential, natural, etc.) were calculated from a high-resolution emission inventory. Emission fluxes were then mapped onto surfaces emitting source-specific species. Simulations were run independently over the defined micro-environments, allowing the modeling of subgrid-scale concentration variability. These subgrid-scale concentrations were then combined with demographic and human activity data to provide exposure estimates. The spatial distribution of human exposure was parameterized through a Monte Carlo model. The new information on exposure variability was added to an existing epidemiological model to study relative health risks, using a log-linear Poisson regression model. The principal outcome of the investigation was a new functionality added to the regression model that allows the dissociation of the health risk associated with each pollutant (e.g., NO2 and PM2.5). (author)
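    A log-linear Poisson regression of the kind used for the health-risk analysis models expected daily counts as μ = exp(Xβ), so each coefficient corresponds to a log relative risk. Below is a minimal sketch fitted by Newton scoring (iteratively reweighted least squares) on synthetic data; the pollutant values and coefficients are illustrative, not from the dissertation:

```python
import numpy as np

def fit_poisson(X, y, n_iter=25):
    """Fit a log-linear Poisson regression, mu = exp(X @ beta), by Newton scoring."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())            # start near the marginal mean
    for _ in range(n_iter):
        mu = np.exp(X @ beta)             # expected counts
        grad = X.T @ (y - mu)             # score vector
        H = X.T @ (X * mu[:, None])       # Fisher information
        beta = beta + np.linalg.solve(H, grad)
    return beta

# Synthetic example: daily counts vs. a standardized pollutant level.
rng = np.random.default_rng(1)
n = 500
pollutant = rng.normal(size=n)
X = np.column_stack([np.ones(n), pollutant])
true_beta = np.array([2.0, 0.15])         # exp(0.15) = relative risk per unit
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = fit_poisson(X, y)
```

With several pollutants as columns of X, the fitted coefficients separate the log relative risk attributable to each one, which is the dissociation the dissertation's extension targets.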

  14. Genome-scale biological models for industrial microbial systems.

    Science.gov (United States)

    Xu, Nan; Ye, Chao; Liu, Liming

    2018-04-01

    The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, which predict the interactions among genetic material, enzymes, and metabolites, constitute a systematic and comprehensive platform for analyzing and optimizing microbial growth and the production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their application in improving the industrial microbial fermentation of biological products.

  15. Assessment of Climate Change Impacts on Water Resources in Three Representative Ukrainian Catchments Using Eco-Hydrological Modelling

    Directory of Open Access Journals (Sweden)

    Iulii Didovets

    2017-03-01

    Full Text Available Information about the impact of climate change on river discharge is vitally important for planning adaptation measures, as the future changes can affect different water-related sectors. The main goal of this study was to investigate the potential water resource changes in Ukraine, focusing on three mesoscale river catchments (Teteriv, Upper Western Bug, and Samara) characteristic of different geographical zones. The catchment-scale watershed model—Soil and Water Integrated Model (SWIM)—was set up, calibrated, and validated for the three catchments under consideration. A set of seven GCM-RCM (General Circulation Model-Regional Climate Model) coupled climate scenarios corresponding to RCPs (Representative Concentration Pathways) 4.5 and 8.5 was used to drive the hydrological catchment model. The climate projections used in the study were considered as three combinations of low-, intermediate-, and high-end scenarios. Our results indicate shifts in the seasonal distribution of runoff in all three catchments. The spring high flow occurs earlier as a result of temperature increases and earlier snowmelt. A fairly robust trend is an increase in river discharge in the winter season, and most of the scenarios show a potential decrease in river discharge in the spring.

  16. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
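    The quoted prototype dimensions follow directly from the geometric scale factor. A quick consistency check, using the standard Froude scaling ratios for time and force (these ratios are the usual Froude relations, assumed here rather than stated in the abstract):

```python
import math

# Froude scaling check for the mooring-cable experiment (length scale 1:37.6).
lam = 37.6                        # geometric scale factor (prototype / model)
model_length = 33.0               # m, the experimental chain
prototype_length = model_length * lam   # should be ~1240 m, as in the abstract

time_factor = math.sqrt(lam)      # periods and wave celerities scale as sqrt(lambda)
force_factor = lam ** 3           # forces scale as lambda^3 (same fluid density)
```

Achieving the correct elastic wave celerity at model scale is the hard part, which is why the purpose-scaled chain described in the abstract is noteworthy.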

  17. Parameterization of cirrus microphysical and radiative properties in larger-scale models

    International Nuclear Information System (INIS)

    Heymsfield, A.J.; Coen, J.L.

    1994-01-01

    This study exploits measurements in clouds sampled during several field programs to develop and validate parameterizations that represent the physical and radiative properties of convectively generated cirrus clouds in intermediate- and large-scale models. The focus is on cirrus anvils because they occur frequently, cover large areas, and play a large role in the radiation budget. Preliminary work focuses on understanding the microphysical, radiative, and dynamical processes that occur in these clouds. A detailed microphysical package has been constructed that considers the growth of the following hydrometeor types: water drops, needles, plates, dendrites, columns, bullet rosettes, aggregates, graupel, and hail. Particle growth processes include diffusional and accretional growth, aggregation, sedimentation, and melting. This package is being implemented in a simple dynamical model that tracks the evolution and dispersion of hydrometeors in a stratiform anvil cloud. Given the momentum, vapor, and ice fluxes into the stratiform region and the temperature and humidity structure in the anvil's environment, this model will suggest anvil properties and structure.

  18. Project M: Scale Model of Lunar Landing Site of Apollo 17

    Science.gov (United States)

    O'Brien, Hollie; Crain, Timothy P.

    2010-01-01

    The basis of the project was creating a scale model representation of the Apollo 17 lunar landing site. Vital components included surface slope characteristics, crater sizes and locations, prominent rocks, and lighting conditions. The model was made to support Project M in evaluating approach and terminal descent, as well as in planning surface operations with respect to the terrain. The project had five main milestones. The first was examining the best method for re-creating the Apollo 17 landing site and reviewing the research findings with Dr. Tim Crain and EO staff, which occurred at a meeting on June 25, 2010. The second step was formulating a construction plan, budget, and schedule and presenting the plan for authority to proceed, which occurred on July 6, 2010. The third part was building a prototype to test materials and building processes, completed by July 13, 2010. Next was assembling the landing site model and presenting a mid-term construction status report on July 29, 2010. The fifth and final milestone was demonstrating the model and presenting an exit pitch, which happened on August 4, 2010. The project was very technical: it required a great deal of research on Moon topography, lighting conditions and angles of the Sun on the Moon, Apollo 17, and Autonomous Landing and Hazard Avoidance Technology (ALHAT) before the actual building process could start. This required using spreadsheets, searching internet sources, and conducting personal meetings with project representatives. This information assisted the interns in deciding the scale of the model with respect to cracks, craters, and rocks and their relative sizes, as the objects mentioned could interfere with any of the lunar landers: Apollo, Project M, and future landers. The project concluded with the completion of a three-dimensional scale model of the Apollo 17 lunar landing site. This model assists Project M members because they can now visualize

  19. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Science.gov (United States)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
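    To make the contrast concrete, here is a toy illustration of the two model families: a uni-variable stage-damage function (relative loss from water depth alone) versus a multi-variable model that adds further damage drivers. All curve values and modifiers below are invented for illustration; they are not FLEMO coefficients:

```python
import numpy as np

# Uni-variable stage-damage function: piecewise-linear loss vs. water depth.
depth_grid = np.array([0.0, 0.5, 1.0, 2.0, 3.0])    # water depth (m)
loss_grid = np.array([0.00, 0.10, 0.25, 0.45, 0.60])  # relative building loss

def stage_damage(depth):
    """Relative loss estimated from water depth alone."""
    return np.interp(depth, depth_grid, loss_grid)

def multi_variable(depth, contamination, precaution):
    """Illustrative multi-variable model: depth plus two modifiers.
    contamination in [0, 1] raises the loss; precaution in [0, 1] lowers it."""
    base = stage_damage(depth)
    return float(np.clip(base * (1 + 0.3 * contamination)
                              * (1 - 0.2 * precaution), 0.0, 1.0))
```

Up-scaling such a model to land-use units then requires area-wide estimates of the extra inputs (here, contamination and precaution), which is exactly where the additional uncertainty discussed in the abstract enters.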

  20. Modelling the resilience of rail passenger transport networks affected by large-scale disruptive events : the case of HSR (high speed rail)

    NARCIS (Netherlands)

    Janic, M.

    2018-01-01

    This paper deals with modelling the dynamic resilience of rail passenger transport networks affected by large-scale disruptive events whose impacts degrade the networks' planned infrastructural, operational, economic, and socio-economic performances, represented by the selected indicators.

  1. When supervisors perceive non-work support: test of a trickle-down model.

    Science.gov (United States)

    Wu, Tsung-Yu; Lee, Shao-Jen; Hu, Changya; Yang, Chun-Chi

    2014-01-01

    Using the trickle-down model as the theoretical foundation, we explored whether subordinates' perceived supervisory non-work support (subordinates' PSNS) mediates the relationship between supervisors' perception of higher-level managers' non-work support (supervisors' PSNS) and subordinates' organizational citizenship behaviors. Using dyadic data collected from 132 employees and their immediate supervisors, we found support for the aforementioned mediation process. Furthermore, supervisors' perceived in-group/out-group membership of subordinates moderated the aforementioned supervisors' PSNS-subordinates' PSNS relationship, such that this relationship is stronger for out-group subordinates. Theoretical and practical implications and future research directions are discussed.

  2. Post-Newtonian Dynamical Modeling of Supermassive Black Holes in Galactic-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rantala, Antti; Pihajoki, Pauli; Johansson, Peter H.; Lahén, Natalia; Sawala, Till [Department of Physics, University of Helsinki, Gustaf Hällströmin katu 2a (Finland); Naab, Thorsten, E-mail: antti.rantala@helsinki.fi [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748, Garching (Germany)

    2017-05-01

    We present KETJU, a new extension of the widely used smoothed particle hydrodynamics simulation code GADGET-3. The key feature of the code is the inclusion of algorithmically regularized regions around every supermassive black hole (SMBH). This allows for simultaneously following global galactic-scale dynamical and astrophysical processes, while solving the dynamics of SMBHs, SMBH binaries, and surrounding stellar systems at subparsec scales. The KETJU code includes post-Newtonian terms in the equations of motion of the SMBHs, which enables a new SMBH merger criterion based on the gravitational wave coalescence timescale, pushing the merger separation of SMBHs down to ∼0.005 pc. We test the performance of our code by comparison to NBODY7 and rVINE. We set up dynamically stable multicomponent merger progenitor galaxies to study the SMBH binary evolution during galaxy mergers. In our simulation sample the SMBH binaries do not suffer from the final-parsec problem, which we attribute to the nonspherical shape of the merger remnants. For bulge-only models, the hardening rate decreases with increasing resolution, whereas for models that in addition include massive dark matter halos, the SMBH binary hardening rate becomes practically independent of the mass resolution of the stellar bulge. The SMBHs coalesce on average 200 Myr after the formation of the SMBH binary. However, small differences in the initial SMBH binary eccentricities can result in large differences in the SMBH coalescence times. Finally, we discuss the future prospects of KETJU, which allows for a straightforward inclusion of gas physics in the simulations.
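    A merger criterion based on the gravitational-wave coalescence timescale can be illustrated with the standard Peters (1964) formula for a circular binary. The masses below are illustrative choices, not values from the paper; the point is that at the ∼0.005 pc merger separation the GW timescale is astrophysically short:

```python
import math

# Physical constants (SI).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
pc = 3.086e16        # m
yr = 3.156e7         # s

def t_coalesce(m1, m2, a):
    """Peters (1964) coalescence time of a circular binary:
    t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))."""
    return 5.0 * c**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

m = 1e8 * M_sun                      # two illustrative 10^8 M_sun black holes
t = t_coalesce(m, m, 0.005 * pc)     # at the ~0.005 pc merger separation
t_yr = t / yr                        # of order 10^5 yr for these parameters
```

The steep a^4 dependence is why the criterion only fires once the binary has hardened to subparsec separations.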

  3. New phenomena in the standard no-scale supergravity model

    CERN Document Server

    Kelley, S; Nanopoulos, Dimitri V; Zichichi, Antonino; Kelley, S; Lopez, J L; Nanopoulos, D V; Zichichi, A

    1994-01-01

    We revisit the no-scale mechanism in the context of the simplest no-scale supergravity extension of the Standard Model. This model has the usual five-dimensional parameter space plus an additional parameter ξ_{3/2} ≡ m_{3/2}/m_{1/2}. We show how predictions of the model may be extracted over the whole parameter space. A necessary condition for the potential to be stable is Str M⁴ > 0, which is satisfied if m_{3/2} ≲ 2 m_{q̃}. Order-of-magnitude calculations reveal a no-lose theorem guaranteeing interesting and potentially observable new phenomena in the neutral scalar sector of the theory which would constitute a "smoking gun" of the no-scale mechanism. This new phenomenology is model-independent and divides into three scenarios, depending on the ratio of the weak scale to the vev at the minimum of the no-scale direction. We also calculate the residual vacuum energy at the unification scale (C₀ m⁴_{3/2}), and find that in typical models one must require C₀ > 10. Such constrai...

  4. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  5. The ScaLIng Macroweather Model (SLIMM): using scaling to forecast global-scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-09-01

    On scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; thus, in GCM (general circulation model) macroweather forecasts, the weather is a high-frequency noise. However, neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two-parameter model can perform as well as GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the large stochastic memories that we quantify. Since macroweather temporal (but not spatial) intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the ScaLIng Macroweather Model (SLIMM). SLIMM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the linear inverse modelling - LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes that there is no low-frequency memory, SLIMM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful stochastic forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had disappointing results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent
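    The "huge memory" of fGn that SLIMM exploits can be seen directly in its autocovariance function. A small sketch (H = 0.9 is an illustrative macroweather-like value, not a parameter from the paper) comparing the power-law decay of fGn correlations with the exponential decay an integer-order model such as an AR(1)/LIM would impose:

```python
# Autocorrelation of unit-variance fractional Gaussian noise (fGn):
#   gamma(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))
# For large k this behaves as H(2H-1) k^(2H-2): a slow power-law decay.
def fgn_autocov(k, H):
    return 0.5 * (abs(k + 1) ** (2 * H)
                  - 2 * abs(k) ** (2 * H)
                  + abs(k - 1) ** (2 * H))

H = 0.9                                   # illustrative Hurst exponent
rho = [fgn_autocov(k, H) for k in range(61)]

# An AR(1) process matched to the same lag-1 correlation would decay as
# rho[1]**k, i.e. exponentially; fGn correlations stay usable far longer,
# which is the stochastic memory that SLIMM-style forecasts exploit.
```

For H = 0.9 the lag-60 fGn correlation is still of order 0.3, whereas the matched exponential decay would be numerically negligible at that lag.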

  6. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2016-05-01

    Full Text Available Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.

  7. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. The life sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling, and stochastic molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  8. Measurement of the Time Dependence of Neutron Slowing-Down and Thermalization in Heavy Water

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, E

    1966-03-15

    The behaviour of neutrons during their slowing-down and thermalization in heavy water has been followed on the time scale by measurements of the time-dependent rate of reaction between the flux and the three spectrum indicators indium, cadmium and gadolinium. The space dependence of the reaction rate curves has also been studied. The time-dependent density at 1.46 eV is well reproduced by a function, given by von Dardel, and a time for the maximum density of 7.1 ± 0.3 μs has been obtained for this energy in deuterium gas in agreement with the theoretical value of 7.2 μs. The spatial variation of this time is in accord with the calculations by Claesson. The slowing-down time to 0.2 eV has been found to be 16.3 ± 2.4 μs. The approach to the equilibrium spectrum takes place with a time constant of 33 ± 4 μs, and the equilibrium has been established after about 200 μs. Comparison of the measured curves for cadmium and gadolinium with multigroup calculations of the time-dependent flux and reaction rate show the superiority of the scattering models for heavy water of Butler and of Brown and St. John over the mass-2 gas model. The experiment has been supplemented with Monte Carlo calculations of the slowing-down time.

  9. Measurement of the Time Dependence of Neutron Slowing-Down and Thermalization in Heavy Water

    International Nuclear Information System (INIS)

    Moeller, E.

    1966-03-01

    The behaviour of neutrons during their slowing-down and thermalization in heavy water has been followed on the time scale by measurements of the time-dependent rate of reaction between the flux and the three spectrum indicators indium, cadmium and gadolinium. The space dependence of the reaction rate curves has also been studied. The time-dependent density at 1.46 eV is well reproduced by a function, given by von Dardel, and a time for the maximum density of 7.1 ± 0.3 μs has been obtained for this energy in deuterium gas in agreement with the theoretical value of 7.2 μs. The spatial variation of this time is in accord with the calculations by Claesson. The slowing-down time to 0.2 eV has been found to be 16.3 ± 2.4 μs. The approach to the equilibrium spectrum takes place with a time constant of 33 ± 4 μs, and the equilibrium has been established after about 200 μs. Comparison of the measured curves for cadmium and gadolinium with multigroup calculations of the time-dependent flux and reaction rate show the superiority of the scattering models for heavy water of Butler and of Brown and St. John over the mass-2 gas model. The experiment has been supplemented with Monte Carlo calculations of the slowing-down time.
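    The quoted figures are mutually consistent: if the spectrum relaxes exponentially with the measured 33 μs time constant (the exponential form is a modelling assumption made here, not a statement from the report), then 200 μs corresponds to roughly six time constants, leaving well under 1% of the transient:

```python
import math

# Consistency check on the thermalization figures in the abstract.
tau_us = 33.0                          # measured spectral time constant (us)
t_us = 200.0                           # quoted time to reach equilibrium (us)

n_time_constants = t_us / tau_us       # ~6 e-folding times
remaining = math.exp(-t_us / tau_us)   # fraction of the transient left (~0.2%)
```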

  10. Qsub(N) approximation for slowing-down in fast reactors

    International Nuclear Information System (INIS)

    Rocca-Volmerange, Brigitte.

    1976-05-01

    An accurate and simple determination of the neutron energy spectra in fast reactors poses several problems. The slowing-down models (Fermi, Wigner, Goertzel-Greuling...), which are different forms of the approximation with order N=0, may prove inaccurate, in spite of recent improvements. A new method of approximation is presented which turns out to be a method of higher order: the Qsub(N) method. It is characterized by a rapid convergence with respect to the order N, by the use of some global parameters to represent the slowing-down, and by the expression of the Boltzmann integral equation in a differential formalism. Numerous tests verify that, for the order N=2 or 3, the method gives precision equivalent to that of multigroup numerical integration for the spectra, with greatly reduced calculational effort. Furthermore, since the Qsub(N) expressions are a kind of synthesis method, they allow calculation of the spatial Green's function, or the use of collision probabilities to find the flux. Both possibilities have been introduced into existing reactor codes: EXCALIBUR, TRALOR, RE MINEUR... Some applications to multi-zone media (core, blanket, reflector of the Masurca pile and exponential slabs) are presented in the isotropic collision approximation. The case of linearly anisotropic collisions is theoretically resolved. [fr]

  11. A binomial modeling approach for upscaling colloid transport under unfavorable conditions: organic prediction of extended tailing

    Science.gov (United States)

    Hilpert, Markus; Rasmuson, Anna; Johnson, William

    2017-04-01

    Transport of colloids in saturated porous media is significantly influenced by colloidal interactions with grain surfaces. Colloids in the near-surface fluid domain experience relatively low fluid drag and relatively strong colloidal forces that slow their down-gradient translation relative to colloids in the bulk fluid. Near-surface fluid domain colloids may re-enter the bulk fluid via diffusion (nanoparticles) or expulsion at rear flow stagnation zones, they may immobilize (attach) via strong primary minimum interactions, or they may move along a grain-to-grain contact to the near-surface fluid domain of an adjacent grain. We introduce a simple model that accounts for all possible permutations of mass transfer within a dual pore and grain network. The primary phenomena thereby represented in the model are mass transfer of colloids between the bulk and near-surface fluid domains and immobilization onto grain surfaces. Colloid movement is described by a sequence of trials in a series of unit cells, and the binomial distribution is used to calculate the probabilities of each possible sequence. Pore-scale simulations provide mechanistically determined likelihoods and timescales associated with the above pore-scale colloid mass transfer processes, whereas the network-scale model employs pore and grain topology to determine probabilities of transfer from up-gradient bulk and near-surface fluid domains to down-gradient bulk and near-surface fluid domains. Inter-grain transport of colloids in the near-surface fluid domain can cause extended tailing.
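The binomial bookkeeping described in this abstract can be sketched in a few lines. The sketch below is illustrative only: the number of unit-cell trials and the per-cell transfer probability are hypothetical stand-ins for the mechanistically determined pore-scale values the authors derive from simulation.

```python
from math import comb

def sequence_probabilities(n_cells, p_event):
    """Binomial probabilities of k 'successes' (e.g. bulk-to-near-surface
    transfers) over n_cells independent unit-cell trials."""
    return [comb(n_cells, k) * p_event**k * (1 - p_event)**(n_cells - k)
            for k in range(n_cells + 1)]

# Hypothetical parameters: 10 unit cells, 20% per-cell transfer likelihood.
probs = sequence_probabilities(10, 0.2)
print(len(probs))  # 11 possible outcomes: k = 0 .. 10
```

Summing the returned list gives 1, since the outcomes exhaust all possible sequences of trials.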

  12. The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y

    2002-01-01

    The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of the thermal hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated against a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes, and (2) a fully-coupled TH model of the repository which includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six "submodels" which are combined in a manner that reduces the complexity of modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach: first, the execution of submodels, and second, the assembly of submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages, including a mountain-scale influence.

  13. Scaled Experimental Modeling of VHTR Plenum Flows

    Energy Technology Data Exchange (ETDEWEB)

    ICONE 15

    2007-04-01

    The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Power (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. Various scaled heated gas and water flow facilities were investigated for modeling VHTR upper and lower plenum flows during the decay heat portion of a pressurized conduction-cooldown scenario and for modeling thermal mixing and stratification (“thermal striping”) in the lower plenum during normal operation. It was concluded, based on phenomena scaling, instrumentation, and other practical considerations, that a heated water flow scale model facility is preferable to a heated gas flow facility and to unheated facilities which use fluids with ranges of density to simulate the density effect of heating. For a heated water flow lower plenum model, both the Richardson numbers and Reynolds numbers may be approximately matched for conduction-cooldown natural circulation conditions. Thermal mixing during normal operation may be simulated, but at lower (though still fully turbulent) Reynolds numbers than in the prototype. Natural circulation flows in the upper plenum may also be simulated in a separate heated water flow facility that uses the same plumbing as the lower plenum model. However, Reynolds number scaling distortions will occur at matching Richardson numbers, due primarily to the necessity of using fewer channels connected to the plenum than in the prototype (which has approximately 11,000 core channels connected to the upper plenum) in an otherwise geometrically scaled model. Experiments conducted in either or both facilities will meet the objectives of providing benchmark data for the validation of codes proposed for NGNP designs and safety studies, as well as providing a better understanding of the complex flow phenomena in the plenums.
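The dimensionless matching argument above can be made concrete. The sketch below uses illustrative fluid properties and scales (not the facility's design values) to show how matching the Richardson number between prototype and scale model fixes the model velocity once a length scale and temperature difference are chosen.

```python
def richardson(g, beta, dT, L, U):
    """Richardson number Ri = g*beta*dT*L / U**2 (buoyancy vs. inertia)."""
    return g * beta * dT * L / U**2

def reynolds(U, L, nu):
    """Reynolds number Re = U*L / nu."""
    return U * L / nu

# Illustrative prototype values: expansivity 3e-3 1/K, 50 K difference,
# 2 m plenum height, 0.5 m/s circulation velocity.
Ri_target = richardson(9.81, 3.0e-3, 50.0, 2.0, 0.5)

# Hypothetical water model at 1:4 length scale with a 20 K difference:
# matching Ri_target determines the required model velocity.
beta_w, dT_m, L_m = 2.1e-4, 20.0, 0.5
U_m = (9.81 * beta_w * dT_m * L_m / Ri_target) ** 0.5
print(round(U_m, 4))
```

The same velocity then sets the model Reynolds number via `reynolds(U_m, L_m, nu)`, which generally cannot be matched simultaneously; this is the scaling distortion the abstract refers to.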

  14. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    , increases weathering and erosion around the headland, and eventually changes the headland into an embayment! Improvements to our modeling approach include refining the initial conditions. To create a fractal, immature rocky coastline, self-similar river networks with random side branches were drawn on the shoreline domain. River networks and side branches were scaled according to Horton's law and Tokunaga statistics, respectively, and each river pathway was assigned a simple exponential longitudinal profile. Topography was generated around the river networks to create drainage basins and, on a larger scale, represent a mountainous, fluvially-sculpted landscape. The resultant morphology was then flooded to a given elevation, leaving a fractal rocky coastline. In addition to the simulated terrain, actual digital elevation models will also be used to derive the initial conditions. Elevation data from different mountainous geomorphic settings such as the decaying Appalachian Mountains or actively uplifting Sierra Nevada can be effectively flooded to a given sea level, resulting in a fractal and immature coastline that can be input to the numerical model. This approach will offer insight into how rocky coastlines in different geomorphic settings evolve, and provide a useful complement to results using the simulated terrain.

  15. A dual theory of price and value in a meso-scale economic model with stochastic profit rate

    Science.gov (United States)

    Greenblatt, R. E.

    2014-12-01

    The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
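The eigenvector character of the price solution can be illustrated with a deterministic toy system; the paper's stochastic meso-scale network is richer. The input-output matrix below is hypothetical, and the sketch uses the classical form in which prices supporting a uniform profit rate are a left Perron eigenvector of the matrix — a stand-in for the paper's balance equations, not its exact system.

```python
import numpy as np

# Hypothetical 3-commodity input-output matrix: A[i, j] is the amount of
# commodity i consumed to produce one unit of commodity j.
A = np.array([[0.2, 0.3, 0.1],
              [0.4, 0.1, 0.2],
              [0.1, 0.2, 0.3]])

# Classical sketch: prices p satisfying p = (1 + r) p A for a uniform
# profit rate r are a left eigenvector of A; the Perron eigenvalue lam
# fixes r through 1 / (1 + r) = lam.
eigvals, eigvecs = np.linalg.eig(A.T)
k = np.argmax(eigvals.real)
p = np.abs(eigvecs[:, k].real)
p /= p.sum()                       # normalise prices to sum to one
r = 1.0 / eigvals[k].real - 1.0    # implied uniform profit rate
print(p.round(4), round(r, 4))
```

For a positive matrix, Perron-Frobenius theory guarantees the eigenvector is strictly positive, so the resulting "prices" are economically meaningful.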

  16. Modelling pollutant deposition to vegetation: scaling down from the canopy to the biochemical level

    International Nuclear Information System (INIS)

    Taylor, G.E. Jr.; Constable, J.V.H.

    1994-01-01

    In the atmosphere, pollutants exist in either the gas, particle or liquid (rain and cloud water) phase. The most important gas-phase pollutants from a biological or ecological perspective are oxides of nitrogen (nitrogen dioxide, nitric acid vapor), oxides of sulfur (sulfur dioxide), ammonia, tropospheric ozone and mercury vapor. For liquid or particle phase pollutants, the suite of pollutants is varied and includes hydrogen ion, multiple heavy metals, and select anions. For many of these pollutants, plant canopies are a major sink within continental landscapes, and deposition is highly dependent on the (i) physical form or phase of the pollutant, (ii) meteorological conditions above and within the plant canopy, and (iii) physiological or biochemical properties of the leaf, both on the leaf surface and within the leaf interior. In large measure, the physical and chemical processes controlling deposition at the meteorological and whole-canopy levels are well characterized and have been mathematically modelled. In contrast, the processes operating on the leaf surface and within the leaf interior are not well understood and are largely specific for individual pollutants. The availability of process-level models to estimate deposition is discussed briefly at the canopy and leaf level; however, the majority of effort is devoted to modelling deposition at the leaf surface and leaf interior using the two-layer stagnant film model. This model places a premium on information of a physiological and biochemical nature, and highlights the need to distinguish clearly between the measurements of atmospheric chemistry and the physiologically effective exposure since the two may be very dissimilar. A case study of deposition in the Los Angeles Basin is used to demonstrate the modelling approach, to present the concept of exposure dynamics in the atmosphere versus that in the leaf interior, and to document the principle that most forest canopies are exposed to multiple chemical

  17. Actual Leisure Participation of Norwegian Adolescents with Down Syndrome

    Science.gov (United States)

    Dolva, Anne-Stine; Kleiven, Jo; Kollstad, Marit

    2014-01-01

    This article reports the actual participation in leisure activities by a sample of Norwegian adolescents with Down syndrome aged 14. Representing a first generation to grow up in a relatively inclusive context, they live with their families, attend mainstream schools, and are part of common community life. Leisure information was obtained in…

  18. Biology meets Physics: Reductionism and Multi-scale Modeling of Morphogenesis

    DEFF Research Database (Denmark)

    Green, Sara; Batterman, Robert

    2017-01-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale ... modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent.

  19. A two-step combination of top-down and bottom-up fire emission estimates at regional and global scales: strengths and main uncertainties

    Science.gov (United States)

    Sofiev, Mikhail; Soares, Joana; Kouznetsov, Rostislav; Vira, Julius; Prank, Marje

    2016-04-01

    Top-down emission estimation via inverse dispersion modelling is used for various problems where bottom-up approaches are difficult or highly uncertain. One such area is the estimation of emissions from wild-land fires. In combination with dispersion modelling, satellite and/or in-situ observations can, in principle, be used to efficiently constrain the emission values. This is the main strength of the approach: the a priori values of the emission factors (based on laboratory studies) are refined for real-life situations using the inverse-modelling technique. However, the approach also has major uncertainties, which are illustrated here with a few examples from the Integrated System for wild-land Fires (IS4FIRES). IS4FIRES generates the smoke emission and injection profile from MODIS and SEVIRI active-fire radiative energy observations. The emission calculation includes two steps: (i) an initial top-down calibration of emission factors via inverse dispersion problem solution, made once using a training dataset from the past; (ii) application of the obtained emission coefficients to individual-fire radiative energy observations, thus leading to a bottom-up emission compilation. For such a procedure, the major classes of uncertainties include: (i) imperfect information on fires, (ii) simplifications in the fire description, (iii) inaccuracies in the smoke observations and modelling, and (iv) inaccuracies of the inverse problem solution. Using examples of the fire seasons of 2010 in Russia, 2012 in Eurasia, 2007 in Australia, etc., it is pointed out that a top-down system calibration performed for a limited number of comparatively moderate cases (often the best-observed ones) may lead to errors when applied to extreme events. For instance, the total emission of the 2010 Russian fires is likely to be over-estimated by up to 50% if the calibration is based on the season 2006 and the fire description is simplified. Longer calibration period and more sophisticated parameterization
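The two-step logic above (top-down calibration, then bottom-up application) can be caricatured as a one-parameter least-squares fit. All numbers and the linear model response are hypothetical; IS4FIRES solves a full inverse dispersion problem rather than this toy regression.

```python
import numpy as np

# Step (i), top-down: calibrate one emission factor ef so the modelled
# smoke load (assumed linear in ef) best fits observations, in least squares.
fre = np.array([2.0, 5.0, 3.5, 8.0])   # fire radiative energy, training set
obs = np.array([1.1, 2.4, 1.9, 4.2])   # observed smoke loads (toy units)
response = 0.5 * fre                   # modelled load per unit emission factor
ef = (response @ obs) / (response @ response)

# Step (ii), bottom-up: apply the calibrated factor to new fire observations.
emissions = ef * np.array([4.0, 12.0])
print(round(ef, 3), emissions.round(2))
```

The sketch also makes the abstract's caveat visible: `ef` inherits whatever biases are present in the training set, so applying it to events far outside that set can misestimate total emissions.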

  20. Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors

    Directory of Open Access Journals (Sweden)

    G. Yang

    2003-06-01

    Examples of current research into systematic errors in climate models are used to demonstrate the importance of scale interactions on diurnal, intraseasonal and seasonal timescales for the mean and variability of the tropical climate system. This work has enabled some conclusions to be drawn about possible processes that may need to be represented, and some recommendations to be made regarding model improvements. It has been shown that the Maritime Continent heat source is a major driver of the global circulation, yet it is poorly represented in GCMs. A new climatology of the diurnal cycle has been used to provide compelling evidence of important land-sea breeze and gravity wave effects, which may play a crucial role in the heat and moisture budget of this key region for the tropical and global circulation. The role of the diurnal cycle has also been emphasized for intraseasonal variability associated with the Madden-Julian Oscillation (MJO). It is suggested that the diurnal cycle in Sea Surface Temperature (SST) during the suppressed phase of the MJO leads to a triggering of cumulus congestus clouds, which serve to moisten the free troposphere and hence precondition the atmosphere for the next active phase. It has been further shown that coupling between the ocean and atmosphere on intraseasonal timescales leads to a more realistic simulation of the MJO. These results stress the need for models to be able to simulate, firstly, the observed tri-modal distribution of convection, and secondly, the coupling between the ocean and atmosphere on diurnal to intraseasonal timescales. It is argued, however, that the current representation of the ocean mixed layer in coupled models is not adequate to represent the complex structure of the observed mixed layer, in particular the formation of salinity barrier layers, which can potentially provide much stronger local coupling between the atmosphere and ocean on diurnal to intraseasonal timescales.

  1. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    Science.gov (United States)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has substantially improved over the last several decades. The complex urban dynamics processes are most effectively modeled not at the macro-scale, but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic with environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential dynamics behaviors are autonomously implemented by household agents. These two levels of simulation interact and jointly drive the urbanization process in an urban area of the city of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank, and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can give full play to the potential of the ABM to understand the driving

  2. Scale up, then power down

    International Nuclear Information System (INIS)

    Pichon, Max

    2011-01-01

    The University of Queensland has switched on what it says is Australia's largest solar photovoltaic installation, a 1.2MW system that spans 11 rooftops at the St Lucia campus. The UQ Solar Array, which effectively coats four buildings with more than 5,000 polycrystalline silicon solar panels, will generate about 1,850MWh a year. “During the day, the system will provide up to six per cent of the university's power requirements, reducing greenhouse gas emissions by approximately 1,650 tonnes of CO2-e per annum,” said Rodger Whitby, the GM of generation for renewables company Ingenero. It also underpins a number of cutting-edge research projects in diverse fields, according to Professor Paul Meredith, who oversaw the design and installation of the solar array. “A major objective of our array research program is to provide a clearer understanding of how to integrate megawatt-scale renewable energy sources into an urban grid,” said Professor Meredith, of the School of Mathematics and Physics and the Global Change Institute. “Mid-size, commercial-scale renewable power generating systems like UQ's will become increasingly common in urban and remote areas. Addressing the engineering issues around how these systems can feed into and integrate with the grid is essential so that people can really understand and calculate their value as we transition to lower-emission forms of energy.” Electricity retailer Energex contributed $90,000 to the research project through state-of-the-art equipment to allow high-quality monitoring and analysis of the power feed. Another key research project addresses one of the most common criticisms of solar power: that it cannot replace baseload grid power. Through a partnership with Brisbane electricity storage technology company RedFlow, a 200kW battery bank will be connected to a 339kW section of the solar array. “The RedFlow system uses next-generation zinc bromine batteries,” Professor Meredith said.

  3. Statistical Examination of the Resolution of a Block-Scale Urban Drainage Model

    Science.gov (United States)

    Goldstein, A.; Montalto, F. A.; Digiovanni, K. A.

    2009-12-01

    Stormwater drainage models are utilized by cities in order to plan retention systems to prevent combined sewer overflows and to design for development. These models aggregate subcatchments and ignore small pipelines, providing a coarse representation of the sewer network. This study evaluates the importance of resolution by comparing two models developed on a neighborhood scale for predicting the total quantity and peak flow of runoff to observed runoff measured at the site. The low and high resolution models were designed for a 2.6 ha block in the Bronx, NYC in the EPA Storm Water Management Model (SWMM) using a single catchment and separate subcatchments based on surface cover, respectively. The surface covers represented included sidewalks, street, buildings, and backyards. Characteristics of physical surfaces and the infrastructure in the high resolution model were determined from site visits, sewer pipe maps, aerial photographs, and GIS data-sets provided by the NYC Department of City Planning. Since the low resolution model was depicted at a coarser scale, generalizations were assumed about the overall average characteristics of the catchment. Rainfall and runoff data were monitored over a four-month period during the summer rainy season. A total of 53 rainfall events were recorded, but only 29 storms produced significant amounts of runoff to be evaluated in the simulations. To determine which model was more accurate at predicting the observed runoff, three characteristics of each storm were compared: peak runoff, total runoff, and time to peak. Two statistical tests were used to determine the significance of the results: the percent difference for each storm and the overall chi-squared goodness-of-fit distribution for both the low and high resolution models. These tests evaluate whether there is a statistically significant difference depending on the resolution scale of the stormwater model. The scale of representation is being evaluated because it could have a profound impact on

  4. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
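A minimal sketch of the kind of landmark-driven non-linear warp involved, using a generic cubic radial basis function. This is a stand-in illustration of non-linear scaling, not the authors' SSM-based registration pipeline; all landmark coordinates below are hypothetical.

```python
import numpy as np

def rbf_warp(src, dst, pts, reg=1e-8):
    """Warp pts with a cubic radial-basis interpolant fitted to landmark
    pairs src -> dst (a generic non-linear scaling, illustrative only)."""
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    w = np.linalg.solve(d**3 + reg * np.eye(len(src)), dst - src)
    dq = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    return pts + dq**3 @ w

# Toy landmarks on a 'reference bone' and their subject-specific positions.
rng = np.random.default_rng(0)
src = rng.uniform(0, 1, (6, 3))
dst = src + np.array([0.05, -0.02, 0.1])   # hypothetical displacement
muscle_path = rng.uniform(0, 1, (4, 3))    # points defining a muscle path
warped = rbf_warp(src, dst, muscle_path)
print(warped.shape)
```

By construction the warp reproduces the landmark correspondences (up to the tiny regularisation term) while interpolating smoothly in between, which is the property that lets digitised muscle paths follow a morphed bone surface.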

  5. Age and Pattern of Intellectual Decline among Down Syndrome and Other Mentally Retarded Adults.

    Science.gov (United States)

    Gibson, David; And Others

    1988-01-01

    A study of 18 Down Syndrome and 18 other mentally retarded adults found evidence of a significant erosion of Wechsler Intelligence Scale for Children scores from the third to fourth decades of life. The Block Design subtest was especially vulnerable to performance decline with age in the Down Syndrome adults. (Author/JDD)

  6. Properties of Brownian Image Models in Scale-Space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup

    2003-01-01

    In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f² power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix ...
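The 1/f² spectral law lends itself to a compact simulation: shaping white Gaussian noise with a 1/f Fourier amplitude yields an image whose power spectrum falls off as 1/f², i.e. a discrete Brownian image in the sense above. A minimal sketch (the image size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# White Gaussian noise, shaped in the Fourier domain by a 1/f amplitude so
# that the power spectrum falls off as 1/f^2 (the Brownian image model).
f = np.fft.fftfreq(n)
fx, fy = np.meshgrid(f, f)
radius = np.sqrt(fx**2 + fy**2)
radius[0, 0] = 1.0                  # avoid division by zero at the DC bin
spectrum = np.fft.fft2(rng.standard_normal((n, n))) / radius
spectrum[0, 0] = 0.0                # force a zero-mean image
img = np.fft.ifft2(spectrum).real
print(img.shape)
```

Replacing the exponent on `radius` (1/f^H with H between 0 and 1 on the amplitude) gives the fractional Brownian variants mentioned in the abstract.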

  7. Spatiotemporal exploratory models for broad-scale survey data.

    Science.gov (United States)

    Fink, Daniel; Hochachka, Wesley M; Zuckerberg, Benjamin; Winkler, David W; Shaby, Ben; Munson, M Arthur; Hooker, Giles; Riedewald, Mirek; Sheldon, Daniel; Kelling, Steve

    2010-12-01

    The distributions of animal populations change and evolve through time. Migratory species exploit different habitats at different times of the year. Biotic and abiotic features that determine where a species lives vary due to natural and anthropogenic factors. This spatiotemporal variation needs to be accounted for in any modeling of species' distributions. In this paper we introduce a semiparametric model that provides a flexible framework for analyzing dynamic patterns of species occurrence and abundance from broad-scale survey data. The spatiotemporal exploratory model (STEM) adds essential spatiotemporal structure to existing techniques for developing species distribution models through a simple parametric structure without requiring a detailed understanding of the underlying dynamic processes. STEMs use a multi-scale strategy to differentiate between local and global-scale spatiotemporal structure. A user-specified species distribution model accounts for spatial and temporal patterning at the local level. These local patterns are then allowed to "scale up" via ensemble averaging to larger scales. This makes STEMs especially well suited for exploring distributional dynamics arising from a variety of processes. Using data from eBird, an online citizen science bird-monitoring project, we demonstrate that monthly changes in distribution of a migratory species, the Tree Swallow (Tachycineta bicolor), can be more accurately described with a STEM than a conventional bagged decision tree model in which spatiotemporal structure has not been imposed. We also demonstrate that there is no loss of model predictive power when a STEM is used to describe a spatiotemporal distribution with very little spatiotemporal variation: the distribution of a nonmigratory species, the Northern Cardinal (Cardinalis cardinalis).
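The "scale up via ensemble averaging" idea can be caricatured in one dimension: fit simple local models on randomly shifted partitions of the domain and average their predictions. Everything below (the toy occurrence data, partition width, and local mean model) is a hypothetical stand-in for STEM's user-specified base model and spatiotemporal support sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy occurrence data along a 1-D "transect" (stand-in for space-time).
x = rng.uniform(0, 10, 500)
y = (np.sin(x) > 0).astype(float)   # hypothetical occurrence pattern

def local_mean_model(x, y, edges):
    """Fit a piecewise-constant 'local' model on a given partition."""
    idx = np.digitize(x, edges)
    means = np.array([y[idx == i].mean() if (idx == i).any() else y.mean()
                      for i in range(len(edges) + 1)])
    return lambda xq: means[np.digitize(xq, edges)]

# STEM-style ensemble: average many local models fit on shifted partitions.
xq = np.linspace(0, 10, 200)
preds = []
for _ in range(25):
    shift = rng.uniform(0, 2)
    edges = np.arange(shift, 10, 2)
    preds.append(local_mean_model(x, y, edges)(xq))
ensemble = np.mean(preds, axis=0)
print(ensemble.shape)
```

Averaging over the randomly placed partitions smooths away the arbitrary block boundaries of any single local model while preserving the local structure each one captured.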

  8. On the potential of models for location and scale for genome-wide DNA methylation data.

    Science.gov (United States)

    Wahl, Simone; Fenske, Nora; Zeilinger, Sonja; Suhre, Karsten; Gieger, Christian; Waldenberger, Melanie; Grallert, Harald; Schmid, Matthias

    2014-07-03

    With the help of epigenome-wide association studies (EWAS), increasing knowledge on the role of epigenetic mechanisms such as DNA methylation in disease processes is obtained. In addition, EWAS aid the understanding of behavioral and environmental effects on DNA methylation. In terms of statistical analysis, specific challenges arise from the characteristics of methylation data. First, methylation β-values represent proportions with skewed and heteroscedastic distributions. Thus, traditional modeling strategies assuming a normally distributed response might not be appropriate. Second, recent evidence suggests that not only mean differences but also variability in site-specific DNA methylation associates with diseases, including cancer. The purpose of this study was to compare different modeling strategies for methylation data in terms of model performance and performance of downstream hypothesis tests. Specifically, we used the generalized additive models for location, scale and shape (GAMLSS) framework to compare beta regression with Gaussian regression on raw, binary logit and arcsine square root transformed methylation data, with and without modeling a covariate effect on the scale parameter. Using simulated and real data from a large population-based study and an independent sample of cancer patients and healthy controls, we show that beta regression does not outperform competing strategies in terms of model performance. In addition, Gaussian models for location and scale showed an improved performance as compared to models for location only. The best performance was observed for the Gaussian model on binary logit transformed β-values, referred to as M-values. Our results further suggest that models for location and scale are specifically sensitive towards violations of the distribution assumption and towards outliers in the methylation data. Therefore, a resampling procedure is proposed as a mode of inference and shown to diminish type I error rate in
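The binary logit ("M-value") transform favored by the study's best-performing model is simple to state. The helper below is a generic sketch: the `eps` guard against β-values of exactly 0 or 1 is a common convention, not necessarily the paper's exact choice.

```python
import math

def beta_to_m(beta, eps=1e-6):
    """Binary-logit (M-value) transform: M = log2(beta / (1 - beta))."""
    b = min(max(beta, eps), 1 - eps)   # guard against beta of exactly 0 or 1
    return math.log2(b / (1 - b))

def m_to_beta(m):
    """Inverse transform, back to the proportion (beta-value) scale."""
    return 2**m / (2**m + 1)

print(beta_to_m(0.5))   # 50% methylation maps to M = 0
```

The transform maps the bounded, heteroscedastic proportion scale onto an unbounded scale that better suits Gaussian location-scale models.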

  9. Developments in regional scale simulation: modelling ecologically sustainable development in the Northern Territory

    International Nuclear Information System (INIS)

    Moffatt, I.

    1992-01-01

    This paper outlines one way in which researchers can make a positive methodological contribution to the debate on ecologically sustainable development (ESD) by integrating dynamic modelling and geographical information systems to form the basis for regional-scale simulations. Some of the orthodox uses of Geographic Information Systems (GIS) are described, and it is argued that most applications do not incorporate process-based causal models. A description of a pilot study into developing a process-based model of ESD in the Northern Territory is given. This dynamic, process-based simulation model consists of two regions, namely the 'Top End' and the 'Central' district. Each region consists of ten sub-sectors, and the pattern of land use represents a sector common to both regions. The role of environmental defence expenditure, including environmental rehabilitation of uranium mines, in the model is noted. Similarly, it is hypothesized that exogenous changes such as the greenhouse effect and global economic fluctuations can have a differential impact on the behaviour of several sectors of the model. Some of the problems associated with calibrating and testing the model are reviewed. Finally, it is suggested that further refinement of this model can be achieved with the pooling of data sets and the development of PC-based transputers for more detailed and accurate regional-scale simulations. When fully developed, it is anticipated that this pilot model can be of service to environmental managers and other groups involved in promoting ESD in the Northern Territory. 54 refs., 6 figs

  10. Using of Video Modeling in Teaching a Simple Meal Preparation Skill for Pupils of Down Syndrome

    Science.gov (United States)

    AL-Salahat, Mohammad Mousa

    2016-01-01

    The current study aimed to identify the impact of video modeling upon teaching three pupils with Down syndrome the skill of preparing a simple meal (sandwich), where the training was conducted in a separate classroom in schools of normal students. The training consisted of (i) watching the video of an intellectually disabled pupil, who is…

  11. Logarithmic corrections to scaling in the XY2-model

    International Nuclear Information System (INIS)

    Kenna, R.; Irving, A.C.

    1995-01-01

    We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form for the density of these zeroes. Assuming that finite-size scaling holds, we show that there have to exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae and we identify them numerically. The method presented here can be used to check the compatibility of scaling behaviour of odd and even thermodynamic functions in other models too. ((orig.))

  12. Multi-scale Modeling of Plasticity in Tantalum.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)

    2015-12-01

    In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single-crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single- and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behavior are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test, which are compared with experiments. In order to validate the proposed tantalum CP-FEM model against experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. 
Furthermore, direct
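
    The temperature- and strain-rate-dependent yield stress described above can be illustrated with a generic Kocks-type thermal-activation law of the kind commonly paired with kink-pair theory. All parameter values below are rough illustrative assumptions, not the calibrated Sandia constants for tantalum:

```python
import math

# Illustrative parameters (assumed, not calibrated values for Ta):
k_B = 8.617e-5        # Boltzmann constant, eV/K
dH0 = 0.8             # kink-pair activation enthalpy, eV (assumed)
sigma_p = 1000.0      # thermal (Peierls) stress at 0 K, MPa (assumed)
sigma_a = 50.0        # athermal stress component, MPa (assumed)
rate0 = 1.0e7         # reference strain rate, 1/s (assumed)
p, q = 0.5, 1.5       # activation-profile exponents (typical values)

def yield_stress(T, strain_rate):
    """Kocks-type law: the thermal part of the flow stress decays as
    thermal activation helps dislocations overcome the Peierls barrier,
    vanishing entirely once the activation term reaches 1."""
    x = (k_B * T / dH0) * math.log(rate0 / strain_rate)
    x = min(max(x, 0.0), 1.0)  # clamp to the physically meaningful range
    return sigma_a + sigma_p * (1.0 - x ** (1.0 / q)) ** (1.0 / p)

# Yield stress drops with temperature and rises with strain rate:
print(yield_stress(300.0, 1e-3))  # room temperature, slow loading
print(yield_stress(600.0, 1e-3))  # hotter -> only the athermal part remains
print(yield_stress(300.0, 1e+0))  # faster loading -> higher yield stress
```

    This captures the qualitative BCC behavior the report exploits: strong temperature and rate sensitivity at low temperature, saturating to an athermal plateau at high temperature.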

  13. The use of TOUGH2 for the LBL/USGS 3-dimensional site-scale model of Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Bodvarsson, G.; Chen, G.; Haukwa, C.; Kwicklis, E.

    1995-01-01

    The three-dimensional site-scale numerical model of the unsaturated zone at Yucca Mountain is under continuous development and calibration through a collaborative effort between Lawrence Berkeley Laboratory (LBL) and the United States Geological Survey (USGS). The site-scale model covers an area of about 30 km² and is bounded by major fault zones to the west (Solitario Canyon Fault), east (Bow Ridge Fault) and perhaps to the north by an unconfirmed fault (Yucca Wash Fault). The model consists of about 5,000 grid blocks (elements) with nearly 20,000 connections between them; the grid was designed to represent the most prevalent geological and hydro-geological features of the site, including major faults and the layering and bedding of the hydro-geological units. Submodels are used to investigate specific hypotheses and their importance before incorporation into the three-dimensional site-scale model. The primary objectives of the three-dimensional site-scale model are to: (1) quantify moisture, gas and heat flows under the ambient conditions at Yucca Mountain; (2) help guide the site-characterization effort (primarily by the USGS) in terms of additional data needs and identify regions of the mountain where sufficient data have been collected; and (3) provide a reliable model of Yucca Mountain that is validated by repeated predictions of conditions in new boreholes and the ESF, and that therefore has the confidence of the public and scientific community. The computer code TOUGH2, developed by K. Pruess at LBL, was used along with the three-dimensional site-scale model to generate these results. In this paper, we also describe the three-dimensional site-scale model, emphasizing the numerical grid development, and then show some results in terms of moisture, gas and heat flow

  14. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology.

    Science.gov (United States)

    Tompkins, Adrian M; Ermert, Volker

    2013-02-18

    The relative roles of climate variability and population-related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles, which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Model simulations of the entomological inoculation rate and circumsporozoite protein rate compare well to data from field studies at a wide range of locations in West Africa that encompass both seasonal endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine-spatial-resolution regional integrations for Eastern Africa reproduce the Malaria Atlas Project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model broadly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities, probably due to the neglect of population migration. A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions.
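
    A common way dynamical malaria models make the parasite life cycle temperature dependent is a degree-day relation for sporogony. The sketch below uses Detinova's classic constants (111 degree-days above 16 °C for Plasmodium falciparum) purely for illustration; they are not necessarily this model's exact parameterization:

```python
def sporogonic_duration_days(temp_c, dd=111.0, t_min=16.0):
    """Degree-day model for sporogonic development: the parasite
    completes development in the mosquito after accumulating `dd`
    degree-days above the threshold `t_min` (classic textbook values,
    assumed here for illustration)."""
    if temp_c <= t_min:
        return float('inf')   # development stalls below the threshold
    return dd / (temp_c - t_min)

print(sporogonic_duration_days(25.0))  # 111/9 ≈ 12.3 days
```

    Because the duration diverges near the threshold, small temperature differences in highland or fringe areas translate into large changes in transmission timing, which is why the life cycles must be finely resolved to capture the lag between the rains and the malaria season.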

  15. Continuously distributed magnetization profile for millimeter-scale elastomeric undulatory swimming

    Science.gov (United States)

    Diller, Eric; Zhuang, Jiang; Zhan Lum, Guo; Edwards, Matthew R.; Sitti, Metin

    2014-04-01

    We have developed a millimeter-scale magnetically driven swimming robot for untethered motion at mid to low Reynolds numbers. The robot is propelled by continuous undulatory deformation, which is enabled by the distributed magnetization profile of a flexible sheet. We demonstrate control of a prototype device and measure deformation and speed as a function of magnetic field strength and frequency. Experimental results are compared with simple magnetoelastic and fluid propulsion models. The presented mechanism provides an efficient remote actuation method at the millimeter scale that may be suitable for further scaling down in size for micro-robotics applications in biotechnology and healthcare.

  16. An ultra-fine group slowing down benchmark

    International Nuclear Information System (INIS)

    Ganapol, B. D.; Maldonado, G. I.; Williams, M. L.

    2009-01-01

    We suggest a new solution to the neutron slowing-down equation in terms of multi-energy panels. Our motivation is to establish a computational benchmark featuring an ultra-fine group calculation, where the number of groups could be on the order of 100,000. While the CENTRM code of the SCALE code package has been shown to adequately treat this many groups, there is always a need for additional verification. The multi-panel solution principle is simply to consider the slowing-down region as a sequence of panels, each containing a manageable number of groups, say 100. In this way, we reduce the enormity of dealing with the entire spectrum all at once by considering many smaller problems. We demonstrate the solution in the unresolved resonance region of U3O8. (authors)
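
    The panel decomposition described above amounts to partitioning the ultra-fine group index range into contiguous blocks, each solved in sequence with the out-scatter source of one panel feeding the next. The sketch below shows only the bookkeeping; the function name and interface are illustrative, not taken from CENTRM/SCALE:

```python
def panels(n_groups, panel_size):
    """Split an ultra-fine group structure into contiguous panels.

    Each panel is a (start, stop) half-open index range. Because
    neutrons only slow down (no up-scatter in this regime), the panels
    can be solved one after another, each a problem of manageable size.
    """
    return [(lo, min(lo + panel_size, n_groups))
            for lo in range(0, n_groups, panel_size)]

# 100,000 groups in panels of 100 -> 1,000 small sub-problems
p = panels(100_000, 100)
print(len(p))        # 1000
print(p[0], p[-1])   # (0, 100) (99900, 100000)
```

    The downward-only coupling is what makes the sweep exact rather than iterative: no information ever flows from a lower-energy panel back to a higher-energy one.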

  17. Probabilistic, meso-scale flood loss modelling

    Science.gov (United States)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al., submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken, on the one hand, via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A.; Kreibich, H.; Merz, B.; Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.

  18. Two-dimensional divertor modeling and scaling laws

    International Nuclear Information System (INIS)

    Catto, P.J.; Connor, J.W.; Knoll, D.A.

    1996-01-01

    Two-dimensional numerical models of divertors contain large numbers of dimensionless parameters that must be varied to investigate all operating regimes of interest. To simplify the task and gain insight into divertor operation, we employ similarity techniques to investigate whether model systems of equations plus boundary conditions in the steady state admit scaling transformations that lead to useful divertor similarity scaling laws. A short mean free path neutral-plasma model of the divertor region below the x-point is adopted in which all perpendicular transport is due to the neutrals. We illustrate how the results can be used to benchmark large computer simulations by employing a modified version of UEDGE which contains a neutral fluid model. (orig.)

  19. A Physically-based Model For Rainfall-triggered Landslides At A Regional Scale

    Science.gov (United States)

    Teles, V.; Capolongo, D.; Bras, R. L.

    Rainfall has long been recognized as a major cause of landslides. Historical records show that heavy rainfall can generate hundreds of landslides over hundreds of square kilometers. Although a great body of work has documented the morphology and mechanics of individual slope failures, few studies have considered the process at the basin and regional scale. A landslide model is integrated in the landscape evolution model CHILD and simulates rainfall-triggered events based on a geotechnical index, the factor of safety, which takes into account the slope, the soil effective cohesion and weight, the friction angle, the regolith thickness and the saturated thickness. The saturated thickness is represented by the wetness index developed in TOPMODEL. The topography is represented by a Triangulated Irregular Network (TIN), and the factor of safety is computed at each node of the TIN. If the factor of safety is lower than 1, a landslide is initiated at that node and the regolith is moved downstream. We applied the model to the Fortore basin, whose valley cuts the flysch terrain that constitutes the framework of the so-called 'sub-Apennine' chain, the easternmost part of the Southern Apennines (Italy). We discuss the model's value in terms of its sensitivity to the parameters used and compare it with the actual data available for this basin.
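
    The per-node factor-of-safety calculation is commonly written in the infinite-slope form shown below. This is a generic sketch of that standard index, with illustrative parameter values that are not calibrated for the Fortore basin:

```python
import math

def factor_of_safety(slope_deg, cohesion, soil_depth, rel_sat,
                     phi_deg=30.0, gamma_s=18000.0, gamma_w=9810.0):
    """Infinite-slope factor of safety (SI units; defaults assumed):

        FS = C / (gamma_s * z * sin t * cos t)
           + (1 - m * gamma_w / gamma_s) * tan(phi) / tan(t)

    where t is the slope angle, C the effective cohesion [Pa], z the
    regolith thickness [m], phi the friction angle, and m the relative
    saturation (saturated thickness / regolith thickness), which the
    model derives from a TOPMODEL-style wetness index. FS < 1 triggers
    a landslide at the node.
    """
    t = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    term_c = cohesion / (gamma_s * soil_depth * math.sin(t) * math.cos(t))
    term_f = (1.0 - rel_sat * gamma_w / gamma_s) * math.tan(phi) / math.tan(t)
    return term_c + term_f

# Saturation destabilizes: the same node is stable dry, unstable saturated.
fs_dry = factor_of_safety(35.0, 3000.0, 1.0, 0.0)
fs_wet = factor_of_safety(35.0, 3000.0, 1.0, 1.0)
print(fs_dry, fs_wet)
```

    Coupling `rel_sat` to rainfall through the wetness index is what turns this static stability criterion into a rainfall-triggered landslide model.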

  20. Relationship between the climbing up and climbing down stairs domain scores on the FES-DMD, the score on the Vignos Scale, age and timed performance of functional activities in boys with Duchenne muscular dystrophy

    Directory of Open Access Journals (Sweden)

    Lilian A. Y. Fernandes

    2014-12-01

    BACKGROUND: Knowing the potential for and limitations of information generated using different evaluation instruments favors the development of more accurate functional diagnoses and therapeutic decision-making. OBJECTIVE: To investigate the relationship between the number of compensatory movements when climbing up and going down stairs, age, functional classification and the time taken to perform a tested activity (TA) of going up and down stairs in boys with Duchenne muscular dystrophy (DMD). METHOD: A bank of movies featuring 30 boys with DMD performing functional activities was evaluated. Compensatory movements were assessed using the climbing up and going down stairs domains of the Functional Evaluation Scale for Duchenne Muscular Dystrophy (FES-DMD); age in years; functional classification using the Vignos Scale (VS); and TA using a timer. Statistical analyses were performed using the Spearman correlation test. RESULTS: There is a moderate relationship between the climbing up stairs domain of the FES-DMD and age (r=0.53, p=0.004) and strong relationships with VS (r=0.72, p=0.001) and TA for this task (r=0.83, p<0.001). There were weak relationships between the going down stairs domain of the FES-DMD and age (r=0.40, p=0.032), VS (r=0.65, p=0.002) and TA for this task (r=0.40, p=0.034). CONCLUSION: These findings indicate that the evaluation of compensatory movements used when climbing up stairs can provide more relevant information about the evolution of the disease, although the activity of going down stairs should also be investigated, with the aim of enriching guidance and strengthening accident prevention. Data from the FES-DMD, age, VS and TA can be used in a complementary way to formulate functional diagnoses. Longitudinal studies with broader age groups may supplement this information.
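
    The Spearman test used for all the correlations above is simply the Pearson correlation applied to ranks, which is why it captures monotonic but nonlinear associations between scale scores, age and timed performance. A minimal numpy-only sketch (not the statistical package the authors used):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with tied values receiving their average rank."""
    def rank(v):
        v = np.asarray(v, dtype=float)
        order = v.argsort()
        ranks = np.empty_like(v)
        ranks[order] = np.arange(1, len(v) + 1)
        for u in np.unique(v):          # average ranks over ties
            mask = v == u
            ranks[mask] = ranks[mask].mean()
        return ranks
    rx, ry = rank(x), rank(y)
    return np.corrcoef(rx, ry)[0, 1]

# Perfectly monotonic (even if nonlinear) data gives rho = 1.
print(spearman_rho([1, 2, 3, 4], [1, 4, 9, 16]))  # 1.0
```

    Rank-based correlation is a natural choice here because the Vignos grade is ordinal and the FES-DMD scores need not relate linearly to age or timed performance.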