Energy Technology Data Exchange (ETDEWEB)
Yago, K; Endo, H [Ship Research Inst., Tokyo (Japan)
1997-12-31
The hydroelastic response test was carried out in waves using an approximately 10 m long large model, and the numerical analysis was done by the direct method, for a commercial-size (300 m long) box-shaped floating structure with shallow draft. The scale ratio of the model is 1/30.8, and the minimum wave period is around 0.7 s, set by the wave-making capacity of the tank, which corresponds to 4 to 14 s for the commercial-size structure. Elastic displacement and bending strain were measured. The results calculated by the direct method are in good agreement with the observed results. The hydrodynamic mutual-interference effects between elements are weak in added mass but strong in damping force, indicating that the range of mutual interference is strongly related to the rolling period. Wave pressure on the bottom of the floating structure is high on the upwave side and decreases sharply toward the downwave side. However, the response amplitude of elastic displacement tends to increase at the ends, on both the upwave and downwave sides. For the floating structure studied, the 0th to 4th mode components are predominant in longitudinal waves, and the 6th and higher mode components are negligibly small. 21 refs., 15 figs., 2 tabs.
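The correspondence between the 0.7 s model wave period and the quoted 4 to 14 s full-scale range follows from Froude similarity, under which time scales with the square root of the geometric scale ratio. A minimal sketch (the function name and the Froude assumption are ours, not stated in the abstract):

```python
import math

def froude_scale_period(model_period_s, scale_ratio):
    """Convert a model wave period to prototype scale under Froude similarity.

    Froude scaling preserves the ratio of inertial to gravitational forces,
    so time (and hence wave period) scales with the square root of the
    geometric scale ratio.
    """
    return model_period_s * math.sqrt(scale_ratio)

# Scale ratio 1/30.8 and minimum model wave period ~0.7 s, as in the abstract.
print(froude_scale_period(0.7, 30.8))  # ~3.9 s, at the bottom of the 4-14 s range
```

The 0.7 s tank limit thus maps to roughly 4 s at full scale, consistent with the lower end of the quoted range.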
International Nuclear Information System (INIS)
Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.
1989-01-01
Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed on the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from tests of the material's resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During the cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs
Small scale models equal large scale savings
International Nuclear Information System (INIS)
Lee, R.; Segroves, R.
1994-01-01
A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage, and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented, for the La Salle 2 and the Dresden 1 and 2 BWRs. In each case the cost-effectiveness and exposure reduction due to the use of a scale model are demonstrated. (UK)
International Nuclear Information System (INIS)
Rutqvist, J.
2004-01-01
This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes on seepage and support in the model reports ''Seepage Model for PA Including Drift Collapse'' and ''Abstraction of Drift Seepage'', and to support arguments for exclusion of features, events, and processes (FEPs) in the analysis reports ''Features, Events, and Processes in Unsaturated Zone Flow and Transport'' and ''Features, Events, and Processes: Disruptive Events''. The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the THM coupled processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because a sufficient amount of water must be available within a
International Symposia on Scale Modeling
Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori
2015-01-01
This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical processes in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: Enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale Offers engineers and designers a new point of view, liberating creative and inno...
Scale modelling in LMFBR safety
International Nuclear Information System (INIS)
Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.
1979-01-01
This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratio of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of material for the model is one with the same density and stress-strain relationship as the prototype at the operating temperature. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used to assure that heat transfer between high-temperature loading sources and the coolant simulant and non-equilibrium effects in two-phase sources are accounted for. For the geometry and flow conditions of interest, the
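For a replica model of the kind described (same density and stress-strain curve as the prototype), dimensional analysis yields a standard set of model-to-prototype scale factors. A short illustrative sketch (generic replica-scaling relations for impulsively loaded structures, not taken from this paper):

```python
def replica_scaling(length_ratio):
    """Scale factors (model/prototype) for a replica model built from a
    material with the same density and stress-strain relationship as the
    prototype.

    Standard dimensional-analysis results: stresses, strains and velocities
    are preserved, times shrink with length, and energies with its cube.
    """
    lam = length_ratio
    return {
        "length": lam,
        "stress": 1.0,
        "strain": 1.0,
        "velocity": 1.0,
        "time": lam,
        "energy": lam ** 3,
    }

factors = replica_scaling(1 / 10)  # e.g. a hypothetical 1/10-scale vessel model
print(factors["time"], factors["energy"])
```

With a 1/10-scale model, transients run ten times faster and the energy source must deliver one thousandth of the prototype energy, which is why source calibration matters so much.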
Global scale groundwater flow model
Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc
2013-04-01
As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minutes resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths against available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.
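The cell-by-cell steady-state head computation such a model performs can be illustrated with a one-dimensional toy solver (our own simplified sketch; MODFLOW itself solves a layered 3-D problem with far richer boundary conditions):

```python
def steady_heads_1d(n, dx, transmissivity, recharge, h_left, h_right, iters=20000):
    """Steady-state 1-D groundwater heads by finite differences.

    Solves T * d2h/dx2 + R = 0 with fixed-head boundaries using Gauss-Seidel
    iteration -- a toy version of the cell-based balance a code like MODFLOW
    solves on each grid cell.
    """
    # Initial guess: linear interpolation between the boundary heads.
    h = [h_left + (h_right - h_left) * i / (n - 1) for i in range(n)]
    src = recharge * dx * dx / transmissivity  # recharge source term per cell
    for _ in range(iters):
        for i in range(1, n - 1):
            h[i] = 0.5 * (h[i - 1] + h[i + 1] + src)
    return h

# Hypothetical numbers: 2 km aquifer strip, uniform recharge, equal boundary heads.
heads = steady_heads_1d(n=21, dx=100.0, transmissivity=50.0, recharge=1e-4,
                        h_left=10.0, h_right=10.0)
print(max(heads))  # water-table mound between the two fixed-head boundaries
```

For this symmetric case the analytic mound height is R*L**2/(8*T) above the boundary head, which the iteration reproduces.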
Holographic models with anisotropic scaling
Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.
2013-12-01
We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.
A multi scale model for small scale plasticity
International Nuclear Information System (INIS)
Zbib, Hussein M.
2002-01-01
Full text. A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. In particular, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length-scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band
Scaling laws for modeling nuclear reactor systems
International Nuclear Information System (INIS)
Nahavandi, A.N.; Castellana, F.S.; Moradkhanian, E.N.
1979-01-01
Scale models are used to predict the behavior of nuclear reactor systems during normal and abnormal operation as well as under accident conditions. Three types of scaling procedures are considered: time-reducing, time-preserving volumetric, and time-preserving idealized model/prototype. The necessary relations between the model and the full-scale unit are developed for each scaling type. Based on these relationships, it is shown that scaling procedures can lead to distortion in certain areas that are discussed. It is advised that, depending on the specific unit to be scaled, a suitable procedure be chosen to minimize model-prototype distortion
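The trade-off between the time-reducing and time-preserving procedures named above comes down to which kinematic quantity is held fixed. A sketch of the generic relations (illustrative only; the paper's specific model/prototype relations are not reproduced here):

```python
def scaling_ratios(length_ratio, time_preserving):
    """Illustrative model-to-prototype ratios for two scaling choices.

    A time-preserving procedure keeps the model's transient time scale equal
    to the prototype's (time ratio 1), so velocities shrink with length; a
    time-reducing procedure lets time shrink with length instead, preserving
    velocities. Generic kinematic relations, not the paper's derivations.
    """
    lam = length_ratio
    if time_preserving:
        return {"length": lam, "time": 1.0, "velocity": lam}
    return {"length": lam, "time": lam, "velocity": 1.0}

print(scaling_ratios(0.25, time_preserving=True))
print(scaling_ratios(0.25, time_preserving=False))
```

Since no single choice preserves every similarity group, each procedure distorts some phenomena, which is why the paper advises matching the procedure to the unit being scaled.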
Spatial scale separation in regional climate modelling
Energy Technology Data Exchange (ETDEWEB)
Feser, F.
2005-07-01
In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea behind this is that global and regional climate models have their best performance at different spatial scales. Therefore the regional model should not alter the global model's results at large scales. The concept of nudging of large scales, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the different simulations compared to the standard-approach ensemble, which occasionally shows large differences between the individual realisations. For climate hindcasts this method leads to results which are on average closer to observed states than the standard approach. The analysis of the regional climate model simulation can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than analysing unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter
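The separation of a field into large-scale and small-scale parts can be sketched with the simplest possible low-pass filter (a box-car moving average; the thesis uses purpose-built digital filters, so this is only an illustration of the idea):

```python
import math

def separate_scales(field, window):
    """Split a 1-D field into large-scale and small-scale parts using a
    box-car (moving-average) low-pass filter. By construction the two
    parts sum back to the original field."""
    n, half = len(field), window // 2
    large = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        large.append(sum(field[lo:hi]) / (hi - lo))
    small = [f - g for f, g in zip(field, large)]
    return large, small

# Synthetic field: a long wave (period 100) plus a short wave (period 8).
field = [math.sin(2 * math.pi * i / 100) + 0.3 * math.sin(2 * math.pi * i / 8)
         for i in range(200)]
large, small = separate_scales(field, window=15)
```

The `large` part approximates the long wave that should stay close to the forcing model, while `small` carries the regional detail where added value is sought.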
Modeling and simulation with operator scaling
Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan
2010-01-01
Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...
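The defining feature of operator scaling, a different scale factor per coordinate, can be shown by applying the matrix power c**E for a diagonal exponent matrix E. A minimal sketch (exponent values are illustrative assumptions, not from the paper):

```python
def operator_scale(vec, exponents, c):
    """Apply the operator scaling c**E (diagonal exponent matrix E) to a
    vector: each coordinate gets its own scale factor c**E[j], in contrast
    to classical self-similarity, where a single factor c**H multiplies
    every coordinate."""
    return [v * c ** e for v, e in zip(vec, exponents)]

# Hurst-like exponent 0.5 in x and 0.9 in y: rescaling "time" by c = 4
# stretches x by 4**0.5 = 2 but y by the larger factor 4**0.9.
print(operator_scale([1.0, 1.0], [0.5, 0.9], c=4.0))
```

A process X is operator self-similar when X(ct) has the same distribution as c**E applied to X(t); the simulation methods in the paper produce sample paths with exactly this anisotropic behaviour.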
Modelling of rate effects at multiple scales
DEFF Research Database (Denmark)
Pedersen, R.R.; Simone, A.; Sluys, L. J.
2008-01-01
At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from the micro-scale.
Large Scale Computations in Air Pollution Modelling
DEFF Research Database (Denmark)
Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998
One-scale supersymmetric inflationary models
International Nuclear Information System (INIS)
Bertolami, O.; Ross, G.G.
1986-01-01
The reheating phase is studied in a class of supergravity inflationary models involving a two-component hidden sector in which the scale of supersymmetry breaking and the scale generating inflation are related. It is shown that these models have an ''entropy crisis'' in which a large entropy release after nucleosynthesis leads to unacceptably low nuclear abundances. (orig.)
Multi-scale modeling of composites
DEFF Research Database (Denmark)
Azizi, Reza
A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale ... Hill-Mandel's energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain conditions. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic ... to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.
On scaling of human body models
Directory of Open Access Journals (Sweden)
Hynčík L.
2007-10-01
Full Text Available The human body is not a unique being: everyone differs from the point of view of anthropometry and mechanical characteristics, which means that dividing the human population into categories like 5th-, 50th- and 95th-percentile is, from the application point of view, not enough. On the other hand, the development of a particular human body model for each of us is not possible. That is why scaling and morphing algorithms have started to be developed. The current work describes the development of a tool for scaling of human models. The idea is to have one (or a couple of) standard model(s) as a base and to create other models based on these basic models. One has to choose adequate anthropometric and biomechanical parameters that describe the given group of humans to be scaled and morphed.
Multi-scale Modeling of Arctic Clouds
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded in each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
Site-Scale Saturated Zone Flow Model
International Nuclear Information System (INIS)
G. Zyvoloski
2003-01-01
The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure AP-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'', MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County wells lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca Mountain using FEHM V 2.20 are being
Design of scaled down structural models
Simitses, George J.
1994-07-01
In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.
Comments on intermediate-scale models
Energy Technology Data Exchange (ETDEWEB)
Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.
1987-04-23
Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory.
Managing large-scale models: DBS
International Nuclear Information System (INIS)
1981-05-01
A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application to large scale models and data bases
Scaled Experimental Modeling of VHTR Plenum Flows
Energy Technology Data Exchange (ETDEWEB)
ICONE 15
2007-04-01
The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Power (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. Various scaled heated gas and water flow facilities were investigated for modeling VHTR upper and lower plenum flows during the decay heat portion of a pressurized conduction-cooldown scenario and for modeling thermal mixing and stratification (“thermal striping”) in the lower plenum during normal operation. It was concluded, based on phenomena scaling and instrumentation and other practical considerations, that a heated water flow scale model facility is preferable to a heated gas flow facility and to unheated facilities which use fluids with ranges of density to simulate the density effect of heating. For a heated water flow lower plenum model, both the Richardson numbers and Reynolds numbers may be approximately matched for conduction-cooldown natural circulation conditions. Thermal mixing during normal operation may be simulated, but at lower, though still fully turbulent, Reynolds numbers than in the prototype. Natural circulation flows in the upper plenum may also be simulated in a separate heated water flow facility that uses the same plumbing as the lower plenum model. However, Reynolds number scaling distortions will occur at matching Richardson numbers, due primarily to the necessity of using a reduced number of channels connected to the plenum than in the prototype (which has approximately 11,000 core channels connected to the upper plenum) in an otherwise geometrically scaled model. Experiments conducted in either or both facilities will meet the objectives of providing benchmark data for the validation of codes proposed for NGNP designs and safety studies, as well as providing a better understanding of the complex flow phenomena in the plenums.
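The scaling compromise described, matching Richardson number while letting Reynolds number fall (though remaining turbulent), involves only two dimensionless groups. A sketch with hypothetical water-model numbers (all values below are illustrative assumptions, not taken from the report):

```python
def richardson(g, beta, delta_t, length, velocity):
    """Bulk Richardson number, Ri = g*beta*dT*L / U**2: the ratio of
    buoyancy to inertial effects that governs stratification."""
    return g * beta * delta_t * length / velocity ** 2

def reynolds(velocity, length, kinematic_viscosity):
    """Reynolds number, Re = U*L / nu: inertia versus viscosity."""
    return velocity * length / kinematic_viscosity

# Hypothetical heated-water lower-plenum model: matching Ri fixes the
# velocity scale; Re then comes out smaller than the prototype's but
# should remain fully turbulent.
g, beta, dT, L, U, nu = 9.81, 2.1e-4, 20.0, 0.5, 0.05, 1.0e-6
print(richardson(g, beta, dT, L, U), reynolds(U, L, nu))
```

Because Ri scales as L/U**2 while Re scales as U*L, both cannot generally be matched at reduced scale, which is the distortion the abstract refers to.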
Biointerface dynamics--Multi scale modeling considerations.
Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko
2015-08-01
The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in the generation of resistance stress, which has a feedback impact on: (1) single and collective cell migration, (2) cell deformation and orientation, (3) the decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect various multi-scale modeling approaches over a range of time and space scales which have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing the generation of polymer matrix resistance stress within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.
Complex scaling in the cluster model
International Nuclear Information System (INIS)
Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.
1987-01-01
To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation the complex scaling requires minor changes in the formulae and code. The finding of the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in ⁸Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs
Geometrical scaling vs factorizable eikonal models
Kiang, D
1975-01-01
Among various theoretical explanations or interpretations for the experimental data on the differential cross-sections of elastic proton-proton scattering at CERN ISR, the following two seem to be most remarkable: A) the excellent agreement of the Chou-Yang model prediction of dσ/dt with data at √s = 53 GeV; B) the general manifestation of geometrical scaling (GS). The paper confronts GS with eikonal models with factorizable opaqueness, with special emphasis on the Chou-Yang model. (12 refs).
Probabilistic, meso-scale flood loss modelling
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2016-04-01
Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during recent years, they are still not standard practice for flood risk assessments, let alone for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al., submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A.; Kreibich, H.; Merz, B.; Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
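The bootstrap-aggregation ("bagging") idea behind such probabilistic loss models can be sketched in a few lines. The stump regressor, the depth-loss data, and the ensemble size below are invented for illustration; this is the generic technique, not the BT-FLEMO implementation:

```python
# Bagging sketch: each bootstrap replicate fits a one-split regression stump
# on (water depth -> relative loss); the ensemble of predictions forms a
# distribution rather than a point estimate. Data are hypothetical.
import random
import statistics

def fit_stump(xs, ys):
    """One-split regression tree: choose the threshold minimising squared
    error and predict the leaf means."""
    best = None
    for t in sorted(set(xs)):
        lo = [y for x, y in zip(xs, ys) if x <= t]
        hi = [y for x, y in zip(xs, ys) if x > t]
        if not lo or not hi:
            continue
        m_lo, m_hi = statistics.mean(lo), statistics.mean(hi)
        err = sum((y - m_lo) ** 2 for y in lo) + sum((y - m_hi) ** 2 for y in hi)
        if best is None or err < best[0]:
            best = (err, t, m_lo, m_hi)
    if best is None:                         # degenerate bootstrap sample
        m = statistics.mean(ys)
        return lambda x: m
    _, t, m_lo, m_hi = best
    return lambda x: m_lo if x <= t else m_hi

random.seed(1)
depth = [0.2, 0.4, 0.6, 0.9, 1.2, 1.5, 1.8, 2.2]          # water depths [m]
loss = [0.02, 0.05, 0.08, 0.15, 0.22, 0.30, 0.33, 0.40]   # loss ratios

ensemble = []
for _ in range(200):                         # 200 bootstrap replicates
    idx = [random.randrange(len(depth)) for _ in depth]
    ensemble.append(fit_stump([depth[i] for i in idx], [loss[i] for i in idx]))

preds = sorted(m(1.0) for m in ensemble)     # predictive distribution at 1.0 m
print("median loss ratio:", preds[len(preds) // 2])
print("rough 90% band:", preds[10], "-", preds[-11])
```

The spread of `preds` is exactly the "inherent" uncertainty information the abstract highlights: instead of one loss figure per municipality, the ensemble yields quantiles.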
Energy Technology Data Exchange (ETDEWEB)
C.R. Bryan
2005-02-17
The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral
International Nuclear Information System (INIS)
C.R. Bryan
2005-01-01
The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC submodel uses a drift-scale
Scale Model Thruster Acoustic Measurement Results
Vargas, Magda; Kenny, R. Jeremy
2013-01-01
The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 Gas Hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single thruster and 4-thruster system are discussed and compared.
1/3-scale model testing program
International Nuclear Information System (INIS)
Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.
1989-01-01
This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley demonstration project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration where the axis of the cask was oriented at a 10-degree angle with the horizontal. Slap-down occurs for shallow-angle drops where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured decelerations, post-test deformation measurements, and the general structural response of the system.
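Scale-model drop tests of this kind rely on replica scaling laws to convert model measurements into full-scale predictions. A hedged sketch of those conversions, with illustrative numbers that are not taken from the test report (replica scaling assumes the same materials and the same impact velocity from the same 9-m drop height):

```python
# Replica-model scaling laws for converting 1/3-scale drop-test measurements
# to full scale. The example measurements at the bottom are invented.
SCALE = 1.0 / 3.0          # model length / prototype length

def to_prototype(model_value, kind):
    """Convert a model measurement to the corresponding prototype prediction."""
    factors = {
        "length":       1.0 / SCALE,     # displacements scale with size
        "time":         1.0 / SCALE,     # impact durations scale with size
        "acceleration": SCALE,           # decelerations scale inversely with size
        "force":        1.0 / SCALE**2,  # forces scale with area
        "energy":       1.0 / SCALE**3,  # absorbed energies scale with volume
    }
    return model_value * factors[kind]

# Hypothetical example: 150 g peak deceleration and 80 mm limiter crush
# measured on the one-third scale model.
print(to_prototype(150.0, "acceleration"))  # predicted full-scale g-level (~50)
print(to_prototype(80.0, "length"))         # predicted full-scale crush (~240 mm)
```

This is why the paper can report decelerations and crush depths from the model and still speak to the certification of the full-size cask.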
Genome scale metabolic modeling of cancer
DEFF Research Database (Denmark)
Nilsson, Avlant; Nielsen, Jens
2017-01-01
Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth, and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome scale metabolic model (GEM) is a mathematical formalization of metabolism which allows simulation and hypothesis testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome…
Large-scale multimedia modeling applications
International Nuclear Information System (INIS)
Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.
1995-08-01
Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications
Aerosol numerical modelling at local scale
International Nuclear Information System (INIS)
Albriet, Bastien
2007-01-01
At the local scale and in urban areas, an important part of particulate pollution is due to traffic, which contributes largely to the high number concentrations observed. Two aerosol sources are mainly linked to traffic: primary emission of soot particles and secondary nanoparticle formation by nucleation. The emissions and mechanisms leading to the formation of such a bimodal distribution are still poorly understood. In this thesis, we try to provide an answer to this problem by numerical modelling. The Modal Aerosol Model MAM is used, coupled with two 3D codes: a CFD code (Mercure Saturne) and a CTM (Polair3D). A sensitivity analysis is performed, at the edge of a road but also in the first meters of an exhaust plume, to identify the role of each process involved and the sensitivity of the different parameters used in the modelling. (author) [fr
Multi-scale Modelling of Segmentation
DEFF Research Database (Denmark)
Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri
2016-01-01
While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects … pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary …
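The kernel-density-estimation step can be illustrated with a toy example: boundary indications pooled across listeners are smoothed with Gaussian kernels at two bandwidths, giving boundary profiles at two time scales. The boundary times and bandwidths below are invented, not the study's data:

```python
# Multi-scale KDE sketch: smooth pooled boundary indications with Gaussian
# kernels of different bandwidths to obtain segmentation profiles at
# different time scales. All numbers are hypothetical.
import math

def kde(points, xs, bandwidth):
    """Gaussian kernel density estimate of `points`, evaluated at `xs`."""
    norm = 1.0 / (len(points) * bandwidth * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
                       for p in points) for x in xs]

# Hypothetical boundary times (seconds) indicated by several listeners:
# a strong cluster near t = 30 s, a weaker group near t = 61 s.
boundaries = [29.0, 29.5, 30.0, 30.5, 31.0, 60.0, 62.0]
grid = [t / 2.0 for t in range(0, 181)]          # 0..90 s in 0.5 s steps

fine = kde(boundaries, grid, bandwidth=1.0)      # short time scale
coarse = kde(boundaries, grid, bandwidth=5.0)    # long time scale

peak_fine = grid[fine.index(max(fine))]
print("strongest fine-scale boundary at t =", peak_fine, "s")
```

The fine-bandwidth profile resolves the tight cluster sharply, while the coarse profile merges nearby indications into broader regions, which is the sense in which the models are "multi-scale".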
Molecular scale modeling of polymer imprint nanolithography.
Chandross, Michael; Grest, Gary S
2012-01-10
We present the results of large-scale molecular dynamics simulations of two different nanolithographic processes, step-flash imprint lithography (SFIL), and hot embossing. We insert rigid stamps into an entangled bead-spring polymer melt above the glass transition temperature. After equilibration, the polymer is then hardened in one of two ways, depending on the specific process to be modeled. For SFIL, we cross-link the polymer chains by introducing bonds between neighboring beads. To model hot embossing, we instead cool the melt to below the glass transition temperature. We then study the ability of these methods to retain features by removing the stamps, both with a zero-stress removal process in which stamp atoms are instantaneously deleted from the system as well as a more physical process in which the stamp is pulled from the hardened polymer at fixed velocity. We find that it is necessary to coat the stamp with an antifriction coating to achieve clean removal of the stamp. We further find that a high density of cross-links is necessary for good feature retention in the SFIL process. The hot embossing process results in good feature retention at all length scales studied as long as coated, low surface energy stamps are used.
International Nuclear Information System (INIS)
Liu Lianshou; Zhang Yang; Wu Yuanfang
1996-01-01
The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently used have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which gives good scaling properties also for continuously varying scale. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)
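The quantity at issue, the scaled factorial moment F_q(M), can be computed directly for a toy event sample. The sketch below uses flat (non-cascading) events, for which F₂ should stay roughly constant as the number of bins M grows; the event generator is an assumption for illustration only, not the paper's cascading model:

```python
# Scaled factorial moments for a toy event sample. For flat, fixed-multiplicity
# events F_2 stays near (N-1)/N = 0.95 at every M, i.e. no anomalous scaling;
# a cascading model would instead show F_q rising as a power of M.
import random

def factorial_moment(events, M, q):
    """Horizontally averaged scaled factorial moment
    F_q(M) = <n(n-1)...(n-q+1)> / <n>**q over the M bins."""
    num, den = 0.0, 0.0
    for particles in events:
        counts = [0] * M
        for x in particles:                # x in [0, 1): phase-space position
            counts[int(x * M)] += 1
        for n in counts:
            term = 1.0
            for k in range(q):
                term *= n - k              # n(n-1)...(n-q+1)
            num += term
        den += len(particles)
    nbar = den / (len(events) * M)         # mean multiplicity per bin
    return (num / (len(events) * M)) / nbar ** q

random.seed(7)
# 500 toy events with 20 particles each, flat in phase space
events = [[random.random() for _ in range(20)] for _ in range(500)]
for M in (2, 5, 10, 20):
    print(M, round(factorial_moment(events, M, q=2), 3))
```

A power-law growth of F_q with M (equivalently, with shrinking cell size) is what the abstract calls anomalous scaling; the revised model's point is to obtain that behaviour for continuously varying M, not just for the discrete subdivisions of the cascade.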
A high resolution global scale groundwater model
de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc
2014-05-01
As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet, the current generation of global scale hydrological models (GHMs) do not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. Aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output from the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model so that it can equally be expanded to data poor environments. Based on our sensitivity analysis, in which we run the model with various hydrogeological parameter settings, we observed that most variance in groundwater
Integrated multi-scale modelling and simulation of nuclear fuels
International Nuclear Information System (INIS)
Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.
2015-01-01
This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required, due to the nature of the materials and by the phenomena involved under irradiation. We will then present the multiple facets of multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)
Scaling and constitutive relationships in downcomer modeling
International Nuclear Information System (INIS)
Daly, B.J.; Harlow, F.H.
1978-12-01
Constitutive relationships to describe mass and momentum exchange in multiphase flow in a pressurized water reactor downcomer are presented. Momentum exchange between the phases is described by the product of the flux of momentum available for exchange and the effective area for interaction. The exchange of mass through condensation is assumed to occur along a distinct condensation boundary separating steam at saturation temperature from water in which the temperature falls off roughly linearly with distance from the boundary. Because of the abundance of nucleation sites in a typical churning flow in a downcomer, we propose an equilibrium evaporation process that produces sufficient steam per unit time to keep the water perpetually cooled to the saturation temperature. The transport equations, constitutive models, and boundary conditions used in the K-TIF numerical method are nondimensionalized to obtain scaling relationships for two-phase flow in the downcomer. The results indicate that, subject to idealized thermodynamic and hydraulic constraints, exact mathematical scaling can be achieved. Experiments are proposed to isolate the effects of parameters that contribute to mass, momentum, and energy exchange between the phases
Cavitation erosion - scale effect and model investigations
Geiger, F.; Rutschmann, P.
2015-12-01
The experimental work presented here contributes to the clarification of the erosive effects of hydrodynamic cavitation. Comprehensive cavitation erosion test series were conducted for transient cloud cavitation in the shear layer of prismatic bodies. The erosion patterns and erosion rates were determined competitively with a mineral-based volume loss technique and with a metal-based pit count system. The results clarified the underlying scale effects and revealed a strong non-linear material dependency, which indicated significantly different damage processes for the two material types. Furthermore, the size and dynamics of the cavitation clouds were assessed by optical detection. The fluctuations of the cloud sizes showed a maximum value for those cavitation numbers related to maximum erosive aggressiveness. This finding suggests the suitability of a model approach which relates the erosion process to cavitation cloud dynamics. An enhanced experimental setup is projected to further clarify these issues.
Comparison Between Overtopping Discharge in Small and Large Scale Models
DEFF Research Database (Denmark)
Helgason, Einar; Burcharth, Hans F.
2006-01-01
The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects …
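Comparing overtopping across scales presupposes a Froude-law conversion of the model measurements. A minimal sketch of that conversion, with an assumed scale ratio rather than the one used in these tests:

```python
# Froude-law conversion between model and prototype: lengths scale with the
# ratio λ, times with sqrt(λ), and the mean overtopping discharge per metre
# of crest q [m³/s/m] with λ**1.5. The scale ratio is an assumed example.
LAMBDA = 20.0   # prototype length / model length (assumed)

def hs_prototype(hs_model):
    """Significant wave height, model -> prototype (scales with λ)."""
    return hs_model * LAMBDA

def q_prototype(q_model):
    """Overtopping discharge per metre of crest, model -> prototype (λ**1.5)."""
    return q_model * LAMBDA ** 1.5

print(hs_prototype(0.05))    # a 0.05 m model wave represents a 1.0 m wave
print(q_prototype(1.0e-5))   # model discharge of 1e-5 m³/s/m at full scale
```

Scale effects are precisely the deviations from this conversion, which is why the paper's finding of no clear deviation above a threshold, but suppressed overtopping at very low model discharges, is the interesting result.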
Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications
DEFF Research Database (Denmark)
Liu, Yubao; Warner, Tom; Liu, Yuewei
2011-01-01
This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center f...
Models of Small-Scale Patchiness
McGillicuddy, D. J.
2001-01-01
Patchiness is perhaps the most salient characteristic of plankton populations in the ocean. The scale of this heterogeneity spans many orders of magnitude in its spatial extent, ranging from planetary down to microscale. It has been argued that patchiness plays a fundamental role in the functioning of marine ecosystems, insofar as the mean conditions may not reflect the environment to which organisms are adapted. Understanding the nature of this patchiness is thus one of the major challenges of oceanographic ecology. The patchiness problem is fundamentally one of physical-biological-chemical interactions. This interconnection arises from three basic sources: (1) ocean currents continually redistribute dissolved and suspended constituents by advection; (2) space-time fluctuations in the flows themselves impact biological and chemical processes, and (3) organisms are capable of directed motion through the water. This tripartite linkage poses a difficult challenge to understanding oceanic ecosystems: differentiation between the three sources of variability requires accurate assessment of property distributions in space and time, in addition to detailed knowledge of organismal repertoires and the processes by which ambient conditions control the rates of biological and chemical reactions. Various methods of observing the ocean tend to lie parallel to the axes of the space/time domain in which these physical-biological-chemical interactions take place. Given that a purely observational approach to the patchiness problem is not tractable with finite resources, the coupling of models with observations offers an alternative which provides a context for synthesis of sparse data with articulations of fundamental principles assumed to govern functionality of the system. In a sense, models can be used to fill the gaps in the space/time domain, yielding a framework for exploring the controls on spatially and temporally intermittent processes. The following discussion highlights
Large scale injection test (LASGIT) modelling
International Nuclear Information System (INIS)
Arnedo, D.; Olivella, S.; Alonso, E.E.
2010-01-01
Document available in extended abstract form only. With the objective of understanding the gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. Modelling of the experiment will permit a better understanding of the responses, confirmation of hypotheses about mechanisms and processes, and lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed on the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates after the gas entry pressure is reached and may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug
Modeling of micro-scale thermoacoustics
Energy Technology Data Exchange (ETDEWEB)
Offner, Avshalom [The Nancy and Stephen Grand Technion Energy Program, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Ramon, Guy Z., E-mail: ramong@technion.ac.il [Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel)
2016-05-02
Thermoacoustic phenomena, that is, onset of self-sustained oscillations or time-averaged fluxes in a sound wave, may be harnessed as efficient and robust heat transfer devices. Specifically, miniaturization of such devices holds great promise for cooling of electronics. At the required small dimensions, it is expected that non-negligible slip effects exist at the solid surface of the “stack”-a porous matrix, which is used for maintaining the correct temporal phasing of the heat transfer between the solid and oscillating gas. Here, we develop theoretical models for thermoacoustic engines and heat pumps that account for slip, within the standing-wave approximation. Stability curves for engines with both no-slip and slip boundary conditions were calculated; the slip boundary condition curve exhibits a lower temperature difference compared with the no slip curve for resonance frequencies that characterize micro-scale devices. Maximum achievable temperature differences across the stack of a heat pump were also calculated. For this case, slip conditions are detrimental and such a heat pump would maintain a lower temperature difference compared to larger devices, where slip effects are negligible.
Modeling of micro-scale thermoacoustics
International Nuclear Information System (INIS)
Offner, Avshalom; Ramon, Guy Z.
2016-01-01
Thermoacoustic phenomena, that is, onset of self-sustained oscillations or time-averaged fluxes in a sound wave, may be harnessed as efficient and robust heat transfer devices. Specifically, miniaturization of such devices holds great promise for cooling of electronics. At the required small dimensions, it is expected that non-negligible slip effects exist at the solid surface of the “stack”-a porous matrix, which is used for maintaining the correct temporal phasing of the heat transfer between the solid and oscillating gas. Here, we develop theoretical models for thermoacoustic engines and heat pumps that account for slip, within the standing-wave approximation. Stability curves for engines with both no-slip and slip boundary conditions were calculated; the slip boundary condition curve exhibits a lower temperature difference compared with the no slip curve for resonance frequencies that characterize micro-scale devices. Maximum achievable temperature differences across the stack of a heat pump were also calculated. For this case, slip conditions are detrimental and such a heat pump would maintain a lower temperature difference compared to larger devices, where slip effects are negligible.
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.
SDG and qualitative trend based model multiple scale validation
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness: validation is carried out at a single scale and depends heavily on human experience. An SDG (Signed Directed Graph) and qualitative-trend-based multiple-scale validation method is proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by carrying out validation for a reactor model.
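The positive-inference step described above can be sketched as forward propagation of qualitative deviations through a signed directed graph. The toy reactor topology, variable names and sign conventions below are invented for illustration, not taken from the paper:

```python
# Illustrative forward (positive) inference on a signed directed graph (SDG).
# Topology and variable names are a hypothetical toy reactor, not the paper's model.

def propagate(edges, start, deviation):
    """Propagate a qualitative deviation (+1 high, -1 low) through an SDG.

    edges: dict mapping node -> list of (successor, sign) pairs.
    Returns a dict of predicted qualitative states (one testing scenario).
    """
    states = {start: deviation}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for succ, sign in edges.get(node, []):
            if succ not in states:            # visit each node once
                states[succ] = states[node] * sign
                frontier.append(succ)
    return states

# Toy reactor SDG: feed flow raises level and temperature;
# level raises outflow; cooling flow lowers temperature.
edges = {
    "feed_flow": [("level", +1), ("temperature", +1)],
    "level": [("out_flow", +1)],
    "cooling_flow": [("temperature", -1)],
}

scenario = propagate(edges, "feed_flow", +1)
print(scenario)  # high feed flow -> high level, high outflow, high temperature
```

Comparing such predicted scenarios against simulation outputs at each scale is the comparison step the abstract describes.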
Downscaling modelling system for multi-scale air quality forecasting
Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.
2010-09-01
Urban modelling for real meteorological situations generally considers only a small part of the urban area in a micro-meteorological model, yet urban heterogeneities outside the modelling domain affect micro-scale processes. It is therefore important to build a chain of models of different scales, with higher-resolution models nested into larger-scale, lower-resolution models. Usually the up-scaled city- or meso-scale models use parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. The first is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building-effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. the k-ε linear eddy-viscosity model, the k-ε non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding mass-conserving interpolation. For the boundaries a
Verification of Simulation Results Using Scale Model Flight Test Trajectories
National Research Council Canada - National Science Library
Obermark, Jeff
2004-01-01
.... A second compromise scaling law was investigated as a possible improvement. For ejector-driven events at minimum sideslip, the most important variables for scale model construction are the mass moment of inertia and ejector...
Modelling across bioreactor scales: methods, challenges and limitations
DEFF Research Database (Denmark)
Gernaey, Krist
Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial-scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial-scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what ...
Modeling Lactococcus lactis using a genome-scale flux model
Directory of Open Access Journals (Sweden)
Nielsen Jens
2005-06-01
Full Text Available Abstract Background Genome-scale flux models are useful tools to represent and analyze microbial metabolism. In this work we reconstructed the metabolic network of the lactic acid bacterium Lactococcus lactis and developed a genome-scale flux model able to simulate and analyze network capabilities and whole-cell function under aerobic and anaerobic continuous cultures. Flux balance analysis (FBA) and minimization of metabolic adjustment (MOMA) were used as modeling frameworks. Results The metabolic network was reconstructed using the annotated genome sequence from L. lactis ssp. lactis IL1403 together with physiological and biochemical information. The established network comprised a total of 621 reactions and 509 metabolites, representing the overall metabolism of L. lactis. Experimental data reported in the literature were used to fit the model to phenotypic observations. Regulatory constraints had to be included to simulate certain metabolic features, such as the shift from homolactic to heterolactic fermentation. A minimal medium for in silico growth was identified, indicating the requirement of four amino acids in addition to a sugar. Remarkably, de novo biosynthesis of four other amino acids was observed even when all amino acids were supplied, which is in good agreement with experimental observations. Additionally, enhanced metabolic engineering strategies for improved diacetyl-producing strains were designed. Conclusion The L. lactis metabolic network can now be used for a better understanding of lactococcal metabolic capabilities and potential, for the design of enhanced metabolic engineering strategies and for integration with other types of 'omic' data, to assist in finding new information on cellular organization and function.
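As a hedged illustration of the FBA framework mentioned above, the sketch below solves the steady-state linear program S·v = 0 with flux bounds for a three-reaction toy network; it is not the 621-reaction L. lactis reconstruction:

```python
# Minimal flux balance analysis (FBA) sketch: maximize a "biomass" flux
# subject to the steady-state mass balance S.v = 0 and flux bounds.
# The three-reaction toy network is illustrative only.
import numpy as np
from scipy.optimize import linprog

# Columns: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass)
S = np.array([[1, -1,  0],   # metabolite A balance
              [0,  1, -1]])  # metabolite B balance
bounds = [(0, 10), (0, 1000), (0, 1000)]  # substrate uptake capped at 10 units

# linprog minimizes, so negate the biomass flux v3 to maximize it
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print(res.x)  # optimal flux distribution; biomass flux hits the uptake limit
```

At the optimum every flux equals the uptake bound, the FBA analogue of growth being limited by substrate supply.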
Model of cosmology and particle physics at an intermediate scale
International Nuclear Information System (INIS)
Bastero-Gil, M.; Di Clemente, V.; King, S. F.
2005-01-01
We propose a model of cosmology and particle physics in which all relevant scales arise in a natural way from an intermediate string scale. We are led to assign the string scale to the intermediate scale M* ~ 10^13 GeV by four independent pieces of physics: electroweak symmetry breaking; the μ parameter; the axion scale; and the neutrino mass scale. The model involves hybrid inflation with the waterfall field N being responsible for generating the μ term, the right-handed neutrino mass scale, and the Peccei-Quinn symmetry breaking scale. The large-scale structure of the Universe is generated by the lightest right-handed sneutrino playing the role of a coupled curvaton. We show that the correct curvature perturbations may be successfully generated provided the lightest right-handed neutrino is weakly coupled in the seesaw mechanism, consistent with sequential dominance.
Magnetic Barkhausen noise: modeling and scaling
International Nuclear Information System (INIS)
Rodríguez-Pérez, Jorge L.; Pérez Benítez, José A.
2008-01-01
Magnetic Barkhausen noise arises from lattice defects and manifests itself in the abrupt changes that take place in the magnetization of the material under study. This implies a complex process, depending on the various factors that influence its occurrence and on internal changes in the system. Studies of the noise use three fundamental quantities: the duration of the signal, the area under the curve and the energy of the signal; from these, other frequently used quantities are defined: the root-mean-square voltage of the signal and the signal amplitude (maximum peak voltage). Investigating the phenomenon in this way entails a statistical analysis of the behaviour of the signal as the result of a set of changes occurring in the material, showing the complexity of the system and the importance of scaling laws. This paper investigates the relationship between magnetic Barkhausen noise, scaling laws and complexity using samples of ASTM A36 structural steel that were subjected to mechanical deformation in tension and compression. A statistical analysis was performed to determine the complexity from the test results, and the values of the fundamental quantities and of the scaling laws are reported for different deformations; the results show the connection between the root-mean-square voltage, the depth of the sample, the characteristics of the scaling laws and the complexity of a pseudo-random system.
The Goddard multi-scale modeling system with unified physics
Directory of Open Access Journals (Sweden)
W.-K. Tao
2009-08-01
Full Text Available Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.
This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.
Microphysics in Multi-scale Modeling System with Unified Physics
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.
Scaling considerations for modeling the in situ vitrification process
International Nuclear Information System (INIS)
Langerman, M.A.; MacKinnon, R.J.
1990-09-01
Scaling relationships for modeling the in situ vitrification waste remediation process are documented based upon similarity considerations derived from fundamental principles. Requirements for maintaining temperature and electric potential field similarity between the model and the prototype are determined as well as requirements for maintaining similarity in off-gas generation rates. A scaling rationale for designing reduced-scale experiments is presented and the results are assessed numerically. 9 refs., 6 figs
Using LISREL to Evaluate Measurement Models and Scale Reliability.
Fleishman, John; Benson, Jeri
1987-01-01
LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…
Coulomb-gas scaling, superfluid films, and the XY model
International Nuclear Information System (INIS)
Minnhagen, P.; Nylen, M.
1985-01-01
Coulomb-gas-scaling ideas are invoked as a link between the superfluid density of two-dimensional 4He films and the XY model; the Coulomb-gas-scaling function epsilon(X) is extracted from experiments and is compared with Monte Carlo simulations of the XY model. The agreement is found to be excellent.
Measurement and Modelling of Scaling Minerals
DEFF Research Database (Denmark)
Villafafila Garcia, Ada
2005-01-01
Solid-liquid equilibrium of sulphate scaling minerals (SrSO4, BaSO4, CaSO4 and CaSO4·2H2O) at temperatures up to 300ºC and pressures up to 1000 bar is described in chapter 4. Results for the binary systems (M2+, SO42-)-H2O; the ternary systems (Na+, M2+, SO42-)-H2O and (Na+, M2+, Cl-)-H2O; and the quaternary systems (Na+, M2+)(Cl-, SO42-)-H2O are presented. M2+ stands for Ba2+, Ca2+, or Sr2+. Chapter 5 is devoted to the correlation and prediction of vapour-liquid-solid equilibria for different carbonate systems causing scale problems (CaCO3, BaCO3, SrCO3, and MgCO3), covering the temperature range from 0 to 250ºC and pressures up ... -NaCl-Na2SO4-H2O are given. M2+ stands for Ca2+, Mg2+, Ba2+, and Sr2+. This chapter also includes an analysis of the CaCO3-MgCO3-CO2-H2O system. Chapter 6 deals with the system NaCl-H2O. Available data for that system at high temperatures and/or pressures are addressed, and sodium chloride solubility ...
Macro scale models for freight railroad terminals.
2016-03-02
The project has developed a yard capacity model for macro-level analysis. The study considers the detailed sequencing and scheduling in classification yards and their impacts on yard capacity, simulates typical freight railroad terminals, and statistic...
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
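The peaks-over-threshold step described above can be sketched as follows. The synthetic gamma-distributed "precipitation" series, the 95% quantile threshold and the return-period choice are illustrative assumptions, not the paper's Berlin or Bangalore analysis:

```python
# Peaks-over-threshold sketch: fit a Generalized Pareto Distribution (GPD)
# to exceedances above a quantile-based threshold and compute a return level.
# Synthetic data stand in for a precipitation series; parameters are illustrative.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
precip = rng.gamma(shape=0.8, scale=8.0, size=5000)  # synthetic daily totals

threshold = np.quantile(precip, 0.95)            # threshold from a quantile
excesses = precip[precip > threshold] - threshold

# Fit the GPD to the excesses with the location fixed at zero
shape, loc, scale = genpareto.fit(excesses, floc=0)

# m-observation return level: solve rate * (1 - F(x - u)) = 1/m
rate = excesses.size / precip.size               # exceedance rate
m = 365 * 100                                    # ~100-year level for daily data
rl = threshold + genpareto.ppf(1 - 1 / (m * rate), shape, loc=0, scale=scale)
print(round(float(rl), 1))
```

The paper's disaggregation step additionally rescales the threshold and GPD parameters across durations; the fit above is only the single-duration building block.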
Scale gauge symmetry and the standard model
International Nuclear Information System (INIS)
Sola, J.
1990-01-01
This paper speculates on a version of the standard model of the electroweak and strong interactions coupled to gravity and equipped with a spontaneously broken, anomalous, conformal gauge symmetry. The scalar sector is virtually absent in the minimal model, but in the general case it shows up in the form of a nonlinear harmonic map Lagrangian. A Euclidean approach to the cosmological constant problem is also addressed in this framework
Large-scale modelling of neuronal systems
International Nuclear Information System (INIS)
Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.
2009-01-01
The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can provide support for network generation.
Multi-scale modeling for sustainable chemical production.
Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J
2013-09-01
With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The use of scale models in impact testing
International Nuclear Information System (INIS)
Donelan, P.J.; Dowling, A.R.
1985-01-01
Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)
Scale model helps Duke untie construction snags
International Nuclear Information System (INIS)
Anon.
1977-01-01
A nuclear power plant model, only 60 percent complete, has helped Duke Power identify over 150 major design interferences, which, when resolved, will help cut capital expense and eliminate scheduling problems that normally crop up as revisions are made during actual plant construction. The model has been used by construction, steam production, and design personnel to recommend changes that should improve material handling, operations, and maintenance procedures as well as simplifying piping and cabling. The company has already saved many man-hours in material take-off, material management, and detailed drafting and expects to save even more with greater use of, and improvement in, its modeling program. Duke's modeling program was authorized and became operational in November 1974, with the first model to be the Catawba Nuclear Station. This plant is a two-unit station using Westinghouse nuclear steam supply systems in tandem with General Electric turbine-generators, horizontal feedwater heaters, and Foster Wheeler triple pressure condensers. Each unit is rated 1142 MWe
Planck-scale corrections to axion models
International Nuclear Information System (INIS)
Barr, S.M.; Seckel, D.
1992-01-01
It has been argued that quantum gravitational effects will violate all global symmetries. Peccei-Quinn symmetries must therefore be an ''accidental'' or automatic consequence of local gauge symmetry. Moreover, higher-dimensional operators suppressed by powers of M_Pl are expected to explicitly violate the Peccei-Quinn symmetry. Unless these operators are of dimension d≥10, axion models do not solve the strong CP problem in a natural fashion. A small gravitationally induced contribution to the axion mass has little if any effect on the density of relic axions. If d=10, 11, or 12 these operators can solve the axion domain-wall problem, and we describe a simple class of Kim-Shifman-Vainshtein-Zakharov axion models where this occurs. We also study the astrophysics and cosmology of ''heavy axions'' in models where 5≤d≤10
Scaling limit for the Dereziński-Gérard model
OHKUBO, Atsushi
2010-01-01
We consider a scaling limit for the Dereziński-Gérard model. We derive an effective potential by taking a scaling limit for the total Hamiltonian of the Dereziński-Gérard model. Our method to derive an effective potential is independent of whether or not the quantum field has a nonnegative mass. As an application of the theory developed in the present paper, we derive an effective potential of the Nelson model.
BLEVE overpressure: multi-scale comparison of blast wave modeling
International Nuclear Information System (INIS)
Laboureur, D.; Buchlin, J.M.; Rambaud, P.; Heymes, F.; Lapebie, E.
2014-01-01
BLEVE overpressure modeling has been already widely studied but only few validations including the scale effect have been made. After a short overview of the main models available in literature, a comparison is done with different scales of measurements, taken from previous studies or coming from experiments performed in the frame of this research project. A discussion on the best model to use in different cases is finally proposed. (authors)
Dynamically Scaled Model Experiment of a Mooring Cable
Directory of Open Access Journals (Sweden)
Lars Bergdahl
2016-01-01
Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
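Assuming standard Froude similitude (the usual basis for wave-basin mooring tests; the abstract's novelty is the additional elastic scaling on top of it), the basic geometric and dynamic scale multipliers can be sketched as:

```python
# Froude-scaling sketch for a mooring model test, assuming standard Froude
# similitude for gravity-dominated free-surface flow. The 1:37.6 scale and
# the 33 m model chain are taken from the abstract above.

def froude_scale(length_scale):
    """Return multipliers mapping model quantities to full scale."""
    lam = length_scale
    return {
        "length": lam,          # L_full = lam * L_model
        "time": lam ** 0.5,     # wave periods scale with sqrt(lam)
        "velocity": lam ** 0.5,
        "force": lam ** 3,      # assuming equal fluid density
    }

f = froude_scale(37.6)
full_length = 33.0 * f["length"]   # 33 m model chain at scale 1:37.6
print(round(full_length))  # ~1241 m, matching the ~1240 m quoted above
```

Froude scaling alone does not fix the elastic-wave celerity in the chain, which is why the experiment's correct elastic scale is described as unique.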
Analysis of chromosome aberration data by hybrid-scale models
International Nuclear Information System (INIS)
Indrawati, Iwiq; Kumazawa, Shigeru
2000-02-01
This paper presents a new methodology for analyzing data on chromosome aberrations, which is useful for understanding the characteristics of dose-response relationships and for constructing calibration curves for biological dosimetry. The hybrid scale of linear and logarithmic scales yields a particular plotting paper, in which the normal section paper, two types of semi-log paper and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, conveniently called hybrid scale models. One can systematically select the best-fit model among the nine by examining the conditions for a straight line of data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of an increased number of model parameters. We showed that the hybrid-hybrid model (both dose and response variables using the hybrid scale) provides the best-fit straight lines to be used as reliable and readable calibration curves of chromosome aberrations. (author)
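A simplified sketch of the model-selection idea: test which axis transformation makes the data most nearly linear. Only the four pure linear/log combinations are tried here, whereas the paper's hybrid scale interpolates continuously among nine models:

```python
# Simplified scale-model selection: among lin/log axis combinations, pick the
# one under which the data plot most nearly as a straight line. Illustrative
# only; the paper's hybrid scale is a continuous blend of linear and log.
import numpy as np

def best_scale_model(dose, response):
    transforms = {"linear": lambda v: v, "log": np.log}
    best = None
    for xname, fx in transforms.items():
        for yname, fy in transforms.items():
            x, y = fx(dose), fy(response)
            r = np.corrcoef(x, y)[0, 1]        # straightness of the plot
            if best is None or abs(r) > best[0]:
                best = (abs(r), f"{xname}-{yname}")
    return best[1]

dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
resp = 0.3 * dose ** 1.8                       # a power law is a log-log line
print(best_scale_model(dose, resp))
```

A power-law dose-response is exactly linear on log-log axes, so that combination wins; a linear-quadratic response would favour a different model.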
Flavor gauge models below the Fermi scale
Babu, K. S.; Friedland, A.; Machado, P. A. N.; Mocioiu, I.
2017-12-01
The mass and weak interaction eigenstates for the quarks of the third generation are very well aligned, an empirical fact for which the Standard Model offers no explanation. We explore the possibility that this alignment is due to an additional gauge symmetry in the third generation. Specifically, we construct and analyze an explicit, renormalizable model with a gauge boson, X, corresponding to the B - L symmetry of the third family. Having a relatively light (in the MeV to multi-GeV range), flavor-nonuniversal gauge boson results in a variety of constraints from different sources. By systematically analyzing 20 different constraints, we identify the most sensitive probes: kaon, B+, D+ and Upsilon decays, D0-D̄0 mixing, atomic parity violation, and neutrino scattering and oscillations. For the new gauge coupling g_X in the range (10^-2-10^-4) the model is shown to be consistent with the data. Possible ways of testing the model in b physics, top and Z decays, direct collider production and neutrino oscillation experiments, where one can observe nonstandard matter effects, are outlined. The choice of leptons to carry the new force is ambiguous, resulting in additional phenomenological implications, such as non-universality in semileptonic bottom decays. The proposed framework provides interesting connections between neutrino oscillations, flavor and collider physics.
[Unfolding item response model using best-worst scaling].
Ikehara, Kazuya
2015-02-01
In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated in terms of the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimulus among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square errors of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimuli estimates generated with the proposed models and those of Usami (2011).
Sizing and scaling requirements of a large-scale physical model for code validation
International Nuclear Information System (INIS)
Khaleel, R.; Legore, T.
1990-01-01
Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated
Pelamis wave energy converter. Verification of full-scale control using a 7th scale model
Energy Technology Data Exchange (ETDEWEB)
NONE
2005-07-01
The Pelamis Wave Energy Converter is a new concept for converting wave energy for several applications including generation of electric power. The machine is flexibly moored and swings to meet the water waves head-on. The system is semi-submerged and consists of cylindrical sections linked by hinges. The mechanical operation is described in outline. A one-seventh scale model was built and tested and the outcome was sufficiently successful to warrant the building of a full-scale prototype. In addition, a one-twentieth scale model was built and has contributed much to the research programme. The work is supported financially by the DTI.
Atomic-scale modeling of cellulose nanocrystals
Wu, Xiawa
Cellulose nanocrystals (CNCs), the most abundant nanomaterials in nature, are recognized as one of the most promising candidates to meet the growing demand for green, bio-degradable and sustainable nanomaterials for future applications. CNCs draw significant interest due to their high axial elasticity and low density-to-elasticity ratio, both of which have been extensively researched over the years. In spite of the great potential of CNCs as functional nanoparticles for nanocomposite materials, a fundamental understanding of CNC properties and their role in composite property enhancement is not available. In this work, CNCs are studied using the molecular dynamics simulation method to predict their material behaviors at the nanoscale. (a) Mechanical properties include tensile deformation in the elastic and plastic regions using molecular mechanics, molecular dynamics and nanoindentation methods. This allows comparisons between the methods and closer connectivity to experimental measurement techniques. The elastic moduli in the axial and transverse directions are obtained, and the results are found to be in good agreement with previous research. The ultimate properties in plastic deformation are reported for the first time, and the failure mechanism is analyzed in detail. (b) The thermal expansion of CNC crystals and films is studied. It is proposed that CNC film thermal expansion is due primarily to single-crystal expansion and CNC-CNC interfacial motion. The relative contributions of inter- and intra-crystal responses to heating are explored. (c) Friction at cellulose-CNC and diamond-CNC interfaces is studied. The effects of sliding velocity, normal load, and relative angle between sliding surfaces are predicted. The cellulose-CNC model is analyzed in terms of the hydrogen-bonding effect, and the diamond-CNC model complements some of the discussion of the previous model. In summary, both CNC material properties and molecular models are studied in this research, contributing to
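A small post-processing sketch of the kind used in the thermal-expansion part of such a study: estimating a linear thermal expansion coefficient from (temperature, length) samples by a least-squares fit. The numbers below are synthetic, not CNC simulation output:

```python
# Estimate a linear thermal expansion coefficient alpha = (1/L0) * dL/dT
# from (temperature, lattice length) samples, as one would post-process
# averaged MD cell dimensions. Data here are synthetic and illustrative.
import numpy as np

T = np.array([100.0, 200.0, 300.0, 400.0])       # temperature, K
L = np.array([10.000, 10.005, 10.010, 10.015])   # cell length, nm

slope, intercept = np.polyfit(T, L, 1)   # least-squares dL/dT
alpha = slope / L[0]                     # per kelvin, referenced to L0
print(f"{alpha:.2e} 1/K")
```

In a real MD workflow each length would itself be a time average at fixed temperature, and the fit range is restricted to where the expansion is linear.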
Sensitivities in global scale modeling of isoprene
Directory of Open Access Journals (Sweden)
R. von Kuhlmann
2004-01-01
A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on upper-tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased, compared to the background methane chemistry, by 26±9 Tg(O3): from 273 to an average over the sensitivity runs of 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty, and the much larger local deviations found in the test runs, suggests that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and points towards specific processes in need of focused future work.
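The ±35% figure follows directly from the burden numbers quoted in the abstract; a quick arithmetic check (values taken directly from the text):

```python
# Consistency check on the ozone-burden figures quoted in the abstract:
# background 273 Tg(O3), sensitivity-run average 299 Tg(O3), spread 9 Tg(O3).
background = 273.0   # Tg(O3), methane-only background chemistry
mean_runs = 299.0    # Tg(O3), average over the sensitivity runs
spread = 9.0         # Tg(O3), spread across the runs

isoprene_effect = mean_runs - background            # total effect of isoprene
relative_spread = spread / isoprene_effect * 100.0  # spread as % of that effect

print(isoprene_effect)         # 26.0 Tg(O3)
print(round(relative_spread))  # 35, i.e. the +/- 35% quoted above
```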
Scaling of musculoskeletal models from static and dynamic trials
DEFF Research Database (Denmark)
Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark
2015-01-01
Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale ‘generic’ musculoskeletal models to match segment lengths and joint … We applied three scaling methods to an inverse dynamics-based musculoskeletal model and compared predicted knee joint contact forces to those measured with an instrumented prosthesis during gait. Additionally, a Monte Carlo study was used to investigate the sensitivity of the knee joint contact force to random …
MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS
Energy Technology Data Exchange (ETDEWEB)
Y.S. Wu
2005-08-24
This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on
MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS
International Nuclear Information System (INIS)
Y.S. Wu
2005-01-01
This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas
Anomalous scaling in an age-dependent branching model
Keller-Schmidt, Stephanie; Tugrul, Murat; Eguiluz, Victor M.; Hernandez-Garcia, Emilio; Klemm, Konstantin
2010-01-01
We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age $\\tau$ as $\\tau^{-\\alpha}$. Depending on the exponent $\\alpha$, the scaling of tree depth with tree size $n$ displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition ($\\alpha=1$) tree depth grows as $(\\log n)^2$. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
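One plausible reading of the growth rule above (the paper's exact procedure may differ) is: at each step a leaf is chosen with weight decaying as its age to the power -alpha and split into two children. A minimal simulation along those lines:

```python
import random

def grow_tree(n_leaves, alpha, rng):
    # Grow a binary tree: at each step one leaf is chosen with weight
    # age**(-alpha) (age = steps since that leaf appeared) and is split
    # into two children one level deeper. Returns the mean leaf depth.
    leaves = [(0, 0)]  # (depth, birth_time)
    t = 0
    while len(leaves) < n_leaves:
        t += 1
        weights = [(t - birth + 1) ** (-alpha) for _, birth in leaves]
        i = rng.choices(range(len(leaves)), weights=weights)[0]
        depth, _ = leaves.pop(i)
        leaves += [(depth + 1, t), (depth + 1, t)]
    return sum(d for d, _ in leaves) / len(leaves)

rng = random.Random(42)
for n in (64, 256, 1024):
    print(n, grow_tree(n, 1.0, rng))  # mean depth grows with n at alpha = 1
```

At the transition alpha = 1 the paper predicts depth growing as (log n)^2, between the logarithmic scaling of random trees (small alpha) and algebraic growth (large alpha).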
Logarithmic corrections to scaling in the XY2-model
International Nuclear Information System (INIS)
Kenna, R.; Irving, A.C.
1995-01-01
We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form for the density of these zeroes. Assuming that finite-size scaling holds, we show that there have to exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae and we identify them numerically. The method presented here can be used to check the compatibility of scaling behaviour of odd and even thermodynamic functions in other models too. ((orig.))
a Model Study of Small-Scale World Map Generalization
Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.
2018-04-01
With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand worldwide for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale production of large-format global maps, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved map-generalization model (with map and data separated), which separates geographic data from mapping data and mainly comprises a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21,845 basic algorithms and over 2500 relevant functional modules. In order to evaluate the accuracy and visual effect of our model for topographic maps and thematic maps, we take small-scale world map generalization as an example. After the generalization process, combining and simplifying the scattered islands makes the map more explicit at 1:2.1 billion scale, and the map features more complete and accurate. Not only does the model significantly enhance map generalization at various scales, it also achieves integration among map-making at those scales, suggesting that it provides a reference for cartographic generalization across scales.
Reference Priors for the General Location-Scale Model
Fernández, C.; Steel, M.F.J.
1997-01-01
The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately
Penalized Estimation in Large-Scale Generalized Linear Array Models
DEFF Research Database (Denmark)
Lund, Adam; Vincent, Martin; Hansen, Niels Richard
2017-01-01
Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
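The "design matrix free" idea for GLAMs rests on the Kronecker identity (A ⊗ B) vec(C) = vec(B C Aᵀ): the tensor-product design matrix is never formed explicitly. A tiny pure-Python check with hypothetical 2x2 factors:

```python
# GLAM-style matrix-free multiplication: (A kron B) vec(C) == vec(B C A^T),
# so the (potentially huge) Kronecker design matrix never needs to be built.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(row) for row in zip(*P)]

def kron(P, Q):  # explicit Kronecker product, for comparison only
    return [[P[i][j] * Q[k][l]
             for j in range(len(P[0])) for l in range(len(Q[0]))]
            for i in range(len(P)) for k in range(len(Q))]

def vec(C):  # column-major vectorisation
    return [C[i][j] for j in range(len(C[0])) for i in range(len(C))]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.5, 1.5], [2.5, 0.0]]
C = [[1.0, -1.0], [2.0, 0.5]]

direct = [sum(row[k] * vec(C)[k] for k in range(4)) for row in kron(A, B)]
glam = vec(matmul(matmul(B, C), transpose(A)))

print(direct)
print(glam)  # identical, without ever forming the 4x4 Kronecker matrix
```

The same identity applied factor-by-factor is what lets GLAM algorithms fit array models whose full design matrix would not fit in memory.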
Atomic scale simulations for improved CRUD and fuel performance modeling
Energy Technology Data Exchange (ETDEWEB)
Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-06
A more mechanistic description in fuel performance codes can be achieved by deriving models and parameters from atomistic-scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling the deposition of corrosion products on fuel rods (CRUD). This report collects results from publications in 2016 carried out using the CASL allocation at LANL.
Genome-scale modeling for metabolic engineering.
Simeonidis, Evangelos; Price, Nathan D
2015-03-01
We focus on the application of constraint-based methodologies and, more specifically, flux balance analysis in the field of metabolic engineering, and enumerate recent developments and successes of the field. We also review computational frameworks that have been developed with the express purpose of automatically selecting optimal gene deletions for achieving improved production of a chemical of interest. The application of flux balance analysis methods in rational metabolic engineering requires a metabolic network reconstruction and a corresponding in silico metabolic model for the microorganism in question. For this reason, we additionally present a brief overview of automated reconstruction techniques. Finally, we emphasize the importance of integrating metabolic networks with regulatory information-an area which we expect will become increasingly important for metabolic engineering-and present recent developments in the field of metabolic and regulatory integration.
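Flux balance analysis poses a linear program: maximize a biomass objective over fluxes v subject to steady-state mass balance Sv = 0 and flux bounds. For the toy linear pathway sketched below (hypothetical reaction names and bounds), steady state forces all fluxes to be equal, so the optimum reduces to the tightest bound; that special case lets growth prediction and a gene-deletion scan be illustrated without an LP solver:

```python
# Toy flux balance analysis on a linear pathway:
#   glucose_uptake -> r1 -> r2 -> biomass
# At steady state (S v = 0) all fluxes in the chain are equal, so the
# biomass optimum is simply the smallest upper bound along the chain.
pathway = {  # reaction -> upper bound (hypothetical values, mmol/gDW/h)
    "glucose_uptake": 10.0,
    "r1": 8.0,
    "r2": 15.0,
    "biomass": 20.0,
}

def max_growth(bounds):
    # LP optimum for a chain topology: the bottleneck reaction wins.
    return min(bounds.values())

def deletion_scan(bounds):
    # A knockout sets that reaction's bound to zero; in a linear pathway
    # every reaction is essential, so every knockout abolishes growth.
    return {r: max_growth({**bounds, r: 0.0}) for r in bounds}

print(max_growth(pathway))     # 8.0, growth limited by r1
print(deletion_scan(pathway))  # every knockout gives 0.0
```

Real genome-scale models have branched, redundant networks, so the LP must actually be solved (e.g. with a simplex solver), but the objective, balance constraints, and knockout logic are exactly as above.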
Genome-scale biological models for industrial microbial systems.
Xu, Nan; Ye, Chao; Liu, Liming
2018-04-01
The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, predicting the formation of an interaction among genetic materials, enzymes, and metabolites, constitute a systematic and comprehensive platform to analyze and optimize the microbial growth and production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their pertinent application in improving industrial microbial fermentation of biological products.
Particles and scaling for lattice fields and Ising models
International Nuclear Information System (INIS)
Glimm, J.; Jaffe, A.
1976-01-01
The conjectured inequality Γ^(6) ≤ 0 is discussed for φ^4 fields and the scaling limit for d-dimensional Ising models. Assuming Γ^(6) ≤ 0, these φ^4 fields are free fields unless the field-strength renormalization Z^-1 diverges. (orig./BJ) [de]
Multi-scale modeling strategies in materials science—The ...
Indian Academy of Sciences (India)
Unknown
Multi-scale models; quasicontinuum method; finite elements.
Nonpointlike-parton model with asymptotic scaling and with scaling violation at moderate Q² values
International Nuclear Information System (INIS)
Chen, C.K.
1981-01-01
A nonpointlike-parton model is formulated on the basis of the assumption of energy-independent total cross sections of partons and the current-algebra sum rules. No specific strong-interaction Lagrangian density is introduced in this approach. This model predicts asymptotic scaling for the inelastic structure functions of nucleons on the one hand and scaling violation at moderate Q² values on the other hand. The predicted scaling-violation patterns at moderate Q² values are consistent with the observed scaling-violation patterns. A numerical fit of F₂ functions is performed in order to demonstrate that the predicted scaling-violation patterns of this model at moderate Q² values fit the data, and to see how the predicted asymptotic scaling behavior sets in at various x values. Explicit analytic forms of F₂ functions are obtained from this numerical fit, and are compared in detail with the analytic forms of F₂ functions obtained from the numerical fit of the quantum-chromodynamics (QCD) parton model. This comparison shows that this nonpointlike-parton model fits the data better than the QCD parton model, especially at large and small x values. Nachtmann moments are computed from the F₂ functions of this model and are shown to agree well with the data. It is also shown that the two-dimensional plot of the logarithm of a nonsinglet moment versus the logarithm of another such moment is not a good way to distinguish this nonpointlike-parton model from the QCD parton model
Multi-scale modeling for sustainable chemical production
DEFF Research Database (Denmark)
Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus
2013-01-01
With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process…
Calibration of the Site-Scale Saturated Zone Flow Model
International Nuclear Information System (INIS)
Zyvoloski, G. A.
2001-01-01
The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M and O 1999a)
3-3-1 models at electroweak scale
International Nuclear Information System (INIS)
Dias, Alex G.; Montero, J.C.; Pleitez, V.
2006-01-01
We show that in 3-3-1 models there exists a natural relation among the SU(3)_L coupling constant g, the electroweak mixing angle θ_W, the mass of the W, and one of the vacuum expectation values, which implies that those models can be realized at low energy scales and, in particular, even at the electroweak scale. Thus, if these symmetries are realized in Nature, new physics may really be just around the corner.
International Nuclear Information System (INIS)
Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour
2007-01-01
Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will
Energy Technology Data Exchange (ETDEWEB)
Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour
2007-04-19
Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will
[Modeling continuous scaling of NDVI based on fractal theory].
Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng
2013-07-01
Scale effect is one of the most important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals over an entire series of scales; meanwhile, they face serious parameter-correction issues because of the variation of imaging parameters across different sensors, such as geometric correction, spectral correction, etc. Utilizing a single-sensor image, a fractal methodology was employed to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. All of this proved that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.
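At the root of the scale effect for NDVI is that the index is a nonlinear (ratio) function of the band values, so averaging NDVI over fine pixels is not the same as computing NDVI from aggregated bands. A synthetic illustration (random band values, not real ETM+ data):

```python
import random

random.seed(42)
n = 64
red = [random.uniform(0.05, 0.25) for _ in range(n)]  # synthetic red band
nir = [random.uniform(0.30, 0.60) for _ in range(n)]  # synthetic NIR band

def ndvi(r, b):
    return (b - r) / (b + r)

# Up-scaling the retrieval: average the fine-scale NDVI values ...
mean_of_ndvi = sum(ndvi(r, b) for r, b in zip(red, nir)) / n
# ... versus retrieving at the coarse scale: NDVI of the aggregated bands.
ndvi_of_means = ndvi(sum(red) / n, sum(nir) / n)

# The two generally differ; that gap is the scale effect the paper models.
print(mean_of_ndvi, ndvi_of_means)
```

A fractal (power-law) model of how such a discrepancy evolves across a whole series of aggregation scales is what the continuous-scaling scheme above fits.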
SCALING ANALYSIS OF REPOSITORY HEAT LOAD FOR REDUCED DIMENSIONALITY MODELS
International Nuclear Information System (INIS)
MICHAEL T. ITAMUA AND CLIFFORD K. HO
1998-01-01
The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment
Scaling, soil moisture and evapotranspiration in runoff models
Wood, Eric F.
1993-01-01
The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions under which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a second-order linearization scheme. The performance of the algorithm is evaluated.
Properties of Brownian Image Models in Scale-Space
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup
2003-01-01
In this paper it is argued that the Brownian image model is the least committed, scale-invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f² power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix…
Nucleon electric dipole moments in high-scale supersymmetric models
International Nuclear Information System (INIS)
Hisano, Junji; Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi
2015-01-01
The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron one in order to discriminate among high-scale SUSY models.
Nucleon electric dipole moments in high-scale supersymmetric models
Energy Technology Data Exchange (ETDEWEB)
Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),Nagoya University,Nagoya 464-8602 (Japan); Department of Physics, Nagoya University,Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University,Nagoya 464-8602 (Japan)
2015-11-12
The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron one in order to discriminate among high-scale SUSY models.
New phenomena in the standard no-scale supergravity model
Kelley, S; Nanopoulos, Dimitri V; Zichichi, Antonino; Kelley, S; Lopez, J L; Nanopoulos, D V; Zichichi, A
1994-01-01
We revisit the no-scale mechanism in the context of the simplest no-scale supergravity extension of the Standard Model. This model has the usual five-dimensional parameter space plus an additional parameter \xi_{3/2}\equiv m_{3/2}/m_{1/2}. We show how predictions of the model may be extracted over the whole parameter space. A necessary condition for the potential to be stable is {\rm Str}\,{\cal M}^4>0, which is satisfied if m_{3/2}\lesssim 2\,m_{\tilde q}. Order of magnitude calculations reveal a no-lose theorem guaranteeing interesting and potentially observable new phenomena in the neutral scalar sector of the theory which would constitute a ``smoking gun'' of the no-scale mechanism. This new phenomenology is model-independent and divides into three scenarios, depending on the ratio of the weak scale to the vev at the minimum of the no-scale direction. We also calculate the residual vacuum energy at the unification scale (C_0\, m^4_{3/2}), and find that in typical models one must require C_0>10. Such constrai...
Toward micro-scale spatial modeling of gentrification
O'Sullivan, David
A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.
Lovejoy, S.; del Rio Amador, L.; Hébert, R.
2015-09-01
On scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; thus, in GCM (general circulation model) macroweather forecasts, the weather is a high-frequency noise. However, neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two-parameter model can perform as well as GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the large stochastic memories that we quantify. Since macroweather temporal (but not spatial) intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the ScaLIng Macroweather Model (SLIMM). SLIMM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the linear inverse modelling - LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes that there is no low-frequency memory, SLIMM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful stochastic forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had disappointing results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent
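The fGn-based forecasting idea described above can be illustrated with a short numerical sketch (illustrative only, not the authors' SLIMM code; the Hurst-style exponent `H`, the Cholesky-based simulation, and the helper names are assumptions of this sketch). The optimal one-step linear predictor, and the long memory it exploits, follow directly from the fGn autocovariance sequence.

```python
import numpy as np

def fgn_cov(n, H):
    """Toeplitz covariance matrix of fractional Gaussian noise (unit variance).

    The autocovariance at lag k is 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H);
    for H > 1/2 it decays slowly, which is the 'huge memory' exploited above.
    """
    k = np.arange(n)
    g = 0.5 * (np.abs(k + 1) ** (2 * H)
               - 2 * np.abs(k) ** (2 * H)
               + np.abs(k - 1) ** (2 * H))
    return g[np.abs(np.subtract.outer(k, k))]

def simulate_fgn(n, H, rng):
    """Draw one fGn sample path via a Cholesky factor of the covariance."""
    L = np.linalg.cholesky(fgn_cov(n, H) + 1e-12 * np.eye(n))
    return L @ rng.standard_normal(n)

def predict_next(x, H):
    """Minimum-variance linear one-step predictor for fGn from its past."""
    n = len(x)
    C = fgn_cov(n + 1, H)
    w = np.linalg.solve(C[:n, :n], C[:n, n])  # regression weights on the past
    return w @ x

rng = np.random.default_rng(0)
x = simulate_fgn(50, 0.9, rng)
forecast = predict_next(x, 0.9)
```

For H > 1/2 the lag-1 autocovariance 0.5*(2^2H - 2) is positive, so past values carry usable information at all lags; for H < 1/2 it is negative.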
Lovejoy, S.; del Rio Amador, L.; Hébert, R.
2015-03-01
At scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts the weather is a high-frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two-parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low-frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts, and these show some skill even at decadal scales. We also compare
Description of Muzzle Blast by Modified Ideal Scaling Models
Directory of Open Access Journals (Sweden)
Kevin S. Fansler
1998-01-01
Gun blast data from a large variety of weapons are scaled and presented for both the instantaneous energy release and the constant energy deposition rate models. For both ideal explosion models, similar amounts of data scatter occur for the peak overpressure, but the instantaneous energy release model correlates the impulse data significantly better, particularly for the region in front of the gun. Two parameters that characterize gun blast are used in conjunction with the ideal scaling models to improve the data correlation. The gun-emptying parameter works particularly well with the instantaneous energy release model: the impulse, especially in the forward direction of the gun, is correlated significantly better when this parameter is included. The use of the Mach disc location parameter improves the correlation only marginally. A predictive model is obtained from the modified instantaneous energy release correlation.
Modelling of evapotranspiration at field and landscape scales. Abstract
DEFF Research Database (Denmark)
Overgaard, Jesper; Butts, M.B.; Rosbjerg, Dan
2002-01-01
observations from a nearby weather station. Detailed land-use and soil maps were used to set up the model. Leaf area index was derived from NDVI (Normalized Difference Vegetation Index) images. To validate the model at field scale the simulated evapotranspiration rates were compared to eddy...
Role of scaling in the statistical modelling of finance
Indian Academy of Sciences (India)
Modelling the evolution of a financial index as a stochastic process is a problem awaiting a full, satisfactory solution since it was first formulated by Bachelier in 1900. Here it is shown that the scaling with time of the return probability density function sampled from the historical series suggests a successful model.
Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries
DEFF Research Database (Denmark)
Prunescu, Remus Mihail
with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors capturing tank profiles, and delays due to plug flows. This work publishes for the first time demonstration scale real data for validation showing that the model library is suitable...
Appropriate spatial scales to achieve model output uncertainty goals
Booij, Martijn J.; Melching, Charles S.; Chen, Xiaohong; Chen, Yongqin; Xia, Jun; Zhang, Hailun
2008-01-01
Appropriate spatial scales of hydrological variables were determined using an existing methodology based on a balance in uncertainties from model inputs and parameters extended with a criterion based on a maximum model output uncertainty. The original methodology uses different relationships between
Development of the Artistic Supervision Model Scale (ASMS)
Kapusuzoglu, Saduman; Dilekci, Umit
2017-01-01
The purpose of the study is to develop the Artistic Supervision Model Scale in accordance with the perceptions of inspectors and of elementary and secondary school teachers regarding artistic supervision. The lack of a measuring instrument related to the artistic supervision model in the literature reveals the necessity of such a study. 290…
Accounting for small scale heterogeneity in ecohydrologic watershed models
Burke, W.; Tague, C.
2017-12-01
Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach
Transdisciplinary application of the cross-scale resilience model
Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.
2014-01-01
The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/ anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.
Scale-free, axisymmetric galaxy models with little angular momentum
International Nuclear Information System (INIS)
Richstone, D.O.
1980-01-01
Two scale-free models of elliptical galaxies are constructed using a self-consistent field approach developed by Schwarzschild. Both models have concentric, oblate spheroidal, equipotential surfaces, with a logarithmic potential dependence on central distance. The axis ratio of the equipotential surfaces is 4:3, and the axis ratio of the density level surfaces is 2.5:1 (corresponding to an E6 galaxy). Each model satisfies the Poisson and steady-state Boltzmann equations for time scales of order 100 galactic years
Drift-Scale Coupled Processes (DST and THC Seepage) Models
International Nuclear Information System (INIS)
Sonnenthal, E.
2001-01-01
The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [1534471]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: Performance Assessment (PA); Near-Field Environment (NFE) PMR; Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); and UZ Flow and Transport Process Model Report (PMR). The work scope for this activity is presented in the TWPs cited above, and summarized as follows: Continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are
Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales
Directory of Open Access Journals (Sweden)
Yonghe Zhang
2010-11-01
Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with the quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strength, charge density, and ionic potential. Based on the atomic electron configuration and various quantum-mechanically built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has far more versatile applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.
Drift-Scale Coupled Processes (DST and THC Seepage) Models
International Nuclear Information System (INIS)
Dixon, P.
2004-01-01
The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC
Model Scaling of Hydrokinetic Ocean Renewable Energy Systems
von Ellenrieder, Karl; Valentine, William
2013-11-01
Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
Scale modeling of reinforced concrete structures subjected to seismic loading
International Nuclear Information System (INIS)
Dove, R.C.
1983-01-01
Reinforced concrete Category I structures are so large that the possibility of seismically testing the prototype structures under controlled conditions is essentially nonexistent. However, experimental data, from which important structural properties can be determined and against which existing and new methods of seismic analysis can be benchmarked, are badly needed. As a result, seismic experiments on scaled models are of considerable interest. In this paper, the scaling laws are developed in some detail so that assumptions and choices based on judgement can be clearly recognized and their effects discussed. The scaling laws developed are then used to design a reinforced concrete model of a Category I structure. Finally, how scaling is affected by various types of damping (viscous, structural, and Coulomb) is discussed
Empirical spatial econometric modelling of small scale neighbourhood
Gerkman, Linda
2012-07-01
The aim of the paper is to model small-scale neighbourhood effects in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated, and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model, estimate it, and interpret the estimates of the summary measures of impacts. The analysis shows that the model structure makes it possible to model and detect small-scale neighbourhood effects when we know that they exist but lack proper variables to measure them.
Quantum critical scaling of fidelity in BCS-like model
International Nuclear Information System (INIS)
Adamski, Mariusz; Jedrzejewski, Janusz; Krokhmalskii, Taras
2013-01-01
We study scaling of the ground-state fidelity in neighborhoods of quantum critical points in a model of interacting spinful fermions—a BCS-like model. Due to the exact diagonalizability of the model, in one and higher dimensions, scaling of the ground-state fidelity can be analyzed numerically with great accuracy, not only for small systems but also for macroscopic ones, together with the crossover region between them. Additionally, in the one-dimensional case we have been able to derive a number of analytical formulas for fidelity and show that they accurately fit our numerical results; these results are reported in the paper. Besides regular critical points and their neighborhoods, where well-known scaling laws are obeyed, there is the multicritical point and critical points in its proximity where anomalous scaling behavior is found. We also consider scaling of fidelity in neighborhoods of critical points where fidelity oscillates strongly as the system size or the chemical potential is varied. Our results for a one-dimensional version of a BCS-like model are compared with those obtained recently by Rams and Damski in similar studies of a quantum spin chain—an anisotropic XY model in a transverse magnetic field. (paper)
Energy Technology Data Exchange (ETDEWEB)
Kubo, Jisuke [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Yamada, Masatoshi [Department of Physics, Kyoto University,Kyoto 606-8502 (Japan); Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, 69120 Heidelberg (Germany)
2016-12-01
We assume that the origin of the electroweak (EW) scale is a gauge-invariant scalar-bilinear condensation in a strongly interacting non-abelian gauge sector, which is connected to the standard model via a Higgs portal coupling. The dynamical scale genesis appears as a phase transition at finite temperature, and it can produce a gravitational wave (GW) background in the early Universe. We find that the critical temperature of the scale phase transition lies above that of the EW phase transition and below a few O(100) GeV, and that the transition is strongly first-order. We calculate the spectrum of the GW background and find that the scale phase transition is strong enough that the GW background can be observed by DECIGO.
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
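The overfitting problem the abstract describes (a singular, high-variance sample covariance when variables outnumber samples) can be illustrated with a simple shrinkage sketch. This is not the paper's hierarchical model; it is a generic Ledoit-Wolf-style regularization toward a scaled identity, with the shrinkage weight `lam` chosen by hand for illustration.

```python
import numpy as np

def shrink_cov(X, lam):
    """Shrink the sample covariance toward a scaled-identity target.

    Illustrative regularization only (not the paper's Bayesian model):
    the convex combination (1-lam)*S + lam*mu*I is positive definite
    even when the sample covariance S is singular (n < p).
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)      # p x p sample covariance
    mu = np.trace(S) / p             # average variance sets the target scale
    return (1 - lam) * S + lam * mu * np.eye(p)

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 100))   # n << p: the sample covariance is singular
S_reg = shrink_cov(X, 0.3)
```

With 20 samples of 100 variables the raw sample covariance has at most rank 19, so any method needing its inverse fails; the shrunk estimate is invertible by construction.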
Anomalous scaling in an age-dependent branching model.
Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin
2015-02-01
We introduce a one-parameter family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(-α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition (α = 1), tree depth grows as (log n)^2. This anomalous scaling is in good agreement with the trend observed in the evolution of biological species, thus providing theoretical support for age-dependent speciation and associating it with the occurrence of a critical point.
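The growth rule above can be sketched with a small simulation (an illustrative discretization, not the authors' code; in particular the age offset `(age + 1)` used to avoid a zero-age singularity is an assumption of this sketch). Each step, one leaf is chosen with probability proportional to its age raised to the power -α and replaced by two children.

```python
import random

def grow_tree(n, alpha, seed=0):
    """Grow a binary tree to n leaves, splitting a leaf of age t
    with probability proportional to (t + 1)**(-alpha)."""
    rng = random.Random(seed)
    leaves = [(0, 0)]  # each leaf: (depth, birth_step)
    for step in range(1, n):
        weights = [(step - birth + 1) ** (-alpha) for _, birth in leaves]
        r = rng.random() * sum(weights)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if acc >= r:
                break
        depth, _ = leaves.pop(i)          # split the chosen leaf...
        leaves += [(depth + 1, step), (depth + 1, step)]  # ...into two children
    return leaves

leaves = grow_tree(200, 1.0)
mean_depth = sum(d for d, _ in leaves) / len(leaves)
```

Sweeping α and fitting mean depth against n would reproduce the transition the abstract describes: roughly log n for small α, algebraic growth for large α.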
Macro-scale turbulence modelling for flows in porous media
International Nuclear Information System (INIS)
Pinson, F.
2006-03-01
This work deals with the macroscopic modeling of turbulence in porous media. It concerns heat exchangers and nuclear reactors as well as urban flows, etc. The objective of this study is to describe, in a homogenized way, by means of a spatial average operator, turbulent flows in a solid matrix. In addition to this first operator, the use of a statistical average operator makes it possible to handle the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations for the class of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and turbulence modeling at a macroscopic scale (Reynolds tensor and turbulent dispersion). To this aim, we lean on the local modeling of turbulence, and more precisely on k-ε RANS models. The methodology of the dispersion study, derived thanks to the volume averaging theory, is extended to turbulent flows. Its application includes the simulation, at a microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even in the turbulent regime, dispersion remains one of the dominant phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominating role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic modeling of turbulence and leads us to define the sub-filter production and the wake dissipation. A three-equation macro-scale model is derived, based on balance equations for the turbulent kinetic energy, the viscous dissipation, and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed. This model is then successfully applied to the study of
Multi Scale Models for Flexure Deformation in Sheet Metal Forming
Directory of Open Access Journals (Sweden)
Di Pasquale Edmondo
2016-01-01
This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modeling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.
Scaling of Core Material in Rubble Mound Breakwater Model Tests
DEFF Research Database (Denmark)
Burcharth, H. F.; Liu, Z.; Troch, P.
1999-01-01
The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity vary in space and time, which makes it impossible to arrive at a fully correct scaling. The paper presents an empirical formula for the estimation of the wave-induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation, can be used for the estimation of pore velocities in cores. The paper proposes that the diameter of the core material in models be chosen in such a way that the Froude scale law holds for a characteristic pore velocity, chosen as the average velocity of the most critical area in the core with respect to porous flow. Finally, the method is demonstrated...
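Froude scaling, as invoked above for the characteristic pore velocity, fixes how velocities and times map between model and prototype: both scale with the square root of the geometric scale ratio. A minimal sketch (the function name and the example numbers are illustrative, not taken from the paper):

```python
import math

def froude_scale(length_ratio, *, velocity=None, period=None):
    """Froude scaling: model velocity and model period are the prototype
    values divided by sqrt(length_ratio), since Fr = v / sqrt(g*L) must
    be equal in model and prototype."""
    s = math.sqrt(length_ratio)
    out = {}
    if velocity is not None:
        out["model_velocity"] = velocity / s
    if period is not None:
        out["model_period"] = period / s
    return out

# e.g. a 1:30.8 geometric scale: a 12 s prototype wave period
scaled = froude_scale(30.8, period=12.0)
```

With these illustrative numbers the 12 s prototype period maps to about 2.16 s in the model, consistent with the order of magnitude of wave periods achievable in laboratory basins.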
Validity of the Neuromuscular Recovery Scale: a measurement model approach.
Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L
2015-08-01
Objective: To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Design: Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality; the Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Setting: Seven NeuroRecovery Network clinical sites. Participants: Outpatients (N=188) with spinal cord injury. Interventions: Not applicable. Main Outcome Measure: NRS. Results: While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Items not meeting the rating scale criteria appear to be related to low frequency counts. Conclusions: The NRS met many of the Rasch model criteria for construct validity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
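The 1-parameter (Rasch) model underlying the analysis above can be sketched in its simplest dichotomous form; the partial-credit extension used for the NRS adds per-item step thresholds, which are omitted in this illustrative sketch.

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability that a person of ability
    theta (in logits) succeeds on an item of difficulty b (in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When ability equals difficulty the success probability is exactly 0.5; "easy" items like Stand retraining correspond to low b, "hard" items like Walking to high b, which is what the reported item-difficulty ordering expresses.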
A Network Contention Model for the Extreme-scale Simulator
Energy Technology Data Exchange (ETDEWEB)
Engelmann, Christian [ORNL]; Naughton III, Thomas J. [ORNL]
2015-01-01
The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a design that is less synchronous yet accurate enough. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.
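The kind of contention and bandwidth-capacity modeling mentioned above can be illustrated with a toy latency model (purely illustrative; xSim's actual model is more detailed, and the even bandwidth split among concurrent flows is an assumption of this sketch).

```python
def message_latency(size_bytes, link_bw, base_latency, concurrent):
    """Toy network model: message latency is a fixed per-hop latency plus
    a transfer time, where flows sharing a link split its bandwidth evenly
    (an assumed contention rule, not xSim's)."""
    effective_bw = link_bw / max(1, concurrent)
    return base_latency + size_bytes / effective_bw
```

Under this rule, doubling the number of concurrent flows on a link exactly doubles the transfer-time term while the base latency stays fixed, which is the simplest way a simulator can charge for contention without synchronizing every flow.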
Use of genome-scale microbial models for metabolic engineering
DEFF Research Database (Denmark)
Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens
2004-01-01
Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role for phenotypic analysis, but can also be used for the design of optimal metabolic network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction.
Wind Farm Wake Models From Full Scale Data
DEFF Research Database (Denmark)
Knudsen, Torben; Bak, Thomas
2012-01-01
This investigation is part of the EU FP7 project “Distributed Control of Large-Scale Offshore Wind Farms”. The overall goal in this project is to develop wind farm controllers giving power set points to individual turbines in the farm in order to minimise mechanical loads and optimise power. One...... on real full scale data. The modelling is based on so called effective wind speed. It is shown that there is a wake for a wind direction range of up to 20 degrees. Further, when accounting for the wind direction it is shown that the two model structures considered can both fit the experimental data...
Ground-water solute transport modeling using a three-dimensional scaled model
International Nuclear Information System (INIS)
Crider, S.S.
1987-01-01
Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport
Atomic scale modelling of materials of the nuclear fuel cycle
International Nuclear Information System (INIS)
Bertolus, M.
2011-10-01
This document written to obtain the French accreditation to supervise research presents the research I conducted at CEA Cadarache since 1999 on the atomic scale modelling of non-metallic materials involved in the nuclear fuel cycle: host materials for radionuclides from nuclear waste (apatites), fuel (in particular uranium dioxide) and ceramic cladding materials (silicon carbide). These are complex materials at the frontier of modelling capabilities since they contain heavy elements (rare earths or actinides), exhibit complex structures or chemical compositions and/or are subjected to irradiation effects: creation of point defects and fission products, amorphization. The objective of my studies is to bring further insight into the physics and chemistry of the elementary processes involved using atomic scale modelling and its coupling with higher scale models and experimental studies. This work is organised in two parts: on the one hand the development, adaptation and implementation of atomic scale modelling methods and validation of the approximations used; on the other hand the application of these methods to the investigation of nuclear materials under irradiation. This document contains a synthesis of the studies performed, orientations for future research, a detailed resume and a list of publications and communications. (author)
Spatiotemporal exploratory models for broad-scale survey data.
Fink, Daniel; Hochachka, Wesley M; Zuckerberg, Benjamin; Winkler, David W; Shaby, Ben; Munson, M Arthur; Hooker, Giles; Riedewald, Mirek; Sheldon, Daniel; Kelling, Steve
2010-12-01
The distributions of animal populations change and evolve through time. Migratory species exploit different habitats at different times of the year. Biotic and abiotic features that determine where a species lives vary due to natural and anthropogenic factors. This spatiotemporal variation needs to be accounted for in any modeling of species' distributions. In this paper we introduce a semiparametric model that provides a flexible framework for analyzing dynamic patterns of species occurrence and abundance from broad-scale survey data. The spatiotemporal exploratory model (STEM) adds essential spatiotemporal structure to existing techniques for developing species distribution models through a simple parametric structure without requiring a detailed understanding of the underlying dynamic processes. STEMs use a multi-scale strategy to differentiate between local and global-scale spatiotemporal structure. A user-specified species distribution model accounts for spatial and temporal patterning at the local level. These local patterns are then allowed to "scale up" via ensemble averaging to larger scales. This makes STEMs especially well suited for exploring distributional dynamics arising from a variety of processes. Using data from eBird, an online citizen science bird-monitoring project, we demonstrate that monthly changes in distribution of a migratory species, the Tree Swallow (Tachycineta bicolor), can be more accurately described with a STEM than a conventional bagged decision tree model in which spatiotemporal structure has not been imposed. We also demonstrate that there is no loss of model predictive power when a STEM is used to describe a spatiotemporal distribution with very little spatiotemporal variation: the distribution of a nonmigratory species, the Northern Cardinal (Cardinalis cardinalis).
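The "scale up via ensemble averaging" strategy can be sketched in miniature (a toy with an invented 1-D habitat and block-mean local models, not the eBird STEM implementation): fit a trivial local model on many randomly placed blocks, then predict at a point by averaging every block model covering it.

```python
# Toy STEM-style ensemble: per-block local models (here, just the block
# mean) are averaged wherever their blocks overlap a query point.
# Hypothetical sketch; names and data are invented.
import random

def fit_stem(data, n_blocks=200, width=2.0, seed=1):
    rng = random.Random(seed)
    lo = min(x for x, _ in data)
    hi = max(x for x, _ in data)
    blocks = []
    for _ in range(n_blocks):
        start = rng.uniform(lo - width, hi)
        inside = [y for x, y in data if start <= x < start + width]
        if inside:                        # local model: block mean
            blocks.append((start, start + width, sum(inside) / len(inside)))
    def predict(x):
        votes = [m for a, b, m in blocks if a <= x < b]
        return sum(votes) / len(votes) if votes else None
    return predict

# abundance jumps at x = 5 (e.g. a habitat boundary)
data = [(x / 10.0, 0.0 if x < 50 else 1.0) for x in range(100)]
predict = fit_stem(data)
```

Because each local model only ever sees nearby data, the ensemble recovers the sharp spatial transition without any global parametric form, which is the essence of the multi-scale strategy the abstract describes.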
Scaling and percolation in the small-world network model
Energy Technology Data Exchange (ETDEWEB)
Newman, M. E. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States); Watts, D. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States)
1999-12-01
In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society.
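The Watts-Strogatz construction underlying these results is straightforward to reproduce (a minimal sketch; rewiring details vary between formulations, and the distance routine assumes the graph stays connected):

```python
# Watts-Strogatz small-world model: a ring lattice whose edges are
# rewired with probability p; shortcuts collapse the mean vertex-vertex
# distance, the crossover the abstract analyses.
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):                      # ring lattice, k/2 per side
        for j in range(1, k // 2 + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    for v in range(n):                      # rewire clockwise edges
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                w = (v + j) % n
                u = rng.randrange(n)
                if u != v and u not in adj[v]:
                    adj[v].discard(w); adj[w].discard(v)
                    adj[v].add(u); adj[u].add(v)
    return adj

def mean_distance(adj):
    """Average shortest-path length via BFS (assumes connectivity)."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

ring = mean_distance(watts_strogatz(200, 4, 0.0))   # large world: ~n/(2k)
small = mean_distance(watts_strogatz(200, 4, 0.1))  # shortcuts shrink it
```

With p = 0 the mean distance grows linearly with n (here about 25 for n = 200, k = 4); a small rewiring probability already produces the small-world drop.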
Scaling and percolation in the small-world network model
International Nuclear Information System (INIS)
Newman, M. E. J.; Watts, D. J.
1999-01-01
In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society
Multilevel method for modeling large-scale networks.
Energy Technology Data Exchange (ETDEWEB)
Safro, I. M. (Mathematics and Computer Science)
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data after having been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomizing while satisfying some attributes at the same time can abolish those topological attributes that have been undefined or hidden from
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
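The practical consequence of homogenization can be seen in a toy calculation (a sketch under the assumption, consistent with ecological-diffusion homogenization, that the large-scale coefficient behaves like a harmonic-type mean of the local motilities rather than their arithmetic mean; the numbers are invented):

```python
# Effective large-scale coefficient for rapidly varying local motilities.
# Toy contrast between the naive arithmetic mean and a harmonic mean for
# a two-habitat landscape with equal areas; not the paper's derivation.

def arithmetic_mean(mus):
    return sum(mus) / len(mus)

def harmonic_mean(mus):
    return len(mus) / sum(1.0 / m for m in mus)

# equal areas of fast habitat (open, mu = 100 m^2/day) and slow (mu = 1)
mus = [100.0, 1.0]
naive = arithmetic_mean(mus)     # dominated by the fast habitat
effective = harmonic_mean(mus)   # the slow habitat controls dispersal
```

The harmonic-type average is far below the arithmetic one (about 1.98 versus 50.5 here), which is exactly why small patches of slow habitat can dominate large-scale movement even when most of the landscape is fast.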
Two-dimensional divertor modeling and scaling laws
International Nuclear Information System (INIS)
Catto, P.J.; Connor, J.W.; Knoll, D.A.
1996-01-01
Two-dimensional numerical models of divertors contain large numbers of dimensionless parameters that must be varied to investigate all operating regimes of interest. To simplify the task and gain insight into divertor operation, we employ similarity techniques to investigate whether model systems of equations plus boundary conditions in the steady state admit scaling transformations that lead to useful divertor similarity scaling laws. A short mean free path neutral-plasma model of the divertor region below the x-point is adopted in which all perpendicular transport is due to the neutrals. We illustrate how the results can be used to benchmark large computer simulations by employing a modified version of UEDGE which contains a neutral fluid model. (orig.)
Active Learning of Classification Models with Likert-Scale Feedback.
Xue, Yanbing; Hauskrecht, Milos
2017-01-01
Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
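The two ingredients the abstract combines can be sketched with a deliberately simple stand-in model (a hypothetical confidence-weighted centroid classifier, not the paper's Bayesian/incremental-SVM machinery; all names and numbers are invented): Likert ratings downweight uncertain labels, and the next query is the pool example with the smallest margin.

```python
# Sketch of active learning with Likert-scale feedback: labels carry a
# 1-5 confidence that weights their influence, and the next example to
# annotate is the one the current model is least sure about.

def fit_weighted_centroids(labeled):
    """labeled: list of (x, y in {-1,+1}, likert in 1..5). Returns a
    signed-margin score built from confidence-weighted class centroids."""
    sums = {-1: 0.0, +1: 0.0}
    wts = {-1: 0.0, +1: 0.0}
    for x, y, likert in labeled:
        w = likert / 5.0               # Likert rating as label weight
        sums[y] += w * x
        wts[y] += w
    c_neg = sums[-1] / wts[-1]
    c_pos = sums[+1] / wts[+1]
    mid = (c_pos + c_neg) / 2.0
    direction = c_pos - c_neg
    return lambda x: direction * (x - mid)   # signed margin

def query_next(score, pool):
    """Uncertainty sampling: pick the smallest absolute margin."""
    return min(pool, key=lambda x: abs(score(x)))

labeled = [(0.0, -1, 5), (1.0, -1, 2), (9.0, +1, 5), (8.0, +1, 3)]
score = fit_weighted_centroids(labeled)
nxt = query_next(score, [0.5, 4.6, 9.5])   # picks the borderline example
```

The same two hooks (a label-weighting rule and a query-selection rule) are what any richer model, such as the paper's SVM-based learner, would expose.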
Multi-scale Modeling of Plasticity in Tantalum.
Energy Technology Data Exchange (ETDEWEB)
Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)
2015-12-01
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum and validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature and strain rate dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature and strain rate dependent yield stresses of single and polycrystalline tantalum and compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature and strain rate dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformations at the grain scale and to engineering-scale applications. Furthermore, direct
International Nuclear Information System (INIS)
Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B
2013-01-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)
Optogenetic stimulation of a meso-scale human cortical model
Selvaraj, Prashanth; Szeri, Andrew; Sleigh, Jamie; Kirsch, Heidi
2015-03-01
Neurological phenomena like sleep and seizures depend not only on the activity of individual neurons, but on the dynamics of neuron populations as well. Meso-scale models of cortical activity provide a means to study neural dynamics at the level of neuron populations. Additionally, they offer a safe and economical way to test the effects and efficacy of stimulation techniques on the dynamics of the cortex. Here, we use a physiologically relevant meso-scale model of the cortex to study the hypersynchronous activity of neuron populations during epileptic seizures. The model consists of a set of stochastic, highly non-linear partial differential equations. Next, we use optogenetic stimulation to control seizures in a hyperexcited cortex, and to induce seizures in a normally functioning cortex. The high spatial and temporal resolution this method offers makes a strong case for the use of optogenetics in treating meso-scale cortical disorders such as epileptic seizures. We use bifurcation analysis to investigate the effect of optogenetic stimulation in the meso-scale model, and its efficacy in suppressing the non-linear dynamics of seizures.
Small-Scale Helicopter Automatic Autorotation : Modeling, Guidance, and Control
Taamallah, S.
2015-01-01
Our research objective is to develop a model-based automatic safety recovery system for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e. an engine-OFF flight condition, that safely flies and lands the helicopter at a pre-specified ground location. In pursuit
Phenomenological aspects of no-scale inflation models
Energy Technology Data Exchange (ETDEWEB)
Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics,King’s College London,WC2R 2LS London (United Kingdom); Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Garcia, Marcos A.G. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics andAstronomy, Texas A& M University,College Station, 77843 Texas (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), Mitchell Campus, Woodlands, 77381 Texas (United States); Academy of Athens, Division of Natural Sciences, 28 Panepistimiou Avenue, 10679 Athens (Greece); Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States)
2015-10-01
We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m_0 = B_0 = A_0 = 0, of the CMSSM type with universal A_0 and m_0 ≠ 0 at a high scale, and of the mSUGRA type with A_0 = B_0 + m_0 boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m_1/2 ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.
Modeling and simulation in tribology across scales: An overview
DEFF Research Database (Denmark)
Vakis, A.I.; Yastrebov, V.A.; Scheibert, J.
2018-01-01
theories at the nano- and micro-scales, as well as multiscale and multiphysics aspects for analytical and computational models relevant to applications spanning a variety of sectors, from automotive to biotribology and nanotechnology. Significant effort is still required to account for complementary...
Large scale solar district heating. Evaluation, modelling and designing - Appendices
Energy Technology Data Exchange (ETDEWEB)
Heller, A.
2000-07-01
The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)
Vegetable parenting practices scale: Item response modeling analyses
Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...
Scale-invariant inclusive spectra in a dual model
International Nuclear Information System (INIS)
Chikovani, Z.E.; Jenkovsky, L.L.; Martynov, E.S.
1979-01-01
One-particle inclusive distributions at large transverse momentum p_T are shown to scale as E dσ/d³p ≈ p_T^{-N} (1 - x_T)^{1+N/2} ln p_T in a dual model with Mandelstam analyticity, if the Regge trajectories are asymptotically logarithmic
Learning in an estimated medium-scale DSGE model
Czech Academy of Sciences Publication Activity Database
Slobodyan, Sergey; Wouters, R.
2012-01-01
Vol. 36, No. 1 (2012), pp. 26-46. ISSN 0165-1889. R&D Projects: GA ČR(CZ) GCP402/11/J018. Institutional support: PRVOUK-P23. Keywords: constant-gain adaptive learning * medium-scale DSGE model * DSGE-VAR. Subject RIV: AH - Economics. Impact factor: 0.807, year: 2012
Directory of Open Access Journals (Sweden)
Laura Casas
The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation.
Casas, Laura; Szűcs, Réka; Vij, Shubha; Goh, Chin Heng; Kathiresan, Purushothaman; Németh, Sándor; Jeney, Zsigmond; Bercsényi, Miklós; Orbán, László
2013-01-01
The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect) probably due to a concerted action of multiple pathways involved in scale formation. © 2013 Casas et al.
Casas, Laura
2013-12-30
The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect) probably due to a concerted action of multiple pathways involved in scale formation. © 2013 Casas et al.
Including investment risk in large-scale power market models
DEFF Research Database (Denmark)
Lemming, Jørgen Kjærgaard; Meibom, P.
2003-01-01
Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...
Application of physical scaling towards downscaling climate model precipitation data
Gaur, Abhishek; Simonovic, Slobodan P.
2018-04-01
The physical scaling (SP) method downscales climate model data to local or regional scales taking into consideration physical characteristics of the area under analysis. In this study, multiple SP method based models are tested for their effectiveness towards downscaling North American Regional Reanalysis (NARR) daily precipitation data. Model performance is compared with two state-of-the-art downscaling methods: the statistical downscaling model (SDSM) and generalized linear modeling (GLM). The downscaled precipitation is evaluated with reference to recorded precipitation at 57 gauging stations located within the study region. The spatial and temporal robustness of the downscaling methods is evaluated using seven precipitation-based indices. Results indicate that SP method based models perform best in downscaling precipitation, followed by GLM and then the SDSM model. The best-performing models are thereafter used to downscale future precipitation projections made by three global circulation models (GCMs) following two emission scenarios: representative concentration pathway (RCP) 2.6 and RCP 8.5, over the twenty-first century. The downscaled future precipitation projections indicate an increase in mean and maximum precipitation intensity as well as a decrease in the total number of dry days. Further, an increase in the frequency of short (1-day), moderately long (2-4 day), and long (more than 5-day) precipitation events is projected.
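Statistical downscaling and bias-correction methods of the kind compared here often use empirical quantile mapping as a building block: each model value is replaced by the observed value at the same empirical quantile. The sketch below is a generic textbook illustration, not the SP, SDSM, or GLM implementations of the study, and the sample data are invented.

```python
# Generic empirical quantile mapping: map each model value to the observed
# value at the same empirical quantile. Not the study's methods; data invented.

def quantile_map(model_values, observed_values):
    """Replace each model value by the observation at the same empirical rank."""
    obs_sorted = sorted(observed_values)
    mod_sorted = sorted(model_values)
    n_obs = len(obs_sorted)
    corrected = []
    for v in model_values:
        rank = sum(1 for m in mod_sorted if m <= v) - 1   # 0-based rank of v
        q = rank / (len(mod_sorted) - 1)                  # empirical quantile
        corrected.append(obs_sorted[min(int(round(q * (n_obs - 1))), n_obs - 1)])
    return corrected

model = [0.0, 1.0, 2.0, 3.0, 4.0]   # model series (too dry; invented)
obs = [0.0, 2.0, 4.0, 6.0, 8.0]     # observed series with twice the spread
out = quantile_map(model, obs)      # maps the model onto the observed range
```

Because the mapping is rank-based, the corrected series reproduces the observed distribution while preserving the temporal ordering of the model series.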
Modelling Planck-scale Lorentz violation via analogue models
International Nuclear Information System (INIS)
Weinfurtner, Silke; Liberati, Stefano; Visser, Matt
2006-01-01
Astrophysical tests of Planck-suppressed Lorentz violations have been extensively studied in recent years, and very stringent constraints have been obtained within the framework of effective field theory. There are, however, still some unresolved theoretical issues, in particular regarding the so-called 'naturalness problem', which arises when postulating that Planck-suppressed Lorentz violations arise only from operators with mass dimension greater than four in the Lagrangian. In the work presented here we try to address this problem by looking at a condensed-matter analogue of the Lorentz violations considered in quantum gravity phenomenology. Specifically, we investigate the class of two-component BECs subject to laser-induced transitions between the two components, and we show that this model is an example of Lorentz invariance violation due to ultraviolet physics. We show that such a model can be considered an explicit example of high-energy Lorentz violation where the 'naturalness problem' does not arise.
Modeling and Simulation Techniques for Large-Scale Communications Modeling
National Research Council Canada - National Science Library
Webb, Steve
1997-01-01
.... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
Drift-Scale Coupled Processes (DST and THC Seepage) Models
Energy Technology Data Exchange (ETDEWEB)
E. Sonnenthal; N. Spycher
2001-02-05
The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: (1) Performance Assessment (PA); (2) Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); (3) UZ Flow and Transport Process Model Report (PMR); and (4) Near-Field Environment (NFE) PMR. The work scope for this activity is presented in the TWPs cited above, and summarized as follows: continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data
New Models and Methods for the Electroweak Scale
Energy Technology Data Exchange (ETDEWEB)
Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac
Chemical theory and modelling through density across length scales
International Nuclear Information System (INIS)
Ghosh, Swapan K.
2016-01-01
One of the concepts that has played a major role in the conceptual as well as computational developments covering all the length scales of interest in a number of areas of chemistry, physics, chemical engineering and materials science is the concept of single-particle density. Density functional theory has been a versatile tool for the description of many-particle systems across length scales. Thus, in the microscopic length scale, an electron density based description has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids. Density concept has been used in the form of single particle number density in the intermediate mesoscopic length scale to obtain an appropriate picture of the equilibrium and dynamical processes, dealing with a wide class of problems involving interfacial science and soft condensed matter. In the macroscopic length scale, however, matter is usually treated as a continuous medium and a description using local mass density, energy density and other related property density functions has been found to be quite appropriate. The basic ideas underlying the versatile uses of the concept of density in the theory and modelling of materials and phenomena, as visualized across length scales, along with selected illustrative applications to some recent areas of research on hydrogen energy, soft matter, nucleation phenomena, isotope separation, and separation of mixture in condensed phase, will form the subject matter of the talk. (author)
Extending SME to Handle Large-Scale Cognitive Modeling.
Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre
2017-07-01
Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
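The greedy-merging idea can be illustrated with a much-simplified sketch: local match hypotheses are sorted by structural score and absorbed into a global mapping whenever they keep the correspondence one-to-one. This conveys only the flavour of the step, not the actual SME algorithm, and the hypothesis scores below are invented.

```python
# Much-simplified greedy merge in the spirit of SME's polynomial-time mapping
# construction (an illustration, not the real algorithm): hypotheses are
# absorbed best-score-first while the base-to-target mapping stays one-to-one.

def greedy_merge(match_hypotheses):
    """match_hypotheses: list of (score, base_item, target_item) tuples."""
    mapping = {}
    used_targets = set()
    total_score = 0.0
    # O(n log n) sort plus a linear scan; SME's full procedure is O(n^2 log n).
    for score, base, target in sorted(match_hypotheses, reverse=True):
        if base not in mapping and target not in used_targets:
            mapping[base] = target
            used_targets.add(target)
            total_score += score
    return mapping, total_score

# Invented solar-system/atom hypotheses with invented scores.
hyps = [(0.9, "sun", "nucleus"), (0.8, "planet", "electron"),
        (0.5, "sun", "electron"), (0.3, "planet", "nucleus")]
best_map, score = greedy_merge(hyps)
```

Here the two high-scoring, mutually consistent hypotheses are kept and the conflicting lower-scoring ones are discarded, yielding one coherent interpretation.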
A Lagrangian dynamic subgrid-scale model of turbulence
Meneveau, C.; Lund, T. S.; Cabot, W.
1994-01-01
A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.
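The Lagrangian averaging described above amounts to relaxing a running average toward the local value over a time scale T while advecting it along pathlines, discretized with first-order Euler in time and linear interpolation in space. The 1-D sketch below uses the relaxation weight ε = (Δt/T)/(1 + Δt/T); the field, velocity and all parameters are illustrative assumptions, not values from the paper.

```python
# Minimal 1-D sketch of Lagrangian exponential averaging: the running average
# I relaxes toward the local value phi over time scale T while being advected
# along pathlines. First-order Euler in time, linear interpolation in space.
# The constant field and all parameters below are illustrative assumptions.

def interpolate(field, x, dx):
    """Linear interpolation of a periodic 1-D field sampled every dx."""
    n = len(field)
    s = (x / dx) % n
    i = int(s)
    w = s - i
    return (1.0 - w) * field[i] + w * field[(i + 1) % n]

def lagrangian_average_step(I, phi, u, dt, dx, T):
    """Advance the Lagrangian average one step along pathlines."""
    eps = (dt / T) / (1.0 + dt / T)        # relaxation weight
    new_I = []
    for i in range(len(I)):
        x_departure = i * dx - u * dt      # follow the pathline backwards
        upstream = interpolate(I, x_departure, dx)
        new_I.append(eps * phi[i] + (1.0 - eps) * upstream)
    return new_I

n, dx, dt, u, T = 64, 1.0, 0.1, 0.5, 2.0
I = [0.0] * n
phi = [1.0] * n            # constant source: I should relax toward 1
for _ in range(500):
    I = lagrangian_average_step(I, phi, u, dt, dx, T)
```

With a constant source the average converges exponentially to the source value, which is the purely dissipative behaviour the time-scale choice is designed to guarantee.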
The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...
Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales
International Nuclear Information System (INIS)
Krstic, Predrag S.
2014-01-01
Discussion of the accuracy of AMO data to be used in the plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude of energy, spatial and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, while temporal and spatial scales go from fs to years and from nm's to m's and more, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an “integrated AMO science” (IAMO). The principal goal of IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules, and photons in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, the data provision to the plasma modeling community is a “two-way road” as long as the accuracy of the data is considered, requiring close interactions of the AMO and plasma modeling communities.
Modeling fast and slow earthquakes at various scales.
Ide, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.
Testing of materials and scale models for impact limiters
International Nuclear Information System (INIS)
Maji, A.K.; Satpathi, D.; Schryer, H.L.
1991-01-01
Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and the design and testing of scale models made out of these "Impact Limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation, and compared with static test results. A scale model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program. Additional information will be available and reported at the time of presentation of the paper.
Coalescing colony model: Mean-field, scaling, and geometry
Carra, Giulia; Mallick, Kirone; Barthelemy, Marc
2017-12-01
We analyze the coalescing model where a 'primary' colony grows and randomly emits secondary colonies that spread and eventually coalesce with it. This model describes population proliferation in theoretical ecology and tumor growth, and is also of great interest for modeling urban sprawl. Assuming the primary colony to be always circular of radius r(t) and the emission rate proportional to r(t)^θ, where θ > 0, we derive the mean-field equations governing the dynamics of the primary colony, calculate the scaling exponents versus θ, and compare our results with numerical simulations. We then critically test the validity of the circular approximation for the colony shape and show that it is sound for a constant emission rate (θ = 0). However, when the emission rate is proportional to the perimeter, the circular approximation breaks down and the roughness of the primary colony cannot be discarded, thus modifying the scaling exponents.
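One piece of the mean-field bookkeeping is easy to make concrete: if the primary colony stays circular with r(t) = v·t and emits secondaries at rate a·r(t)^θ, the expected number of emissions up to time t is the time integral of that rate. The sketch below checks the numerics against the closed form a·v^θ·t^(θ+1)/(θ+1); the parameter values are invented and this is not the paper's full model, which also tracks the secondary colonies and their coalescence.

```python
# Back-of-the-envelope emission bookkeeping for a circular primary colony:
# r(s) = v*s and emission rate a*r(s)**theta. Parameters are illustrative.

def expected_emissions(a, v, theta, t, n_steps=100000):
    """Midpoint-rule integral of the emission rate a * (v*s)^theta over [0, t]."""
    ds = t / n_steps
    total = 0.0
    for k in range(n_steps):
        s = (k + 0.5) * ds
        total += a * (v * s) ** theta * ds
    return total

a, v, theta, t = 0.2, 1.5, 1.0, 10.0
numeric = expected_emissions(a, v, theta, t)
exact = a * v ** theta * t ** (theta + 1) / (theta + 1)   # closed form
```

The superlinear growth of the emission count with t for θ > 0 is what makes the secondary colonies, and eventually the roughness of the primary, dynamically important.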
Lepton Dipole Moments in Supersymmetric Low-Scale Seesaw Models
Ilakovac, Amon; Popov, Luka
2014-01-01
We study the anomalous magnetic and electric dipole moments of charged leptons in supersymmetric low-scale seesaw models with right-handed neutrino superfields. We consider a minimally extended framework of minimal supergravity, assuming that CP violation originates from complex soft SUSY-breaking bilinear and trilinear couplings associated with the right-handed sneutrino sector. We present numerical estimates of the muon anomalous magnetic moment and the electron electric dipole moment (EDM) as functions of key model parameters, such as the Majorana mass scale m_N and tan β. In particular, we find that the contributions of the singlet heavy neutrinos and sneutrinos to the electron EDM are naturally small in this model, of order 10^{-27} - 10^{-28} e cm, and can be probed in present and future experiments.
Multiresolution comparison of precipitation datasets for large-scale models
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
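On a single terrain profile, the Boolean viewshed and the "angle above the local horizon" extension both reduce to a running maximum of sightline angles from the observer. The sketch below shows that reduction; the cell size, observer eye height and the profile itself are invented, and real analyses of course operate on a 2-D surface model rather than a 1-D profile.

```python
# Minimal 1-D sketch of line-of-sight visibility plus an "angle above local
# horizon" extended viewshed. Profile, cell size and eye height are invented.
import math

def viewshed(profile, observer_idx, cell=1.0, eye=1.7):
    """Return (visible, angle_above_horizon_deg) for cells right of the observer."""
    obs_h = profile[observer_idx] + eye
    horizon = -math.pi / 2            # steepest sightline angle seen so far
    visible, angles = [], []
    for j in range(observer_idx + 1, len(profile)):
        dist = (j - observer_idx) * cell
        ang = math.atan2(profile[j] - obs_h, dist)
        visible.append(ang > horizon)                 # Boolean viewshed
        angles.append(math.degrees(ang - horizon))    # > 0 means above horizon
        horizon = max(horizon, ang)
    return visible, angles

profile = [10, 10, 12, 11, 15, 11, 20]   # invented elevations
vis, ang = viewshed(profile, 0)
```

Cells hidden behind a nearer rise get a negative angle difference, which is exactly the extra information the extended viewshed adds over the Boolean result.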
Multi-scale climate modelling over Southern Africa using a variable-resolution global model
CSIR Research Space (South Africa)
Engelbrecht, FA
2011-12-01
Full Text Available. Multi-scale climate modelling over Southern Africa using a variable-resolution global model. FA Engelbrecht, WA Landman, CJ Engelbrecht, S Landman, MM Bopape, B Roux, JL McGregor and M Thatcher. Keywords: multi-scale climate modelling, variable-resolution atmospheric model. Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales...
Performance prediction of industrial centrifuges using scale-down models.
Boychyn, M; Yim, S S S; Bulmer, M; More, J; Bracewell, D G; Hoare, M
2004-12-01
Computational fluid dynamics was used to model the high flow forces found in the feed zone of a multichamber-bowl centrifuge and reproduce these in a small, high-speed rotating disc device. Linking the device to scale-down centrifugation permitted good estimation of the performance of various continuous-flow centrifuges (disc stack, multichamber bowl, CARR Powerfuge) for shear-sensitive protein precipitates. Critically, the ultra scale-down centrifugation process proved to be a much more accurate predictor of production multichamber-bowl performance than was the pilot centrifuge.
Design and Modelling of Small Scale Low Temperature Power Cycles
DEFF Research Database (Denmark)
Wronski, Jorrit
The work presented in this report contributes to the state of the art within design and modelling of small scale low temperature power cycles. The study is divided into three main parts: (i) fluid property evaluation, (ii) expansion device investigations and (iii) heat exchanger performance...-oriented Modelica code and was included in the ThermoCycle framework for small scale ORC systems. Special attention was paid to the valve system and a control method for variable expansion ratios was introduced based on a cogeneration scenario. Admission control based on evaporator and condenser conditions...
Matrix models, Argyres-Douglas singularities and double scaling limits
International Nuclear Information System (INIS)
Bertoldi, Gaetano
2003-01-01
We construct an N = 1 theory with gauge group U(nN) and degree n+1 tree level superpotential whose matrix model spectral curve develops an Argyres-Douglas singularity. The calculation of the tension of domain walls in the U(nN) theory shows that the standard large-N expansion breaks down at the Argyres-Douglas points, with tension that scales as a fractional power of N. Nevertheless, it is possible to define appropriate double scaling limits which are conjectured to yield the tension of 2-branes in the resulting N = 1 four dimensional non-critical string theories as proposed by Ferrari. (author)
Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale
Sobolev, S. V.; Muldashev, I. A.
2015-12-01
Subduction is an essentially multi-scale process with time scales spanning from geological to earthquake scale, with the seismic cycle in-between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following decreasing displacement rates during the postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with a total time of millions of years. This technique allows us to follow in detail the deformation process during the entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to the conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge with viscosity strongly varying with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. We will also present results of the modeling of deformation of the
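The adaptive time-step policy described above (drop to the minimum step when an instability is detected, then grow the step back as displacement rates decay) can be sketched as a simple controller. The 40 s and 5 yr limits are taken from the abstract; the velocity threshold, growth factor and the hyperbolic-decay constants are assumptions for illustration only.

```python
# Toy sketch of the adaptive time-step policy: shrink to DT_MIN during fast
# slip, grow geometrically back to DT_MAX as rates decay. Step limits are from
# the abstract; threshold, growth factor and decay constants are assumptions.

DT_MIN = 40.0                    # s, coseismic step
DT_MAX = 5.0 * 365.25 * 86400.0  # s, ~5 yr interseismic step

def next_time_step(dt, slip_velocity, v_seismic=1e-3, growth=1.5):
    """Drop to DT_MIN while slip is fast, else grow the step geometrically."""
    if slip_velocity > v_seismic:        # instability detected
        return DT_MIN
    return min(dt * growth, DT_MAX)      # gradual postseismic recovery

def afterslip_velocity(t, v0=1.0, t_c=3600.0):
    """Hyperbolic decay v0 / (1 + t/t_c), echoing the modelled fault slip."""
    return v0 / (1.0 + t / t_c)

dt, t, steps = DT_MIN, 0.0, 0
while dt < DT_MAX:                       # integrate one postseismic episode
    t += dt
    dt = next_time_step(dt, afterslip_velocity(t))
    steps += 1
```

The controller keeps the step at its minimum through the rapid early afterslip and then expands it by orders of magnitude once relaxation slows, which is what makes million-year runs with resolved earthquakes feasible.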
Y-Scaling in a simple quark model
International Nuclear Information System (INIS)
Kumano, S.; Moniz, E.J.
1988-01-01
A simple quark model is used to define a nuclear pair model, that is, two composite hadrons interacting only through quark interchange and bound in an overall potential. An ''equivalent'' hadron model is developed, displaying an effective hadron-hadron interaction which is strongly repulsive. We compare the effective hadron model results with the exact quark model observables in the kinematic region of large momentum transfer and small energy transfer. The nucleon response function in this y-scaling region is, within the traditional framework, sensitive to the nucleon momentum distribution at large momentum. We find a surprisingly small effect of hadron substructure. Furthermore, we find in our model that a simple parametrization of modified hadron size in the bound state, motivated by the bound quark momentum distribution, is not a useful way to correlate different observables.
Site-scale groundwater flow modelling of Beberg
Energy Technology Data Exchange (ETDEWEB)
Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden); Walker, D. [Duke Engineering and Services (United States); Hartley, L. [AEA Technology, Harwell (United Kingdom)
1999-08-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant head boundary conditions from a modified version of the deterministic regional scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10^-4, and a flow-wetted surface of a_r = 1.0 m^2/(m^3 rock) suggest the following statistics for the Base Case: The median travel time is 56 years. The median canister flux is 1.2 x 10^-3 m/year. The median F-ratio is 5.6 x 10^5 year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. Variability within realisations indicates
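The Monte Carlo idea, propagating a stochastic hydraulic conductivity field to travel-time and F-ratio statistics, can be caricatured on a single 1-D streamline. This is a hedged toy, not HYDRASTAR: the porosity and flow-wetted surface are the Base Case values quoted above, but the gradient, path discretization and conductivity moments are invented and the units are schematic.

```python
# Toy Monte Carlo along one streamline (not HYDRASTAR): sample lognormal
# hydraulic conductivity per segment, form the Darcy flux q = K*i, accumulate
# travel time sum(porosity*dl/q) and F-ratio sum(a_r*dl/q), take medians.
import random, statistics

random.seed(1)
POROSITY = 1e-4         # flow porosity (Base Case value from the abstract)
A_R = 1.0               # flow-wetted surface, m^2/(m^3 rock) (from the abstract)
GRADIENT = 1e-3         # hydraulic gradient (invented)
N_CELLS, DL = 100, 5.0  # 500 m path in 5 m segments (invented)

def one_realisation():
    """Travel time and F-ratio along one streamline with random log-K."""
    travel_time, f_ratio = 0.0, 0.0
    for _ in range(N_CELLS):
        log10_k = random.gauss(-7.0, 1.0)   # lognormal K, invented moments
        q = 10.0 ** log10_k * GRADIENT      # Darcy flux (schematic units)
        travel_time += POROSITY * DL / q
        f_ratio += A_R * DL / q
    return travel_time, f_ratio

results = [one_realisation() for _ in range(200)]
median_t = statistics.median(t for t, _ in results)
median_f = statistics.median(f for _, f in results)
```

In this caricature the F-ratio is simply a_r/ε_f times the travel time along each path, which makes visible why the two statistics co-vary across realisations.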
Hydrological Modelling of Small Scale Processes in a Wetland Habitat
DEFF Research Database (Denmark)
Johansen, Ole; Jensen, Jacob Birk; Pedersen, Morten Lauge
2009-01-01
Numerical modelling of the hydrology in a Danish rich fen area has been conducted. By collecting various data in the field the model has been successfully calibrated, and the flow paths as well as the groundwater discharge distribution have been simulated in detail. The results of this work have shown that distributed numerical models can be applied to local scale problems and that natural springs, ditches, the geological conditions as well as the local topographic variations have a significant influence on the flow paths in the examined rich fen area.
Site-scale groundwater flow modelling of Beberg
International Nuclear Information System (INIS)
Gylling, B.; Walker, D.; Hartley, L.
1999-08-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant head boundary conditions from a modified version of the deterministic regional scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10^-4, and a flow-wetted surface of a_r = 1.0 m^2/(m^3 rock) suggest the following statistics for the Base Case: The median travel time is 56 years. The median canister flux is 1.2 x 10^-3 m/year. The median F-ratio is 5.6 x 10^5 year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. Variability within realisations indicates that the change in hydraulic gradient
A No-Scale Inflationary Model to Fit Them All
Ellis, John; Nanopoulos, Dimitri; Olive, Keith
2014-01-01
The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic $m^2 \phi^2/2$ potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio $r$ that is highly consistent with the Starobinsky $R + R^2$ model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction $n_s \simeq 0.96$.
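For orientation, the two limiting cases named above have standard slow-roll predictions (textbook results quoted for context, not computed in the paper); with $N$ the number of e-folds and the potentials written in Planck units:

```latex
% Quadratic (chaotic) inflation:
V(\phi) = \tfrac{1}{2} m^2 \phi^2, \qquad
n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{8}{N},

% Starobinsky (R + R^2) inflation, Einstein-frame potential:
V(\phi) = \tfrac{3}{4} M^2 \left(1 - e^{-\sqrt{2/3}\,\phi}\right)^2, \qquad
n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12}{N^2}.
```

For $N \approx 50$-$60$ both give $n_s \approx 0.96$, while $r$ differs by more than an order of magnitude ($\sim 0.15$ versus $\sim 0.004$), which is what makes an interpolating model observationally meaningful.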
Hessel, R.; Tenge, A.J.M.
2008-01-01
To reduce soil erosion, soil and water conservation (SWC) methods are often used. However, no method exists to model beforehand how implementing such measures will affect erosion at catchment scale. A method was developed to simulate the effects of SWC measures with catchment scale erosion models.
Scaling analysis and model estimation of solar corona index
Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik
2018-04-01
A monthly average solar green coronal index time series for the period from January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), has been analysed in this paper from the perspective of scaling analysis and modelling. Smoothing and de-noising have been done using a suitable mother wavelet as a pre-requisite. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis and a generalized method have been applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) is used to identify the autoregressive (AR) process, and the partial autocorrelation function (PACF) has been used to obtain the order of the AR model. Finally, a best-fit model has been proposed using the Yule-Walker method, with supporting results on goodness of fit and the wavelet spectrum. The results reveal an anti-persistent, Short Range Dependent (SRD), self-similar property with signatures of non-causality, non-stationarity and nonlinearity in the data series. The model shows the best fit to the data under observation.
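As an illustration of the rescaled-range (R/S) step mentioned above, the sketch below estimates a scaling exponent as the slope of log(R/S) against log(window length) for a synthetic white-noise series; the coronal-index data and the wavelet pre-processing are not reproduced here.

```python
# Rescaled-range (R/S) scaling estimate on synthetic data. White noise
# should give an exponent of roughly 0.5 (small-sample bias can push the
# estimate somewhat higher); anti-persistence shows up as values below 0.5.
import math
import random
import statistics

def rs(window):
    """Rescaled range R/S of one window of data."""
    m = statistics.fmean(window)
    cum, lo, hi = 0.0, 0.0, 0.0
    for x in window:
        cum += x - m
        lo, hi = min(lo, cum), max(hi, cum)
    return (hi - lo) / statistics.pstdev(window)

def scaling_exponent(series, windows=(16, 32, 64, 128)):
    """Least-squares slope of log(mean R/S) versus log(window size n)."""
    xs, ys = [], []
    for n in windows:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        xs.append(math.log(n))
        ys.append(math.log(statistics.fmean([rs(c) for c in chunks])))
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(1024)]
print("R/S exponent, white noise:", round(scaling_exponent(noise), 2))
```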
A high-resolution global-scale groundwater model
de Graaf, I. E. M.; Sutanudjaja, E. H.; van Beek, L. P. H.; Bierkens, M. F. P.
2015-02-01
Groundwater is the world's largest accessible source of fresh water. It plays a vital role in satisfying basic needs for drinking water, agriculture and industrial activities. During times of drought, groundwater sustains baseflow to rivers and wetlands, thereby supporting ecosystems. Most global-scale hydrological models (GHMs) do not include a groundwater flow component, mainly due to lack of geohydrological data at the global scale. For the simulation of lateral flow and groundwater head dynamics, a realistic physical representation of the groundwater system is needed, especially for GHMs that run at finer resolutions. In this study we present a global-scale groundwater model (run at 6' resolution) using MODFLOW to construct an equilibrium water table at its natural state as the result of long-term climatic forcing. The aquifer schematization and properties used are based on available global data sets of lithology and transmissivities combined with the estimated thickness of an upper, unconfined aquifer. This model is forced with outputs from the land-surface PCRaster Global Water Balance (PCR-GLOBWB) model, specifically net recharge and surface water levels. A sensitivity analysis, in which the model was run with various parameter settings, showed that variation in saturated conductivity has the largest impact on the simulated groundwater levels. Validation with observed groundwater heads showed that groundwater heads are reasonably well simulated for many regions of the world, especially for sediment basins (R2 = 0.95). The simulated regional-scale groundwater patterns and flow paths demonstrate the relevance of lateral groundwater flow in GHMs. Inter-basin groundwater flows can be a significant part of a basin's water budget and help to sustain river baseflows, especially during droughts. Also, water availability of larger aquifer systems can be positively affected by additional recharge from inter-basin groundwater flows.
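As a toy illustration of the equilibrium-water-table idea described above, the sketch below solves the steady-state groundwater flow equation on a small grid by Gauss-Seidel iteration; the uniform transmissivity, recharge and fixed-head boundaries are invented stand-ins for the global datasets and MODFLOW machinery actually used.

```python
# Heavily simplified finite-difference sketch of a steady-state water
# table: solve T * laplacian(h) = -R on a square grid with fixed-head
# (0 m) boundaries. All parameter values are illustrative assumptions.
N = 21        # grid cells per side
T = 100.0     # transmissivity [m^2/day] (assumed uniform)
R = 0.001     # net recharge [m/day] (assumed uniform)
DX = 1000.0   # cell size [m]

h = [[0.0] * N for _ in range(N)]  # heads; boundaries stay at 0 m
for _ in range(5000):              # Gauss-Seidel sweeps until converged
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            # five-point stencil: average of neighbours plus recharge term
            h[i][j] = 0.25 * (h[i - 1][j] + h[i + 1][j]
                              + h[i][j - 1] + h[i][j + 1]
                              + R * DX * DX / T)
print("head at domain centre [m]:", round(h[N // 2][N // 2], 1))
```

The mounded water table that results (highest head at the centre, draining to the fixed-head boundaries) is the discrete analogue of the long-term climatic equilibrium the abstract refers to.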
Using Scaling to Understand, Model and Predict Global Scale Anthropogenic and Natural Climate Change
Lovejoy, S.; del Rio Amador, L.
2014-12-01
The atmosphere is variable over twenty orders of magnitude in time (≈10^-3 to 10^17 s) and almost all of the variance is in the spectral "background" which we show can be divided into five scaling regimes: weather, macroweather, climate, macroclimate and megaclimate. We illustrate this with instrumental and paleo data. Based on the sign of the fluctuation exponent H, we argue that while the weather is "what you get" (H>0: fluctuations increasing with scale), it is macroweather (H<0: fluctuations decreasing with scale) - not climate - "that you expect". The conventional framework that treats the background as close to white noise and focuses on quasi-periodic variability assumes a spectrum that is in error by a factor of a quadrillion (≈ 10^15). Using this scaling framework, we can quantify the natural variability, distinguish it from anthropogenic variability, test various statistical hypotheses and make stochastic climate forecasts. For example, we estimate the probability that the warming is simply a giant century-long natural fluctuation to be less than 1%, most likely less than 0.1%, and estimate return periods for natural warming events of different strengths and durations, including the slow-down ("pause") in the warming since 1998. The return period for the pause was found to be 20-50 years, i.e. not very unusual; however, it immediately follows a 6 year "pre-pause" warming event of almost the same magnitude with a similar return period (30 - 40 years). To improve on these unconditional estimates, we can use scaling models to exploit the long range memory of the climate process to make accurate stochastic forecasts of the climate including the pause. We illustrate stochastic forecasts on monthly and annual scale series of global and northern hemisphere surface temperatures. We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow: for example, using hindcasts we find that at 10 year forecast horizons we can still explain ≈ 15% of the
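The sign of the fluctuation exponent H can be estimated with Haar fluctuations, in the spirit of the scaling framework described above. The sketch below recovers H ≈ +0.5 for a random walk (weather-like: fluctuations grow with scale) and H ≈ -0.5 for white noise (macroweather-like: fluctuations decrease with scale); the data are synthetic, not the instrumental or paleo series of the abstract.

```python
# Haar-fluctuation estimate of the exponent H: the mean absolute
# difference between the averages of the two halves of a window of size
# `lag` scales as lag**H; H is the log-log slope across lags.
import math
import random
import statistics

def haar_fluctuation(series, lag):
    """Mean |Haar fluctuation| at scale `lag` (lag must be even)."""
    half = lag // 2
    vals = []
    for i in range(0, len(series) - lag + 1, half):
        first = statistics.fmean(series[i:i + half])
        second = statistics.fmean(series[i + half:i + lag])
        vals.append(abs(second - first))
    return statistics.fmean(vals)

def fluctuation_exponent(series, lags=(8, 16, 32, 64)):
    """Least-squares slope of log(fluctuation) versus log(lag)."""
    xs = [math.log(l) for l in lags]
    ys = [math.log(haar_fluctuation(series, l)) for l in lags]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(2)
walk, s = [], 0.0
for _ in range(2048):          # random walk: H ~ +0.5, "weather-like"
    s += random.gauss(0.0, 1.0)
    walk.append(s)
noise = [random.gauss(0.0, 1.0) for _ in range(2048)]  # H ~ -0.5
print("H (random walk):", round(fluctuation_exponent(walk), 2))
print("H (white noise):", round(fluctuation_exponent(noise), 2))
```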
Modeling field scale unsaturated flow and transport processes
International Nuclear Information System (INIS)
Gelhar, L.W.; Celia, M.A.; McLaughlin, D.
1994-08-01
The scales of concern in subsurface transport of contaminants from low-level radioactive waste disposal facilities are in the range of 1 to 1,000 m. Natural geologic materials generally show very substantial spatial variability in hydraulic properties over this range of scales. Such heterogeneity can significantly influence the migration of contaminants. It is also envisioned that complex earth structures will be constructed to isolate the waste and minimize infiltration of water into the facility. The flow of water and gases through such facilities must also be a concern. A stochastic theory describing unsaturated flow and contamination transport in naturally heterogeneous soils has been enhanced by adopting a more realistic characterization of soil variability. The enhanced theory is used to predict field-scale effective properties and variances of tension and moisture content. Applications illustrate the important effects of small-scale heterogeneity on large-scale anisotropy and hysteresis and demonstrate the feasibility of simulating two-dimensional flow systems at time and space scales of interest in radioactive waste disposal investigations. Numerical algorithms for predicting field scale unsaturated flow and contaminant transport have been improved by requiring them to respect fundamental physical principles such as mass conservation. These algorithms are able to provide realistic simulations of systems with very dry initial conditions and high degrees of heterogeneity. Numerical simulation of the movement of water and air in unsaturated soils has demonstrated the importance of air pathways for contaminant transport. The stochastic flow and transport theory has been used to develop a systematic approach to performance assessment and site characterization. Hypothesis-testing techniques have been used to determine whether model predictions are consistent with observed data
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
Towards modeling intergranular stress corrosion cracks on grain size scales
International Nuclear Information System (INIS)
Simonovski, Igor; Cizelj, Leon
2012-01-01
Highlights: ► Simulating the onset and propagation of intergranular cracking. ► Model based on the as-measured geometry and crystallographic orientations. ► Feasibility, performance of the proposed computational approach demonstrated. - Abstract: Development of advanced models at the grain size scales has so far been mostly limited to simulated geometry structures such as for example 3D Voronoi tessellations. The difficulty came from a lack of non-destructive techniques for measuring the microstructures. In this work a novel grain-size scale approach for modelling intergranular stress corrosion cracking based on as-measured 3D grain structure of a 400 μm stainless steel wire is presented. Grain topologies and crystallographic orientations are obtained using a diffraction contrast tomography, reconstructed within a detailed finite element model and coupled with advanced constitutive models for grains and grain boundaries. The wire is composed of 362 grains and over 1600 grain boundaries. Grain boundary damage initialization and early development is then explored for a number of cases, ranging from isotropic elasticity up to crystal plasticity constitutive laws for the bulk grain material. In all cases the grain boundaries are modeled using the cohesive zone approach. The feasibility of the approach is explored.
Multi-scale modeling of the CD8 immune response
Energy Technology Data Exchange (ETDEWEB)
Barbarroux, Loic, E-mail: loic.barbarroux@doctorant.ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Michel, Philippe, E-mail: philippe.michel@ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Adimy, Mostafa, E-mail: mostafa.adimy@inria.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France); Crauste, Fabien, E-mail: crauste@math.univ-lyon1.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France)
2016-06-08
During the primary CD8 T-Cell immune response to an intracellular pathogen, CD8 T-Cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to fight the infection and memorize the corresponding antigen. Once the pathogen has been cleared from the organism, the only CD8 T-Cells left are antigen-specific memory cells, whose role is to respond more strongly and faster if they are presented with this very same antigen again. That is how vaccines work: a small quantity of a weakened pathogen is introduced into the organism to trigger the primary response, generating corresponding memory cells in the process and giving the organism a way to defend itself in case it encounters the same pathogen again. To investigate this process, we propose a non-linear, multi-scale mathematical model of the CD8 T-Cell immune response due to vaccination using a maturity-structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a delay differential equation system, which gives the speed of maturation for each cell. The population of cells is modeled by a maturity-structured equation whose speeds are given by the intracellular model. We focus here on building the model, as well as on its asymptotic study. Finally, we display numerical simulations showing that the model can reproduce the biological dynamics of the cell population for both the primary response and the secondary responses.
Site-scale groundwater flow modelling of Aberg
Energy Technology Data Exchange (ETDEWEB)
Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)
1998-12-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the
Site-scale groundwater flow modelling of Aberg
International Nuclear Information System (INIS)
Walker, D.; Gylling, B.
1998-12-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the
Large scale hydro-economic modelling for policy support
de Roo, Ad; Burek, Peter; Bouraoui, Faycal; Reynaud, Arnaud; Udias, Angel; Pistocchi, Alberto; Lanzanova, Denis; Trichakis, Ioannis; Beck, Hylke; Bernhard, Jeroen
2014-05-01
To support European Union water policy making and policy monitoring, a hydro-economic modelling environment has been developed to assess optimum combinations of water retention measures, water savings measures, and nutrient reduction measures for continental Europe. This modelling environment consists of linking the agricultural CAPRI model, the LUMP land use model, the LISFLOOD water quantity model, the EPIC water quality model, the LISQUAL combined water quantity, quality and hydro-economic model, and a multi-criteria optimisation routine. With this modelling environment, river basin scale simulations are carried out to assess the effects of water-retention measures, water-saving measures, and nutrient-reduction measures on several hydro-chemical indicators, such as the Water Exploitation Index (WEI), Nitrate and Phosphate concentrations in rivers, the 50-year return period river discharge as an indicator for flooding, and economic losses due to water scarcity for the agricultural sector, the manufacturing-industry sector, the energy-production sector and the domestic sector, as well as the economic loss due to flood damage. This modelling environment is currently being extended with a groundwater model to evaluate the effects of measures on the average groundwater table and the available resources. Water allocation rules are also addressed, with environmental flow included as a minimum requirement for the environment. The economic functions are currently being updated as well. Recent developments and examples will be shown and discussed, as well as open challenges.
Modeling and simulation in tribology across scales: An overview
DEFF Research Database (Denmark)
Vakis, A.I.; Yastrebov, V.A.; Scheibert, J.
2018-01-01
This review summarizes recent advances in the area of tribology based on the outcome of a Lorentz Center workshop surveying various physical, chemical and mechanical phenomena across scales. Among the main themes discussed were those of rough surface representations, the breakdown of continuum...... nonlinear effects of plasticity, adhesion, friction, wear, lubrication and surface chemistry in tribological models. For each topic, we propose some research directions....
Phenomenological aspects of no-scale inflation models
Energy Technology Data Exchange (ETDEWEB)
Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King' s College London, WC2R 2LS London (United Kingdom); Garcia, Marcos A.G.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V., E-mail: john.ellis@cern.ch, E-mail: garciagarcia@physics.umn.edu, E-mail: dimitri@physics.tamu.edu, E-mail: olive@physics.umn.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, 77843 Texas (United States)
2015-10-01
We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n{sub s} and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m{sub 0} = B{sub 0} = A{sub 0} = 0, of the CMSSM type with universal A{sub 0} and m{sub 0} ≠ 0 at a high scale, and of the mSUGRA type with A{sub 0} = B{sub 0} + m{sub 0} boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m{sub 1/2} ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.
Perturbation theory instead of large scale shell model calculations
International Nuclear Information System (INIS)
Feldmeier, H.; Mankos, P.
1977-01-01
Results of large scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3)-basis is used as a starting point. The results indicate that a perturbation-theory treatment in an SU(3)-basis including 2ℏω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.) [de]
Next-generation genome-scale models for metabolic engineering
DEFF Research Database (Denmark)
King, Zachary A.; Lloyd, Colton J.; Feist, Adam M.
2015-01-01
Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict...... examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering....
Scaling theory of depinning in the Sneppen model
International Nuclear Information System (INIS)
Maslov, S.; Paczuski, M.
1994-01-01
We develop a scaling theory for the critical depinning behavior of the Sneppen interface model [Phys. Rev. Lett. 69, 3539 (1992)]. This theory is based on a ''gap'' equation that describes the self-organization process to a critical state of the depinning transition. All of the critical exponents can be expressed in terms of two independent exponents, ν_∥(d) and ν_⊥(d), characterizing the divergence of the parallel and perpendicular correlation lengths as the interface approaches its dynamical attractor.
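The self-organization process that the "gap" equation describes can be illustrated with a few lines of extremal dynamics: repeatedly renew the smallest random pinning force (and its neighbours) and track the running gap, i.e. the largest minimum seen so far, which creeps up toward a critical threshold. This is a simplified sketch of the mechanism only, not the full Sneppen interface model.

```python
# Extremal-dynamics sketch of self-organization toward a critical
# threshold: the gap G(t) (largest selected minimum so far) rises and
# saturates near the critical value. Sizes and step counts are arbitrary.
import random

random.seed(3)
N = 200
forces = [random.random() for _ in range(N)]  # random pinning forces
gap = 0.0
for step in range(20000):
    i = min(range(N), key=forces.__getitem__)  # site with smallest force
    gap = max(gap, forces[i])                  # running gap
    # renew the minimum and its neighbours (periodic wrap-around;
    # forces[-1] wraps naturally in Python)
    for j in (i - 1, i, (i + 1) % N):
        forces[j] = random.random()
print("gap after self-organization:", round(gap, 3))
```

After the transient, new minima are almost always found just below the saturated gap value, which is the qualitative content of the gap equation.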
Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing
Nance, Donald; Liever, Peter; Nielsen, Tanner
2015-01-01
The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.
Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing
Nance, Donald K.; Liever, Peter A.
2015-01-01
The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.
Atmospheric dispersion modelling over complex terrain at small scale
Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.
2014-03-01
A previous study concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and the important surrounding topography at meso-scale (1:9000) revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion and the surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of air quality at the populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be observed.
Disinformative data in large-scale hydrological modelling
Directory of Open Access Journals (Sweden)
A. Kauffeldt
2013-07-01
Full Text Available Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
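A minimal sketch of the kind of pre-modelling data screening advocated above: flag basins whose discharge exceeds precipitation (runoff coefficient > 1) or whose apparent losses exceed the potential-evaporation limit. The basin values are invented for illustration only.

```python
# Water-balance screening of basins before large-scale modelling.
# All values below are invented illustrative numbers.
basins = {
    # name: (precipitation, potential evaporation, discharge), all in mm/yr
    "A": (800.0, 600.0, 300.0),
    "B": (500.0, 700.0, 650.0),  # more discharge than rain: undercatch?
    "C": (900.0, 400.0, 100.0),  # losses of 800 exceed the PET limit of 400
}

def screen(p, pet, q):
    """Return the list of water-balance inconsistencies for one basin."""
    flags = []
    if q > p:
        flags.append("runoff coefficient > 1")
    if p - q > pet:
        flags.append("losses exceed potential-evaporation limit")
    return flags

for name, (p, pet, q) in basins.items():
    issues = screen(p, pet, q)
    print(name, "->", ", ".join(issues) if issues else "consistent")
```

Basins flagged this way would be inspected (or excluded) before model calibration, rather than letting the disinformative records bias the fit.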
Two-scale modelling for hydro-mechanical damage
International Nuclear Information System (INIS)
Frey, J.; Chambon, R.; Dascalu, C.
2010-01-01
Document available in extended abstract form only. Excavation works for underground storage create a damage zone in the nearby rock and affect its hydraulic properties. This degradation, already observed in laboratory tests, can create a preferential path for fluids. The micro-fracture phenomena, which occur at a smaller scale and affect the rock permeability, must be fully understood to minimize the transfer process. Many methods can be used in order to take into account the microstructure of heterogeneous materials. Among them, a method has been developed recently: instead of using a constitutive equation obtained by phenomenological considerations or by some homogenization technique, the representative elementary volume (R.E.V.) is modelled as a structure, and the links between a prescribed kinematics and the corresponding dual forces are deduced numerically. This yields the so-called Finite Element squared method (FE2). From a numerical point of view, a finite element model is used at the macroscopic level and, for each Gauss point, computations on the microstructure give the usual results of a constitutive law. This numerical approach is now classical for properly modelling materials such as composites; the efficiency of such a numerical homogenization process has been shown, and it allows numerical modelling of deformation processes associated with various micro-structural changes. The aim of this work is to describe, through such a method, damage of the rock with a two-scale hydro-mechanical model. The rock damage at the macroscopic scale is directly linked with an analysis on the microstructure. At the macroscopic scale, a two-phase problem is studied: a solid skeleton is filled by a filtrating fluid. It is necessary to enforce two balance equations and two mass conservation equations. A classical way to deal with such a problem is to work with the balance equation of the whole mixture, and the mass fluid conservation written in a weak form, the mass
Model Predictive Control for a Small Scale Unmanned Helicopter
Directory of Open Access Journals (Sweden)
Jianfu Du
2008-11-01
Full Text Available Kinematical and dynamical equations of a small-scale unmanned helicopter are presented in the paper. Based on these equations, a model predictive control (MPC) method is proposed for controlling the helicopter. This novel method allows directly accounting for the existing time delays, which are used to model the dynamics of the actuators and the aerodynamics of the main rotor. The limits of the actuators are also taken into consideration during the controller design. The proposed control algorithm was verified in real flight experiments, where good performance was shown in position control mode.
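The receding-horizon idea behind MPC can be sketched for a generic linear system. The toy double integrator below is an illustrative assumption, not the paper's helicopter or delay model, and the constraint handling described in the abstract is omitted: at each step a finite-horizon quadratic cost is minimized and only the first input is applied.

```python
import numpy as np

dt, H, rho = 0.1, 20, 1e-2
A = np.array([[1.0, dt], [0.0, 1.0]])      # double-integrator state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])
n, m = 2, 1

# Stacked prediction matrices over the horizon: X = Phi @ x0 + Gamma @ U.
Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(H)])
Gamma = np.zeros((n * H, m * H))
for i in range(H):
    for j in range(i + 1):
        Gamma[n*i:n*(i+1), m*j:m*(j+1)] = np.linalg.matrix_power(A, i - j) @ B

def mpc_step(x, x_ref):
    """Solve the unconstrained finite-horizon problem, return the first input only."""
    Xref = np.tile(x_ref, H)
    # min_U ||Gamma U - (Xref - Phi x)||^2 + rho ||U||^2  (ridge least squares)
    Hmat = Gamma.T @ Gamma + rho * np.eye(m * H)
    U = np.linalg.solve(Hmat, Gamma.T @ (Xref - Phi @ x))
    return U[:m]

x = np.zeros(2)
for _ in range(100):                        # closed loop: re-solve, apply first input
    u = mpc_step(x, np.array([1.0, 0.0]))   # drive position to 1, velocity to 0
    x = A @ x + B @ u

print(x)  # state after 10 s of receding-horizon control
```

With actuator limits, the least-squares solve would be replaced by a constrained quadratic program; the receding-horizon structure stays the same.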
Multi-scale modeling of ductile failure in metallic alloys
International Nuclear Information System (INIS)
Pardoen, Th.; Scheyvaerts, F.; Simar, A.; Tekoglu, C.; Onck, P.R.
2010-01-01
Micro-mechanical models for ductile failure have been developed in the seventies and eighties essentially to address cracking in structural applications and complement the fracture mechanics approach. Later, this approach has become attractive for physical metallurgists interested by the prediction of failure during forming operations and as a guide for the design of more ductile and/or high-toughness microstructures. Nowadays, a realistic treatment of damage evolution in complex metallic microstructures is becoming feasible when sufficiently sophisticated constitutive laws are used within the context of a multilevel modelling strategy. The current understanding and the state of the art models for the nucleation, growth and coalescence of voids are reviewed with a focus on the underlying physics. Considerations are made about the introduction of the different length scales associated with the microstructure and damage process. Two applications of the methodology are then described to illustrate the potential of the current models. The first application concerns the competition between intergranular and transgranular ductile fracture in aluminum alloys involving soft precipitate free zones along the grain boundaries. The second application concerns the modeling of ductile failure in friction stir welded joints, a problem which also involves soft and hard zones, albeit at a larger scale. (authors)
Finite element modeling of multilayered structures of fish scales.
Chandler, Mei Qiang; Allison, Paul G; Rodriguez, Rogie I; Moser, Robert D; Kennedy, Alan J
2014-12-01
The interlinked fish scales of Atractosteus spatula (alligator gar) and Polypterus senegalus (gray and albino bichir) are effective multilayered armor systems for protecting fish from threats such as aggressive conspecific interactions or predation. Both types of fish scales have multi-layered structures with a harder and stiffer outer layer, and softer and more compliant inner layers. However, there are differences in relative layer thickness, property mismatch between layers, the property gradations and nanostructures in each layer. The fracture paths and patterns of both scales under microindentation loads were different. In this work, finite element models of fish scales of A. spatula and P. senegalus were built to investigate the mechanics of their multi-layered structures under penetration loads. The models simulate a rigid microindenter penetrating the fish scales quasi-statically to understand the observed experimental results. Study results indicate that the different fracture patterns and crack paths observed in the experiments were related to the different stress fields caused by the differences in layer thickness, and spatial distribution of the elastic and plastic properties in the layers, and the differences in interface properties. The parametric studies and experimental results suggest that smaller fish such as P. senegalus may have adopted a thinner outer layer for light-weighting and improved mobility, and meanwhile adopted higher strength and higher modulus at the outer layer, and stronger interface properties to prevent ring cracking and interface cracking, and larger fish such as A. spatula and Arapaima gigas have lower strength and lower modulus at the outer layers and weaker interface properties, but have adopted thicker outer layers to provide adequate protection against ring cracking and interface cracking, possibly because weight is less of a concern relative to the smaller fish such as P. senegalus. Published by Elsevier Ltd.
Scaling behavior of an airplane-boarding model
Brics, Martins; Kaupužs, Jevgenijs; Mahnke, Reinhard
2013-04-01
An airplane-boarding model, introduced earlier by Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)], is studied with the aim of determining precisely its asymptotic power-law scaling behavior for a large number of passengers N. Based on Monte Carlo simulation data for very large system sizes up to N=2^16=65536, we have analyzed numerically the scaling behavior of the mean boarding time and other related quantities. In analogy with critical phenomena, we have used appropriate scaling Ansätze, which include the leading term as some power of N (e.g., ∝N^α for the mean boarding time), as well as power-law corrections to scaling. Our results clearly show that α=1/2 holds with a very high numerical accuracy (α=0.5001±0.0001). This value deviates essentially from α≃0.69, obtained earlier by Frette and Hemmer from data within the range 2≤N≤16. Our results confirm the convergence of the effective exponent α_eff(N) to 1/2 at large N, as observed by Bernstein. Our analysis explains this effect: the effective exponent α_eff(N) varies from values of about 0.7 for small system sizes to the true asymptotic value 1/2 at N→∞, almost linearly in N^(-1/3) for large N. This means that the variation is caused by corrections to scaling, the leading correction-to-scaling exponent being θ≈1/3. We have also estimated other exponents: ν=1/2 for the mean number of passengers taking seats simultaneously in one time step, β=1 for the second moment of t_b, and γ≈1/3 for its variance.
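A stripped-down caricature of the boarding dynamics shows where the N^(1/2) scaling can come from. If aisle-walking time and passenger length are ignored, and in each round every passenger sits whose seat row is smaller than those of all still-standing passengers ahead, then the number of rounds equals the longest-increasing-subsequence length of the queue permutation, whose mean grows like 2·N^(1/2) for random queues. This simplified rule is an assumption for illustration, not the Frette-Hemmer model itself, which has larger finite-size corrections:

```python
import random
from bisect import bisect_left

def boarding_rounds(seats):
    """Rounds needed when, each round, every passenger sits whose seat row is
    smaller than those of all still-standing passengers ahead (prefix minima).
    Repeated prefix-minima removal partitions the queue into layers, and the
    layer count equals the longest-increasing-subsequence length, computed
    here in O(N log N) by patience sorting."""
    piles = []
    for s in seats:
        i = bisect_left(piles, s)
        if i == len(piles):
            piles.append(s)
        else:
            piles[i] = s
    return len(piles)

random.seed(1)
N, trials = 1024, 50
mean_t = sum(boarding_rounds(random.sample(range(N), N)) for _ in range(trials)) / trials
print(mean_t / N**0.5)   # tends to 2 as N grows, i.e. t_b ~ N^(1/2)
```

For N=1024 the ratio still sits noticeably below 2, consistent with the slow N^(-1/3)-type corrections the abstract describes.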
Modeling and Simulation of a lab-scale Fluidised Bed
Directory of Open Access Journals (Sweden)
Britt Halvorsen
2002-04-01
Full Text Available The flow behaviour of a lab-scale fluidised bed with a central jet has been simulated. The study has been performed with an in-house computational fluid dynamics (CFD) model named FLOTRACS-MP-3D. The CFD model is based on a multi-fluid Eulerian description of the phases, where the kinetic theory of granular flow forms the basis for turbulence modelling of the solid phases. A two-dimensional Cartesian co-ordinate system is used to describe the geometry. This paper discusses whether bubble formation and bed height are influenced by the coefficient of restitution, the drag model, and the number of solid phases. Measurements of the same fluidised bed with a digital video camera were performed. Computational results are compared with the experimental results, and the discrepancies are discussed.
Towards a 'standard model' of large scale structure formation
International Nuclear Information System (INIS)
Shafi, Q.
1994-01-01
We explore constraints on inflationary models employing data on large-scale structure, mainly from COBE temperature anisotropies and IRAS-selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore, the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favor cold plus hot dark matter models with n equal or close to unity and Ω_HDM ≈ 0.2-0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs
Censored rainfall modelling for estimation of fine-scale extremes
Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro
2018-01-01
Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism, and extrapolation can be highly uncertain. In this study, we improve the physical basis of short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have tended to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
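For orientation, a Bartlett-Lewis rectangular pulse process can be sketched as a Poisson cluster process: storms arrive at rate λ, each storm spawns cells during an exponentially distributed activity window, and each cell deposits a rectangular pulse with exponential duration and intensity. The sketch below is a minimal single-site version with illustrative parameter values, not the calibrated censored model of the study:

```python
import random

def simulate_blrp(T_hours, lam=0.02, beta=0.3, gamma=0.1, eta=2.0, mu_x=1.5, rng=None):
    """Toy single-site Bartlett-Lewis rectangular pulse simulator.
    lam   : storm arrival rate (1/h)
    beta  : cell birth rate within an active storm (1/h)
    gamma : storm activity termination rate (1/h)
    eta   : cell death rate (1/h), i.e. mean pulse duration 1/eta
    mu_x  : mean pulse intensity (mm/h)
    Returns hourly rainfall depths (mm). All parameter values are illustrative."""
    rng = rng or random.Random(0)
    pulses = []
    t = rng.expovariate(lam)
    while t < T_hours:
        activity = rng.expovariate(gamma)          # storm stays active for Exp(gamma)
        c = 0.0
        while True:                                # Poisson cell births in the window
            c += rng.expovariate(beta)
            if c > activity:
                break
            start = t + c
            pulses.append((start, start + rng.expovariate(eta), rng.expovariate(1.0 / mu_x)))
        t += rng.expovariate(lam)                  # next storm origin
    depths = [0.0] * T_hours
    for start, end, inten in pulses:               # aggregate pulse overlap per hour
        for h in range(int(start), min(int(end) + 1, T_hours)):
            lo, hi = max(start, h), min(end, h + 1)
            if hi > lo:
                depths[h] += inten * (hi - lo)
    return depths

depths = simulate_blrp(24 * 365, rng=random.Random(7))
print(sum(depths) / len(depths))   # mean hourly rainfall (mm)
```

A censored calibration in the spirit of the study would fit such a model's statistics only to the heavy portion of the observed record.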
Effects of input uncertainty on cross-scale crop modeling
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy, as this, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming errors in measurements of air temperature, radiation and precipitation of ±0.2°C, ±2% and ±3% respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7% in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields at the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields at the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations, and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input
Probabilistic flood damage modelling at the meso-scale
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2014-05-01
Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice in flood risk assessment. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand by comparing the results with official damage data provided by the Saxon Relief Bank (SAB). The results show that the uncertainties of damage estimation remain high. Thus, a significant advantage of the probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
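The bagging idea behind such models, an ensemble of trees fit to bootstrap resamples whose spread of predictions yields a damage distribution rather than a point estimate, can be illustrated with regression stumps on synthetic stage-damage data. Everything below (the data, the stump learner, the sizes) is an illustrative assumption, not the published BT-FLEMO model:

```python
import random

def fit_stump(xs, ys):
    """Best single-threshold regression stump (minimises the split SSE)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best = (float("inf"), 0.0, 0.0, 0.0)
    for k in range(1, len(xs)):
        if xs[order[k - 1]] == xs[order[k]]:
            continue                      # cannot split between equal x values
        t = 0.5 * (xs[order[k - 1]] + xs[order[k]])
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if sse < best[0]:
            best = (sse, t, ml, mr)
    return best[1:]

def bagged_predict(x, stumps):
    """One prediction per bootstrap stump -> an empirical damage distribution."""
    return sorted((ml if x < t else mr) for t, ml, mr in stumps)

rng = random.Random(0)
n = 150
xs = [rng.uniform(0.0, 3.0) for _ in range(n)]      # water depth (m), synthetic
ys = [10.0 * x + rng.gauss(0.0, 5.0) for x in xs]   # damage (arbitrary units)
stumps = []
for _ in range(50):                                 # bagging: fit on bootstrap resamples
    idx = [rng.randrange(n) for _ in range(n)]
    stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
dist = bagged_predict(2.5, stumps)                  # a distribution, not a point value
print(dist[len(dist) // 2])                         # median predicted damage
```

Quantiles of `dist` play the role of the uncertainty bounds that the probabilistic model reports per municipality; a full implementation would grow deep trees on many predictors instead of single-feature stumps.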
A hybrid plume model for local-scale dispersion
Energy Technology Data Exchange (ETDEWEB)
Nikmo, J.; Tuovinen, J.P.; Kukkonen, J.; Valkama, I.
1997-12-31
The report describes the contribution of the Finnish Meteorological Institute to the project 'Dispersion from Strongly Buoyant Sources', under the 'Environment' programme of the European Union. The project addresses the atmospheric dispersion of gases and particles emitted from typical fires in warehouses and chemical stores. In the study only the 'passive plume' regime, in which the influence of plume buoyancy is no longer important, is addressed. The mathematical model developed and its numerical testing are discussed. The model is based on atmospheric boundary-layer scaling theory. In the vicinity of the source, Gaussian equations are used in both the horizontal and vertical directions. After a specified transition distance, gradient transfer theory is applied in the vertical direction, while the horizontal dispersion is still assumed to be Gaussian. The dispersion parameters and eddy diffusivity are modelled in a form which facilitates the use of a meteorological pre-processor. A new model for the vertical eddy diffusivity (K_z), which is a continuous function of height across the various atmospheric scaling regions, is also presented. The model includes a treatment of the dry deposition of gases and particulate matter, but wet deposition has been neglected. A numerical solver for the atmospheric diffusion equation (ADE) has been developed. The accuracy of the numerical model was analysed by comparing the model predictions with two analytical solutions of the ADE. The numerical deviations of the model predictions from these analytic solutions were less than two per cent for the computational regime. The report gives numerical results for the vertical profiles of the eddy diffusivity and the dispersion parameters, and shows spatial concentration distributions in various atmospheric conditions. 39 refs.
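In the near-source regime the report uses Gaussian equations in both directions; a minimal ground-reflected Gaussian plume looks like the sketch below. The power-law dispersion parameters and all numeric values are illustrative assumptions, not the meteorologically pre-processed parameters of the report:

```python
import math

def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=20.0, a_y=0.08, a_z=0.06, b=0.9):
    """Concentration (g/m^3) at (x, y, z) from a continuous point source of
    strength Q (g/s) at effective height H (m), wind speed u (m/s), with a
    perfectly reflecting ground.  sigma_y = a_y*x**b and sigma_z = a_z*x**b
    are assumed power-law dispersion parameters (illustrative only)."""
    sy, sz = a_y * x**b, a_z * x**b
    return (Q / (2 * math.pi * u * sy * sz)
            * math.exp(-y**2 / (2 * sy**2))
            * (math.exp(-(z - H)**2 / (2 * sz**2))      # direct plume
               + math.exp(-(z + H)**2 / (2 * sz**2))))  # ground-reflected image

# centreline ground-level concentration first rises, then decays downwind
c = [gaussian_plume(x, 0.0, 0.0) for x in (100.0, 500.0, 1000.0, 5000.0)]
print(c)
```

The hybrid model of the report replaces the vertical Gaussian term by a gradient-transfer (K_z) solution beyond the transition distance, while keeping the horizontal term Gaussian.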
Modelling soil erosion at European scale: towards harmonization and reproducibility
Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.
2015-02-01
Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to a decline in organic matter and nutrient contents, breakdown of soil structure, and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical, and research is thus needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite these efforts, the predictive value of existing models is still limited, especially at regional and continental scale, because a systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is proposed here. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented, merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor to better account for soil stoniness. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
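RUSLE itself is multiplicative, which makes a minimal sketch straightforward. The exponential rock-fragment-cover correction below is an assumed simple form standing in for the paper's stoniness factor, and the input values are illustrative:

```python
import math

def rusle(R, K, LS, C, P, stone_cover=0.0):
    """Annual soil loss A (t ha^-1 yr^-1) from the multiplicative RUSLE form
    A = R * K * LS * C * P, times an exponential rock-fragment-cover
    correction exp(-0.04 * cover%) -- an assumed simple stoniness factor,
    not the calibrated one of the paper.
    R  : rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
    K  : soil erodibility (t ha h ha^-1 MJ^-1 mm^-1)
    LS : slope length/steepness factor;  C, P : cover and practice factors."""
    return R * K * LS * C * P * math.exp(-0.04 * stone_cover)

a_bare = rusle(R=700.0, K=0.03, LS=1.2, C=0.2, P=1.0)
a_stony = rusle(R=700.0, K=0.03, LS=1.2, C=0.2, P=1.0, stone_cover=10.0)
print(a_bare, a_stony)   # surface stone cover reduces the predicted loss
```

In the paper's architecture, R itself comes from an ensemble of empirical rainfall-erosivity equations rather than a single value.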
Spatial modeling of agricultural land use change at global scale
Meiyappan, P.; Dalton, M.; O'Neill, B. C.; Jain, A. K.
2014-11-01
Long-term modeling of agricultural land use is central in global scale assessments of climate change, food security, biodiversity, and climate adaptation and mitigation policies. We present a global-scale dynamic land use allocation model and show that it can reproduce the broad spatial features of the past 100 years of evolution of cropland and pastureland patterns. The modeling approach integrates economic theory, observed land use history, and data on both socioeconomic and biophysical determinants of land use change, and estimates relationships using long-term historical data, thereby making it suitable for long-term projections. The underlying economic motivation is maximization of expected profits by hypothesized landowners within each grid cell. The model predicts fractional land use for cropland and pastureland within each grid cell based on socioeconomic and biophysical driving factors that change with time. The model explicitly incorporates the following key features: (1) land use competition, (2) spatial heterogeneity in the nature of driving factors across geographic regions, (3) spatial heterogeneity in the relative importance of driving factors and previous land use patterns in determining land use allocation, and (4) spatial and temporal autocorrelation in land use patterns. We show that land use allocation approaches based solely on previous land use history (but disregarding the impact of driving factors), or those accounting for both land use history and driving factors by mechanistically fitting models for the spatial processes of land use change do not reproduce well long-term historical land use patterns. With an example application to the terrestrial carbon cycle, we show that such inaccuracies in land use allocation can translate into significant implications for global environmental assessments. The modeling approach and its evaluation provide an example that can be useful to the land use, Integrated Assessment, and the Earth system modeling
Uncertainty Quantification in Scale-Dependent Models of Flow in Porous Media: SCALE-DEPENDENT UQ
Energy Technology Data Exchange (ETDEWEB)
Tartakovsky, A. M. [Computational Mathematics Group, Pacific Northwest National Laboratory, Richland WA USA; Panzeri, M. [Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano, Milano Italy; Tartakovsky, G. D. [Hydrology Group, Pacific Northwest National Laboratory, Richland WA USA; Guadagnini, A. [Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano, Milano Italy
2017-11-01
Equations governing flow and transport in heterogeneous porous media are scale-dependent. We demonstrate that it is possible to identify a support scale η*, such that the typically employed approximate formulations of moment equations (MEs) yield accurate (statistical) moments of a target environmental state variable. Under these circumstances, the ME approach can be used as an alternative to the Monte Carlo (MC) method for uncertainty quantification in diverse fields of Earth and environmental sciences. MEs are directly satisfied by the leading moments of the quantities of interest and are defined on the same support scale as the governing stochastic partial differential equations (PDEs). Computable approximations of the otherwise exact MEs can be obtained through perturbation expansion of the moments of the state variables in orders of the standard deviation of the random model parameters. As such, their convergence is guaranteed only for standard deviations smaller than one. We demonstrate our approach in the context of steady-state groundwater flow in a porous medium with a spatially random hydraulic conductivity.
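The convergence caveat can be illustrated with a toy moment. For a lognormal parameter K = exp(Y), Y ~ N(μ, σ²), the exact moment E[1/K] = exp(−μ + σ²/2) is well approximated by the series truncated at O(σ²) only while σ < 1. Here 1/K is a stand-in for a flow state variable, not the paper's groundwater solution:

```python
import math
import random

rng = random.Random(42)
mu, n = 0.0, 100000
results = {}
for sigma in (0.3, 0.7, 1.5):
    # Monte Carlo estimate of E[1/K] for lognormal K = exp(Y), Y ~ N(mu, sigma^2)
    mc = sum(math.exp(-rng.gauss(mu, sigma)) for _ in range(n)) / n
    exact = math.exp(-mu + 0.5 * sigma**2)            # exact lognormal moment
    truncated = math.exp(-mu) * (1 + 0.5 * sigma**2)  # perturbation series to O(sigma^2)
    results[sigma] = (mc, exact, truncated)
    print(sigma, round(mc, 3), round(exact, 3), round(truncated, 3))
```

For σ = 0.3 the truncated expansion is essentially exact; for σ = 1.5 it underestimates the moment by roughly a third, which is the regime where the ME perturbation approach loses its guarantee.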
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets and increasing analysis times are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computing clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources, where computations are parallelized and run on the Amazon Elastic Compute Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
Tacit knowledge in academia: a proposed model and measurement scale.
Leonard, Nancy; Insch, Gary S
2005-11-01
The authors propose a multidimensional model of tacit knowledge and develop a measure of tacit knowledge in academia. They discuss the theory and extant literature on tacit knowledge and propose a 6-factor model. Experiment 1 is a replication of a recent study of academic tacit knowledge using the scale developed and administered at an Israeli university (A. Somech & R. Bogler, 1999). The results of the replication differed from those found in the original study. For Experiment 2, the authors developed a domain-specific measure of academic tacit knowledge, the Academic Tacit Knowledge Scale (ATKS), and used this measure to explore the multidimensionality of tacit knowledge proposed in the model. The results of an exploratory factor analysis (n=142) followed by a confirmatory factor analysis (n=286) are reported. The sample for both experiments was 428 undergraduate students enrolled at a large public university in the eastern United States. Results indicated that a 5-factor model of academic tacit knowledge provided a strong fit for the data.
Multi-scale modeling of carbon capture systems
Energy Technology Data Exchange (ETDEWEB)
Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-03
The development and scale-up of cost-effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset, consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous-silica-supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric analysis (TGA) data.
Scaling of coercivity in a 3d random anisotropy model
Energy Technology Data Exchange (ETDEWEB)
Proctor, T.C., E-mail: proctortc@gmail.com; Chudnovsky, E.M., E-mail: EUGENE.CHUDNOVSKY@lehman.cuny.edu; Garanin, D.A.
2015-06-15
The random-anisotropy Heisenberg model is numerically studied on lattices containing over ten million spins. The study is focused on hysteresis and metastability due to topological defects, and is relevant to magnetic properties of amorphous and sintered magnets. We are interested in the limit when ferromagnetic correlations extend beyond the size of the grain inside which the magnetic anisotropy axes are correlated. In that limit the coercive field computed numerically roughly scales as the fourth power of the random anisotropy strength and as the sixth power of the grain size. Theoretical arguments are presented that provide an explanation of numerical results. Our findings should be helpful for designing amorphous and nanosintered materials with desired magnetic properties. - Highlights: • We study the random-anisotropy model on lattices containing up to ten million spins. • Irreversible behavior due to topological defects (hedgehogs) is elucidated. • Hysteresis loop area scales as the fourth power of the random anisotropy strength. • In nanosintered magnets the coercivity scales as the sixth power of the grain size.
A model for AGN variability on multiple time-scales
Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.
2018-05-01
We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
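The PDF+PSD recipe can be sketched in two steps: generate a Gaussian time series with the target power spectrum (the Timmer & König 1995 method), then rank-remap it onto the target Eddington-ratio PDF. The one-shot remap below is a simplification of iterative schemes from the literature (e.g. Emmanoulopoulos et al. 2013), and the lognormal ERDF with these parameters is an illustrative assumption, not the paper's distribution:

```python
import numpy as np

def simulate_lightcurve(n, dt=1.0, slope=-2.0, rng=None):
    """Timmer & Koenig (1995): draw Fourier coefficients with variance set by
    the target power-law PSD ~ f**slope, then invert to a Gaussian series."""
    rng = np.random.default_rng(0) if rng is None else rng
    f = np.fft.rfftfreq(n, dt)[1:]                  # positive frequencies
    amp = f ** (slope / 2.0)                        # sqrt of the PSD
    spec = rng.normal(size=f.size) * amp + 1j * rng.normal(size=f.size) * amp
    return np.fft.irfft(np.concatenate(([0.0 + 0.0j], spec)), n)

def match_pdf(x, target_samples):
    """Rank-order remap: impose the target amplitude distribution exactly while
    approximately preserving the temporal (PSD) structure -- a one-shot
    simplification of iterative PSD+PDF matching schemes."""
    out = np.empty_like(x)
    out[np.argsort(x)] = np.sort(target_samples)
    return out

rng = np.random.default_rng(1)
gauss_lc = simulate_lightcurve(4096, rng=rng)
# assumed ERDF: lognormal Eddington-ratio distribution (illustrative parameters)
ledd = match_pdf(gauss_lc, rng.lognormal(mean=-2.0, sigma=0.8, size=4096))
print(ledd.min(), np.median(ledd))
```

Structure functions computed from many such realisations are what the framework compares against the compiled variability measurements.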
European Continental Scale Hydrological Model, Limitations and Challenges
Rouholahnejad, E.; Abbaspour, K.
2014-12-01
The pressures on water resources due to increasing levels of societal demand, increasing conflicts of interest, and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration, where water bodies are viewed in the context of the whole river system and managed as units within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at continental scale and the use of globally available data, streamgauge data quality and its misleading impact on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome high computation times. We used the Soil and Water Assessment Tool (SWAT) as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals for the period 1970-2006. The use of a large-scale, high-resolution water resources model enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation and provides an overall picture of the temporal and spatial distribution of water resources across the continent. The calibrated model and results provide information support to the European Water
Islands Climatology at Local Scale. Downscaling with CIELO model
Azevedo, Eduardo; Reis, Francisco; Tomé, Ricardo; Rodrigues, Conceição
2016-04-01
Islands with horizontal scales of the order of tens of km, as is the case for the Atlantic islands of Macaronesia, are subscale orographic features for global climate models (GCMs), since the horizontal scales of these models are too coarse to give a detailed representation of the islands' topography. Even regional climate models (RCMs) reveal limitations when forced to reproduce the climate of small islands, mainly because of the way they flatten and lower the islands' elevation, reducing the capacity of the model to reproduce important local mechanisms that lead to very deep local climate differentiation. Important local thermodynamic mechanisms, like the Foehn effect or the influence of topography on the radiation balance, have a prominent role in spatial climatic differentiation. Advective transport of air, and the consequent adiabatic cooling induced by orography, transforms the state parameters of the air and shapes the spatial configuration of the fields of pressure, temperature and humidity. The same mechanism is at the origin of the orographic cloud cover that, besides its direct role as a water source through the reinforcement of precipitation, acts as a filter to direct solar radiation and as a source of long-wave radiation affecting the local energy balance. Also, the saturation (or near-saturation) conditions that these clouds provide constitute a barrier to water vapour diffusion in the mechanisms of evapotranspiration. Topographic factors like slope, aspect and orographic masking also have significant importance in the local energy balance. Therefore, the simulation of the local-scale climate (past, present and future) in these archipelagos requires the use of downscaling techniques to adjust locally the outputs obtained at upper scales. This presentation will discuss and analyse the evolution of the CIELO model (acronym for Clima Insular à Escala LOcal), a statistical/dynamical technique developed at the University of the Azores
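The Foehn mechanism mentioned above reduces to a one-parcel lapse-rate calculation: dry-adiabatic cooling up to the lifting condensation level, moist-adiabatic cooling above it (with the condensate precipitating out), and dry-adiabatic warming on descent. The constant moist lapse rate below is a simplifying assumption for illustration, not part of the CIELO model:

```python
GAMMA_DRY, GAMMA_MOIST = 9.8, 5.5   # K/km: dry and (assumed constant) moist lapse rates

def foehn_temperature(T0, z_lcl, z_ridge, z_lee=0.0):
    """Lee-side temperature after forced ascent over a ridge: dry-adiabatic
    cooling up to the lifting condensation level z_lcl, moist-adiabatic above
    it, then dry-adiabatic descent to z_lee.  Heights in km, temperatures in
    deg C.  If z_lcl >= z_ridge no condensation occurs and no warming results."""
    T_ridge = (T0
               - GAMMA_DRY * min(z_lcl, z_ridge)            # dry ascent below cloud base
               - GAMMA_MOIST * max(0.0, z_ridge - z_lcl))   # moist ascent in cloud
    return T_ridge + GAMMA_DRY * (z_ridge - z_lee)          # dry descent to the lee side

print(foehn_temperature(20.0, z_lcl=0.8, z_ridge=2.0))  # warmer than 20 on the lee side
```

The lee-side warming equals (Γ_dry − Γ_moist) times the saturated ascent depth, which is why the effect vanishes when the LCL lies above the ridge.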
From micro-scale 3D simulations to macro-scale model of periodic porous media
Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca
2015-04-01
In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by introducing the Stokes velocity (to account for gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite volume method for the macro-scale model. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are processed to compute the mean concentration (volume and area averages) and the fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, making a distinction between those terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, trying to describe them with known micro-scale quantities. Traditionally, many simplifications are introduced to study colloidal transport, such as ultra-simplified geometries that usually account for a single collector. Gradual removal of such hypotheses leads to a
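The module-averaging workflow this abstract describes can be sketched in one dimension: evolve an advection-diffusion equation with a finite-volume scheme, then compute the volume-averaged concentration per module. This is a hedged toy with illustrative coefficients, not the paper's pore-scale geometry.

```python
import numpy as np

# Hedged 1D toy: explicit upwind finite-volume advection-diffusion in a
# closed column, then mean concentration per "module" of 10 cells.
def step(c, u, D, dx, dt):
    adv = u * c[:-1]                     # upwind advective flux (u > 0 assumed)
    dif = -D * (c[1:] - c[:-1]) / dx     # central diffusive flux
    flux = np.concatenate(([0.0], adv + dif, [0.0]))   # zero-flux ends
    return c - dt / dx * (flux[1:] - flux[:-1])

c = np.zeros(100)
c[:10] = 1.0                             # initial solute slug
for _ in range(200):
    c = step(c, u=0.01, D=1e-4, dx=0.01, dt=0.05)

modules = c.reshape(10, 10).mean(axis=1)   # volume-averaged concentration per module
print(round(float(c.sum()), 6))            # 10.0: total mass is conserved
```

Because the update is written in flux-difference form, mass conservation holds exactly, which is what makes the module averages usable as control-volume quantities.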
Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...
On Two-Scale Modelling of Heat and Mass Transfer
International Nuclear Information System (INIS)
Vala, J.; Stastnik, S.
2008-01-01
Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.
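The Rothe construction mentioned in this abstract (sequences of time-discrete elliptic problems) can be illustrated, under strong simplifying assumptions, with the 1D linear heat equation and implicit Euler stepping; grid size and coefficients below are illustrative, not taken from the paper.

```python
import numpy as np

# Hedged sketch of the Rothe method: discretise in time first (implicit
# Euler), turning the heat equation into one elliptic (here: linear) solve
# per time step.
n, alpha, dt = 50, 1.0, 1e-3
dx = 1.0 / (n + 1)
x = np.linspace(dx, 1.0 - dx, n)
u = np.sin(np.pi * x)                       # initial state, u = 0 at both walls

# (I - dt * alpha * Laplacian) u_new = u_old
r = alpha * dt / dx**2
A = (np.eye(n) * (1 + 2 * r)
     + np.diag([-r] * (n - 1), 1)
     + np.diag([-r] * (n - 1), -1))

for _ in range(100):                        # advance to t = 0.1
    u = np.linalg.solve(A, u)

ratio = u.max() / np.exp(-np.pi**2 * 0.1)   # compare with the exact modal decay
print(round(ratio, 2))                      # close to 1
```

Each time step is a well-posed elliptic problem, which is exactly the structure the existence arguments based on Rothe sequences exploit.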
On Two-Scale Modelling of Heat and Mass Transfer
Vala, J.; Št'astník, S.
2008-09-01
Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.
GA-4 half-scale cask model fabrication
International Nuclear Information System (INIS)
Meyer, R.J.
1995-01-01
Unique fabrication experience was gained during the construction of a half-scale model of the GA-4 Legal Weight Truck Cask. Techniques were developed for forming, welding, and machining XM-19 stainless steel. Noncircular 'rings' of depleted uranium were cast and machined to close tolerances. The noncircular cask body, gamma shield, and cavity liner were produced using a nonconventional approach in which components were first machined to final size and then welded together using a low-distortion electron beam process. Special processes were developed for fabricating the bonded aluminum honeycomb impact limiters. The innovative design of the cask internals required precision deep hole drilling, low-distortion welding, and close tolerance machining. Valuable lessons learned were documented for use in future manufacturing of full-scale prototype and production units
Iso-scaling in a microcanonical multifragmentation model
International Nuclear Information System (INIS)
Raduta, R.; Raduta, H.
2003-01-01
A microcanonical multifragmentation model is used to investigate iso-scaling over a broad range of excitation energies, for several values of freeze-out volume and equilibrated sources with masses between 40 and 200 in both primary and asymptotic stages of the decay. It was found that the values of the slope parameters α and β depend on the size and excitation energy of the source and are affected by the secondary decay of primary fragments. It was evidenced that iso-scaling is affected by finite size effects. The evolution of the differences of neutron and proton chemical potentials corresponding to two equilibrated nuclear sources having the same size and different isospin values with temperature and freeze-out volume is presented. (authors)
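The slope parameters α and β referred to in this abstract come from the standard iso-scaling relation R21(N,Z) = C·exp(αN + βZ) between fragment yields of two sources. A minimal grand-canonical toy (not the paper's microcanonical model; all potentials and the temperature are invented) shows how α is extracted:

```python
import numpy as np

# Hedged grand-canonical toy of iso-scaling: yields ~ exp((mu_n*N + mu_p*Z)/T),
# so the yield ratio of two sources is exponential in N and Z.
def yields(mu_n, mu_p, T, N, Z):
    return np.exp((mu_n * N + mu_p * Z) / T)

T = 4.0                                    # MeV, illustrative temperature
N = np.arange(1, 8); Z = 3                 # isotopes of a single element
r21 = yields(-8.0, -12.0, T, N, Z) / yields(-10.0, -12.0, T, N, Z)

alpha = np.polyfit(N, np.log(r21), 1)[0]   # slope of ln R21 versus N
print(round(alpha, 3))                     # 0.5 = (mu_n1 - mu_n2) / T
```

This is why the abstract tracks the difference of neutron chemical potentials: in the simplest picture α is exactly Δμ_n/T.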
Light moduli in almost no-scale models
International Nuclear Information System (INIS)
Buchmueller, Wilfried; Moeller, Jan; Schmidt, Jonas
2009-09-01
We discuss the stabilization of the compact dimension for a class of five-dimensional orbifold supergravity models. Supersymmetry is broken by the superpotential on a boundary. Classically, the size L of the fifth dimension is undetermined, with or without supersymmetry breaking, and the effective potential is of no-scale type. The size L is fixed by quantum corrections to the Kähler potential, the Casimir energy and Fayet-Iliopoulos (FI) terms localized at the boundaries. For an FI scale of order M_GUT, as in heterotic string compactifications with anomalous U(1) symmetries, one obtains L ∝ 1/M_GUT. A small mass is predicted for the scalar fluctuation associated with the fifth dimension, m_ρ ∼ m_{3/2}/(LM). (orig.)
Research on large-scale wind farm modeling
Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng
2017-01-01
Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid has a much greater impact on the power system than a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. An effective wind turbine generator (WTG) model must be established first. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed building process of the model. It then surveys common wind farm modeling methods and points out the problems encountered. Since WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.
Pore-scale modeling of phase change in porous media
Juanes, Ruben; Cueto-Felgueroso, Luis; Fu, Xiaojing
2017-11-01
One of the main open challenges in pore-scale modeling is the direct simulation of flows involving multicomponent mixtures with complex phase behavior. Reservoir fluid mixtures are often described through cubic equations of state, which makes diffuse interface, or phase field theories, particularly appealing as a modeling framework. What is still unclear is whether equation-of-state-driven diffuse-interface models can adequately describe processes where surface tension and wetting phenomena play an important role. Here we present a diffuse interface model of single-component, two-phase flow (a van der Waals fluid) in a porous medium under different wetting conditions. We propose a simplified Darcy-Korteweg model that is appropriate to describe flow in a Hele-Shaw cell or a micromodel, with a gap-averaged velocity. We study the ability of the diffuse-interface model to capture capillary pressure and the dynamics of vaporization/condensation fronts, and show that the model reproduces pressure fluctuations that emerge from abrupt interface displacements (Haines jumps) and from the break-up of wetting films.
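A quick way to see why the single-component van der Waals fluid used in this diffuse-interface model supports two-phase behavior is the reduced equation of state; this is a textbook sketch, independent of the paper's Darcy-Korteweg formulation.

```python
import numpy as np

# Hedged textbook sketch: reduced van der Waals equation of state. Subcritical
# isotherms are non-monotonic (van der Waals loop), which is what admits
# liquid-vapour coexistence and hence vaporization/condensation fronts.
def p_reduced(v, t):
    """Reduced pressure as a function of reduced volume v and temperature t."""
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v**2

v = np.linspace(0.6, 5.0, 400)
sub = p_reduced(v, 0.9)   # below the critical temperature: loop present
sup = p_reduced(v, 1.1)   # above it: monotonic, single phase

print(bool(np.any(np.diff(sub) > 0)), bool(np.all(np.diff(sup) < 0)))  # True True
```

The mechanically unstable rising branch of the subcritical isotherm is the region a diffuse-interface model regularizes with its gradient-energy (Korteweg) terms.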
A multi-scale adaptive model of residential energy demand
International Nuclear Information System (INIS)
Farzan, Farbod; Jafari, Mohsen A.; Gong, Jie; Farzan, Farnaz; Stryker, Andrew
2015-01-01
Highlights: • We extend an energy demand model to investigate changes in behavioral and usage patterns. • The model is capable of analyzing why demand behaves the way it does. • The model empowers decision makers to investigate DSM strategies and their effectiveness. • The model provides a means to measure the effect of energy prices on the daily profile. • The model considers the coupling effects of adopting multiple new technologies. - Abstract: In this paper, we extend a previously developed bottom-up energy demand model such that it can be used to determine changes in behavioral and energy usage patterns of a community when: (i) new load patterns from Plug-in Electric Vehicles (PEV) or other devices are introduced; (ii) new technologies and smart devices are used within premises; and (iii) new Demand Side Management (DSM) strategies, such as price-responsive demand, are implemented. Unlike time series forecasting methods that rely solely on historical data, the model only uses a minimal amount of data at the atomic level for its basic constructs. These basic constructs can be integrated into a household unit or a community model using rules and connectors that are, in principle, flexible and can be altered according to the type of questions that need to be answered. Furthermore, the embedded dynamics of the model work on the basis of: (i) a Markovian stochastic model for simulating human activities, (ii) Bayesian and logistic technology adoption models, and (iii) optimization- and rule-based models to respond to price signals without compromising users' comfort. The proposed model is not intended to replace traditional forecasting models. Instead it provides an analytical framework that can be used at the design stage of new products and communities to evaluate design alternatives. The framework can also be used to answer questions such as why demand behaves the way it does, by examining demands at different scales and by playing What-If games. These
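The Markovian activity construct in item (i) can be sketched as a toy two-state occupant model driving an appliance load profile. All transition probabilities and wattages below are invented placeholders, not the paper's calibrated values.

```python
import numpy as np

# Hedged sketch: a two-state occupant (active at home / away) simulated as a
# Markov chain, each state carrying an illustrative mean appliance load.
rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],      # active -> (active, away)
              [0.2, 0.8]])     # away   -> (active, away)
watts = np.array([800.0, 100.0])

state, loads = 0, []
for _ in range(24 * 4):        # one day at 15-minute resolution
    loads.append(watts[state])
    state = rng.choice(2, p=P[state])

profile = np.array(loads)
# the long-run mean tends toward the stationary mix (2/3 active, 1/3 away)
print(round(float(profile.mean()), 1))
```

Stacking many such occupant chains, each with its own transition matrix, is the usual way a bottom-up model builds a community-level demand profile.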
Large Scale Computing for the Modelling of Whole Brain Connectivity
DEFF Research Database (Denmark)
Albers, Kristoffer Jon
organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...
The breaking of Bjorken scaling in the covariant parton model
International Nuclear Information System (INIS)
Polkinghorne, J.C.
1976-01-01
Scale breaking is investigated in terms of a covariant parton model formulation of deep inelastic processes. It is shown that a consistent theory requires that the convergence properties of parton-hadron amplitudes be modified as well as the parton being given form factors. Purely logarithmic violation is possible and the resulting model has many features in common with asymptotically free gauge theories. Behaviour at large and small ω and fixed q² is investigated. νW₂ should increase with q² at large ω and decrease with q² at small ω. Heuristic arguments are also given which suggest that the model would lead only to logarithmic modifications of dimensional counting results in purely hadronic deep scattering. (Auth.)
Density Functional Theory and Materials Modeling at Atomistic Length Scales
Directory of Open Access Journals (Sweden)
Swapan K. Ghosh
2002-04-01
We discuss the basic concepts of density functional theory (DFT) as applied to materials modeling at the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT the central equation is a one-particle Schrödinger-like Kohn-Sham equation, classical DFT consists of Boltzmann-type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to the microscopic modeling of molecular interactions and of classical DFT to the mesoscopic modeling of soft condensed matter systems are highlighted.
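The classical-DFT statement in this abstract (noninteracting particles equilibrate to a Boltzmann distribution in an effective potential) can be sketched numerically. The harmonic `v_eff` below is an arbitrary stand-in for the unknown self-consistent effective potential, chosen purely for illustration.

```python
import numpy as np

# Hedged numerical sketch: equilibrium density of noninteracting classical
# particles in a fixed effective potential is a normalised Boltzmann profile.
def boltzmann_density(x, v_eff, beta, n_total):
    w = np.exp(-beta * v_eff(x))
    dx = x[1] - x[0]
    return n_total * w / (w.sum() * dx)    # normalised to n_total particles

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
rho = boltzmann_density(x, lambda s: 0.5 * s**2, beta=1.0, n_total=1.0)

print(round(float(rho.sum() * dx), 6))     # 1.0: density integrates to N
```

In a genuine classical-DFT calculation the potential would itself depend on the density and the profile would be iterated to self-consistency; the fixed-potential case above is the noninteracting limit the abstract refers to.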
Modeling and simulation of large scale stirred tank
Neuville, John R.
The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process through the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and their results were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected earlier from these pilot plants. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process consists of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil circulates the mixture between the top and bottom regions of the tank, while the radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive character that causes excessive erosion of internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the
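For the agitator power demand discussed in this abstract, the standard turbulent correlation P = Np·ρ·N³·D⁵ gives a feel for how power scales between pilot and full-scale vessels. All numbers below (power number, density, speeds, diameters) are illustrative assumptions, not DWPF design values.

```python
# Hedged sketch: turbulent impeller power draw and its scale dependence.
def agitator_power(power_number, rho, rev_per_s, diameter):
    """Impeller power draw (W) in the fully turbulent regime: Np*rho*N^3*D^5."""
    return power_number * rho * rev_per_s**3 * diameter**5

# geometrically similar impellers at 1/3 scale and full scale, run at equal
# tip speed (N scales as 1/lambda), so power scales as lambda^2
full = agitator_power(power_number=5.0, rho=1200.0, rev_per_s=1.0, diameter=1.5)
model = agitator_power(power_number=5.0, rho=1200.0, rev_per_s=3.0, diameter=0.5)
print(full / model)   # 9.0 = lambda^2 for lambda = 3
```

For a Bingham-plastic slurry the power number itself depends on an effective Reynolds number, which is part of why CFD rather than a bare correlation was needed here.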
Traffic assignment models in large-scale applications
DEFF Research Database (Denmark)
Rasmussen, Thomas Kjær
the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times...... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straight-forward task in real......, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo...
Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling
Huber, I.; Archontoulis, S.
2017-12-01
In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge. Regional-scale assessments are highly needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that gain the most environmentally from biochar application, as well as areas where our model predicts a notable yield increase due to the addition of biochar. We present the modifications in both the APSIM-Biochar and pSIMS components that were necessary to facilitate these large-scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for creating its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional-scale simulation analysis is in progress. Preliminary results showed that the model predicts that high-quality soils (particularly those common to Iowa cropping systems) do not receive much, if any, production benefit from biochar. However, soils with low soil organic matter (<0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific: they increase in some areas and decrease in others due to biochar application. In contrast, we found increases in soil organic carbon and plant-available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%), and also dependent on biochar
Gomez, Rapson; Watson, Shaun D.
2017-01-01
For the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) together, this study examined support for a bifactor model, and also the internal consistency reliability and external validity of the factors in this model. Participants (N = 526) were adults from the general community who completed the SPS and SIAS. Confirmatory factor analysis (CFA) of their ratings indicated good support for the bifactor model. For this model, the loadings for all but six items were higher o...
Hydrogen combustion modelling in large-scale geometries
International Nuclear Information System (INIS)
Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.
2014-01-01
Hydrogen risk mitigation based on catalytic recombiners cannot exclude flammable clouds being formed during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Thus combustion modelling in large-scale geometries is one of the remaining severe accident safety issues. At present no combustion model exists which can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore the major attention in model development has to be paid to the adoption of existing approaches or the creation of new ones capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussions and conclusions. (authors)
CHOI, S.; Shi, Y.; Ni, X.; Simard, M.; Myneni, R. B.
2013-12-01
Sparseness of in-situ observations has precluded spatially explicit and accurate mapping of forest biomass. The need for large-scale maps has given rise to various approaches that couple forest biomass with geospatial predictors such as climate, forest type, soil property, and topography. Despite improved modeling techniques (e.g., machine learning and spatial statistics), a common limitation is that the biophysical mechanisms governing tree growth are neglected in these black-box type models. The absence of a priori knowledge may lead to false interpretation of modeled results or unexplainable shifts in outputs due to inconsistent training samples or study sites. Here, we present a gray-box approach combining known biophysical processes and geospatial predictors through parametric optimizations (inversion of reference measures). Total aboveground biomass in forest stands is estimated by incorporating the Forest Inventory and Analysis (FIA) and Parameter-elevation Regressions on Independent Slopes Model (PRISM) data. Two main premises of this research are: (a) the Allometric Scaling and Resource Limitations (ASRL) theory can provide a relationship between tree geometry and local resource availability constrained by environmental conditions; and (b) the zeroth-order theory (size-frequency distribution) can expand individual tree allometry into total aboveground biomass at the forest stand level. In addition to the FIA estimates, two reference maps from the National Biomass and Carbon Dataset (NBCD) and the U.S. Forest Service (USFS) were produced to evaluate the model. This research focuses on a site-scale test of the biomass model to explore the robustness of predictors, and to potentially improve models using additional geospatial predictors such as climatic variables, vegetation indices, soil properties, and lidar-/radar-derived altimetry products (or existing forest canopy height maps). In the results, the optimized ASRL estimates satisfactorily
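Premise (b), expanding individual-tree allometry into stand biomass via a size-frequency distribution, can be sketched as a simple integral of n(d)·m(d) over diameter classes. All coefficients and exponents below are invented placeholders, not fitted ASRL/FIA values.

```python
import numpy as np

# Hedged sketch of the zeroth-order expansion: stand biomass as the
# size-frequency distribution n(d) weighted by individual allometric mass
# m(d) = a * d**b, integrated over stem diameter.
def stand_biomass(d, a=0.1, b=2.4, n0=500.0, k=2.0):
    n = n0 * d ** -k                  # trees per diameter class (power law)
    m = a * d ** b                    # aboveground mass of one tree
    y = n * m
    dd = d[1] - d[0]
    return dd * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal integral

d = np.linspace(5.0, 50.0, 500)       # stem diameters, cm
total = stand_biomass(d)
print(round(float(total)))
```

In the gray-box approach the allometric parameters would be constrained by the ASRL resource-limitation theory rather than fixed a priori as here.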
Evaluation of a distributed catchment scale water balance model
Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.
1993-01-01
The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well and therefore, that a linear relationship between a topographic index and the local water table depth is found to be a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
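The topographic index the conceptual model uses to predict saturation excess runoff is the TOPMODEL-style ln(a/tanβ). A minimal computation with toy values (not MACHYDRO 90 data) shows how the index behaves from steep hillslope to flat valley floor:

```python
import numpy as np

# Hedged sketch of the TOPMODEL-style topographic index ln(a / tan(beta)):
# a is the upslope contributing area per unit contour length, beta the
# local slope angle. Toy values only.
def topographic_index(upslope_area, slope_rad):
    return np.log(upslope_area / np.tan(slope_rad))

a = np.array([10.0, 100.0, 1000.0])      # m^2 per m of contour length
beta = np.radians([15.0, 5.0, 1.0])      # steep hillslope -> flat valley floor
ti = topographic_index(a, beta)
print(ti.round(2))   # the index rises toward the valley floor,
                     # where saturation is predicted first
```

The paper's finding that local water table depth varies linearly with this index is what makes such a cheap, purely topographic quantity usable in catchment-scale modeling.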
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple-region advection-dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
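The Fickian advection-dispersion model that serves as the baseline benchmark here has the classical Ogata-Banks step-input solution. The sketch below uses illustrative column parameters, not the paper's experimental values.

```python
import numpy as np
from math import erfc, exp, sqrt

# Hedged sketch: Ogata-Banks breakthrough curve for the 1D advection-
# dispersion equation with a continuous step input at x = 0.
def breakthrough(t, x, v, D):
    """Relative concentration C/C0 at distance x and time t."""
    a = (x - v * t) / (2.0 * sqrt(D * t))
    b = (x + v * t) / (2.0 * sqrt(D * t))
    return 0.5 * (erfc(a) + exp(v * x / D) * erfc(b))

t = np.linspace(1.0, 400.0, 400)                        # s
c = np.array([breakthrough(tt, x=0.5, v=0.005, D=1e-4) for tt in t])
print(round(float(c[0]), 3), round(float(c[-1]), 3))    # 0.0 1.0: sigmoidal front
```

Note that the Gaussian tails of this solution imply a nonzero concentration arbitrarily far downstream at any positive time, which is exactly the infinite-propagation-speed artifact the purely advective model avoids.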
Energy Technology Data Exchange (ETDEWEB)
Vanoost, D., E-mail: dries.vanoost@kuleuven-kulak.be [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); Steentjes, S. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany); Peuteman, J. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Electrical Energy and Computer Architecture, Heverlee B-3001 (Belgium); Gielen, G. [KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); De Gersem, H. [KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); TU Darmstadt, Institut für Theorie Elektromagnetischer Felder, Darmstadt D-64289 (Germany); Pissoort, D. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); Hameyer, K. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany)
2016-09-15
This paper proposes a multi-scale energy-based material model for poly-crystalline materials. Describing the behaviour of poly-crystalline materials at three spatial scales of dominating physical mechanisms allows accounting for the heterogeneity and multi-axiality of the material behaviour. The three spatial scales are the poly-crystalline, grain and domain scales. Together with appropriate scale-transition rules and models for local magnetic behaviour at each scale, the model is able to describe the magneto-elastic behaviour (magnetostriction and hysteresis) at the macroscale, although the data input is merely a set of physical constants. Introducing a new energy density function that describes the demagnetisation field, the anhysteretic multi-scale energy-based material model is extended to the hysteretic case. The hysteresis behaviour is included at the domain scale according to micro-magnetic domain theory while preserving a valid description of the magneto-elastic coupling. The model is verified using existing measurement data for different mechanical stress levels. - Highlights: • A ferromagnetic hysteretic energy-based multi-scale material model is proposed. • The hysteresis is obtained by a newly proposed hysteresis energy density function. • Tedious parameter identification is avoided.
Device Scale Modeling of Solvent Absorption using MFIX-TFM
Energy Technology Data Exchange (ETDEWEB)
Carney, Janine E. [National Energy Technology Lab. (NETL), Albany, OR (United States); Finn, Justin R. [National Energy Technology Lab. (NETL), Albany, OR (United States); Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States)
2016-10-01
Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane), and fossil fuels account for a large majority of global CO_{2} emissions. That said, fossil fuels will continue to play a significant role in the generation of power for the foreseeable future. The extent to which CO_{2} is emitted needs to be reduced; carbon capture and sequestration are also necessary actions to tackle climate change. Different approaches exist for CO_{2} capture, including post-combustion and pre-combustion technologies, oxy-fuel combustion and/or chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology. To apply CO_{2} capture technologies at commercial scale, the availability, maturity and potential for scalability of the technology need to be considered. Solvent absorption is a proven technology, but not at the scale needed by a typical power plant. The scale-up, scale-down and design of laboratory and commercial packed bed reactors depend heavily on specific knowledge of the two-phase pressure drop, liquid holdup, wetting efficiency and mass transfer efficiency as functions of operating conditions. Simple scaling rules often fail to provide a proper design. Conventional reactor design modeling approaches generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity within these approaches, none of these models resolve the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally, design would be aided by the development of predictive models based on a truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first
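As a baseline for the packed-bed pressure drop mentioned in this abstract, the single-phase Ergun correlation is the usual starting point. The numbers below are illustrative; the two-phase holdup, wetting and maldistribution effects that the CFD effort targets are precisely what this simple correlation cannot capture.

```python
# Hedged sketch: single-phase Ergun equation for pressure drop across a
# packed bed (viscous Blake-Kozeny term plus inertial Burke-Plummer term).
def ergun_dp(u, L, dp, eps, rho, mu):
    """Pressure drop (Pa) over bed length L for superficial velocity u."""
    viscous = 150.0 * mu * (1 - eps) ** 2 * u / (eps**3 * dp**2)
    inertial = 1.75 * rho * (1 - eps) * u**2 / (eps**3 * dp)
    return (viscous + inertial) * L

# air through 1 m of 25 mm random packing at 1 m/s superficial velocity
print(round(ergun_dp(u=1.0, L=1.0, dp=0.025, eps=0.7, rho=1.2, mu=1.8e-5)))  # 75
```

At these conditions the inertial term dominates, which is typical for gas flow through column packing.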
Modelling biological invasions: Individual to population scales at interfaces
Belmonte-Beitia, J.; Woolley, T.E.; Scott, J.G.; Maini, P.K.; Gaffney, E.A.
2013-10-01
Extracting the population level behaviour of biological systems from that of the individual is critical in understanding dynamics across multiple scales and thus has been the subject of numerous investigations. Here, the influence of spatial heterogeneity in such contexts is explored for interfaces with a separation of the length scales characterising the individual and the interface, a situation that can arise in applications involving cellular modelling. As an illustrative example, we consider cell movement between white and grey matter in the brain which may be relevant in considering the invasive dynamics of glioma. We show that while one can safely neglect intrinsic noise, at least when considering glioma cell invasion, profound differences in population behaviours emerge in the presence of interfaces with only subtle alterations in the dynamics at the individual level. Transport driven by local cell sensing generates predictions of cell accumulations along interfaces where cell motility changes. This behaviour is not predicted with the commonly used Fickian diffusion transport model, but can be extracted from preliminary observations of specific cell lines in recent, novel, cryo-imaging. Consequently, these findings suggest a need to consider the impact of individual behaviour, spatial heterogeneity and especially interfaces in experimental and modelling frameworks of cellular dynamics, for instance in the characterisation of glioma cell motility. © 2013 Elsevier Ltd.
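The distinction drawn above between transport driven by local cell sensing and Fickian diffusion can be illustrated with a toy lattice random walk (not taken from the paper): when the jump probability is set by the motility at the walker's current site, walkers accumulate where motility is low, whereas Fickian diffusion with the same D(x) predicts a flat steady-state density. Lattice size, motility values and step counts are arbitrary choices for this sketch.

```python
import random

# Toy "local sensing" walk on a periodic 1D lattice with a motility
# interface: half the domain has high motility, half low. The jump
# probability is evaluated at the DEPARTURE site, so the stationary
# occupancy per site scales as 1/D, unlike the uniform Fickian prediction.

random.seed(0)
N = 20
D = [1.0] * (N // 2) + [0.2] * (N // 2)   # high-motility half, low-motility half

walkers = [random.randrange(N) for _ in range(400)]
for _ in range(3000):
    for k, x in enumerate(walkers):
        if random.random() < D[x] * 0.5:           # jump prob set by current site
            walkers[k] = (x + random.choice((-1, 1))) % N

density_low = sum(1 for x in walkers if D[x] < 1.0) / len(walkers)
density_high = 1.0 - density_low
```

At stationarity the expected occupancy ratio between the two halves is 1/0.2 = 5, so most walkers end up on the low-motility side of the interface.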
9 m side drop test of scale model
International Nuclear Information System (INIS)
Ku, Jeong-Hoe; Chung, Seong-Hwan; Lee, Ju-Chan; Seo, Ki-Seog
1993-01-01
A type B(U) shipping cask had been developed in KAERI for transporting PWR spent fuel. Since the cask is to transport spent PWR fuel, it must be designed to meet all of the structural requirements specified in domestic packaging regulations and IAEA Safety Series No. 6. This paper describes the side drop testing of a one-third scale model cask. The crush and deformation of the shock absorbing covers directly control the decelerations experienced by the cask during the 9 m side drop impact. The shock absorbing covers greatly mitigated the inertia forces on the cask body due to the side drop impact. By comparing the side drop test with finite element analysis, it was verified that the 1/3 scale model cask maintains its structural integrity under the side drop impact. The test and analysis results could be used as basic data to evaluate the structural integrity of the real cask. (J.P.N.)
Workshop on Human Activity at Scale in Earth System Models
Energy Technology Data Exchange (ETDEWEB)
Allen, Melissa R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Coletti, Mark A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kennedy, Joseph H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nair, Sujithkumar S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Omitaomu, Olufemi A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-01-26
Changing human activity within a geographical location may have significant influence on the global climate, but that activity must be parameterized in such a way as to allow these high-resolution sub-grid processes to affect global climate within the modeling framework. Additionally, we must have tools that provide decision support and inform local and regional policies regarding mitigation of and adaptation to climate change. The development of next-generation earth system models that can produce actionable results with minimum uncertainties depends on understanding global climate change and human activity interactions at policy implementation scales. Unfortunately, at best we currently have only limited schemes for relating high-resolution sectoral emissions to real-time weather, which ultimately becomes part of larger regions and the well-mixed atmosphere. Moreover, even our understanding of meteorological processes at these scales is imperfect. This workshop addresses these shortcomings by providing a forum for discussion of what we know about these processes, what we can model, where we have gaps in these areas, and how we can rise to the challenge of filling these gaps.
Challenges of Modeling Flood Risk at Large Scales
Guin, J.; Simic, M.; Rowe, J.
2009-04-01
Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimates that form the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components, so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing
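The event-simulation output described above can be caricatured in a few lines: simulate many years, draw a random number of flood events per year and a loss per event, then read off the average annual loss and a tail quantile. The Poisson rate and the lognormal severity below are invented placeholders, not the model's calibrated components.

```python
import random
import statistics

# Toy annual aggregate flood-loss simulation. Each simulated year draws a
# Poisson number of events (via exponential inter-arrival times) and a
# lognormal loss per event; the sorted annual losses give the loss
# distribution an insurer would price from. All parameters are assumed.

random.seed(42)
LAMBDA = 0.8          # mean flood events per year (assumed)

def simulate_year():
    events, t = 0, random.expovariate(LAMBDA)
    while t < 1.0:                       # count arrivals within one year
        events += 1
        t += random.expovariate(LAMBDA)
    return sum(random.lognormvariate(0.0, 1.0) for _ in range(events))

annual_losses = sorted(simulate_year() for _ in range(10_000))
aal = statistics.mean(annual_losses)                      # average annual loss
pml_100 = annual_losses[int(0.99 * len(annual_losses))]   # ~1-in-100-year loss
```

The 1-in-100-year loss sits far above the average annual loss because the severity distribution is heavily right-skewed, which is exactly why event-based simulation is preferred over simple scaling of averages.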
Scale Adaptive Simulation Model for the Darrieus Wind Turbine
DEFF Research Database (Denmark)
Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.
2016-01-01
Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine...... the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads...
Enhanced learning through scale models and see-thru visualization
International Nuclear Information System (INIS)
Kelley, M.D.
1987-01-01
The development of PowerSafety International's See-Thru Power Plant has provided the nuclear industry with a bridge that can span the gap between the part-task simulator and the full-scope, high-fidelity plant simulator. The principle behind the See-Thru Power Plant is to provide the use of sensory experience in nuclear training programs. The See-Thru Power Plant is a scaled down, fully functioning model of a commercial nuclear power plant, equipped with a primary system, secondary system, and control console. The major components are constructed of glass, thus permitting visual conceptualization of a working nuclear power plant
LBM estimation of thermal conductivity in meso-scale modelling
International Nuclear Information System (INIS)
Grucelski, A
2016-01-01
Recently, there has been growing engineering interest in more rigorous prediction of effective transport coefficients for multicomponent, geometrically complex materials. We present the main assumptions and constituents of the meso-scale model for the simulation of coal or biomass devolatilisation with the Lattice Boltzmann method. As results, the estimated values of the thermal conductivity coefficient of coal (solids), pyrolytic gases and the air matrix are presented for a non-steady state, accounting for chemical reactions in the fluid flow and heat transfer. (paper)
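As a hedged sketch of the kind of kernel a Lattice Boltzmann model is built from, the fragment below solves pure diffusion with a BGK collision on a D1Q2 lattice, where the relaxation time tau sets the diffusivity D = tau - 1/2 in lattice units. The meso-scale model above couples many such steps with reaction and flow, which this sketch omits; lattice size and tau are arbitrary.

```python
import math

# D1Q2 BGK lattice Boltzmann for diffusion, unit spacing and time step.
# Two populations stream left/right; collision relaxes each toward rho/2.
# A single sine mode should decay as exp(-D k^2 t) with D = tau - 1/2.

N, tau, steps = 64, 1.0, 200
D = tau - 0.5

rho0 = [math.sin(2 * math.pi * i / N) for i in range(N)]
f_plus = [r / 2 for r in rho0]       # right-moving population
f_minus = [r / 2 for r in rho0]      # left-moving population

for _ in range(steps):
    rho = [p + m for p, m in zip(f_plus, f_minus)]
    # collide: relax toward the equilibrium rho/2
    f_plus = [p + (r / 2 - p) / tau for p, r in zip(f_plus, rho)]
    f_minus = [m + (r / 2 - m) / tau for m, r in zip(f_minus, rho)]
    # stream one cell with periodic boundaries
    f_plus = [f_plus[-1]] + f_plus[:-1]
    f_minus = f_minus[1:] + [f_minus[0]]

rho = [p + m for p, m in zip(f_plus, f_minus)]
k = 2 * math.pi / N
expected = math.exp(-D * k * k * steps)   # analytic decay of the mode
amplitude = max(rho)
```

After 200 steps the surviving amplitude of the sine mode closely matches the analytic decay, and the total density is conserved to round-off, which is the basic sanity check before adding reaction and heat-transfer source terms.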
Large-scale modeling of rain fields from a rain cell deterministic model
Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia
2006-04-01
A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
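The large-scale step described above, turning a correlated Gaussian field into a binary raining/not-raining mask with a prescribed occupation rate, can be sketched as follows. The box-filter smoothing used for spatial correlation and the 15% occupation rate are placeholder choices, not the HYCELL or ARAMIS-derived parameterization.

```python
import numpy as np

# Sketch: correlated Gaussian field -> binary rain-occupancy mask.
# Thresholding at the (1 - rate) quantile guarantees that the prescribed
# fraction of the domain is "raining". Grid size and kernel are arbitrary.

rng = np.random.default_rng(1)
field = rng.standard_normal((200, 200))

# crude isotropic correlation: box-filter the white noise a few times
for _ in range(3):
    field = (field
             + np.roll(field, 1, 0) + np.roll(field, -1, 0)
             + np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 5.0

occupation_rate = 0.15                       # target raining fraction (assumed)
threshold = np.quantile(field, 1.0 - occupation_rate)
raining = field > threshold                  # binary large-scale rain mask
```

In the full methodology the Gaussian covariance would be anisotropic to mimic frontal structure, and each raining midscale subarea would then be populated with HYCELL rain cells.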
Photorealistic large-scale urban city model reconstruction.
Poullis, Charalambos; You, Suya
2009-01-01
The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).
International Nuclear Information System (INIS)
Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.
1983-01-01
This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
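The search problem described above can be illustrated with a one-dimensional caricature: plain gradient descent on an energy with two minima only ever finds the basin it starts in, which is why refinement needs both an accurate energy function and a search that can escape local minima. The energy function below is arbitrary, not a protein force field.

```python
# Tilted double-well energy: the global minimum is near x = -1, but descent
# started on the right converges to the shallower local minimum near x = +1.

def energy(x):
    return (x ** 2 - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x ** 2 - 1.0) + 0.3

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_left, x_right = descend(-0.5), descend(0.5)
# Both runs converge, but only the left start reaches the global minimum.
```

A false minimum introduced by an inaccurate energy function behaves exactly like the right-hand well here: the search converges confidently to the wrong structure.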
A Dynamic Pore-Scale Model of Imbibition
DEFF Research Database (Denmark)
Mogensen, Kristian; Stenby, Erling Halfdan
1998-01-01
We present a dynamic pore-scale network model of imbibition, capable of calculating residual oil saturation for any given capillary number, viscosity ratio, contact angle and aspect ratio. Our goal is not to predict the outcome of core floods, but rather to perform a sensitivity analysis...... of the above-mentioned parameters, except the viscosity ratio. We find that contact angle, aspect ratio and capillary number all have a significant influence on the competition between piston-like advance, leading to high recovery, and snap-off, causing oil entrapment. Due to enormous CPU-time requirements we...... been entirely inhibited, in agreement with results obtained by Blunt using a quasi-static model. For higher aspect ratios, the effect of rate and contact angle is more pronounced. Many core floods are conducted at capillary numbers in the range 10 to 10.6. We believe that the excellent recoveries......
Uncertainty Quantification for Large-Scale Ice Sheet Modeling
Energy Technology Data Exchange (ETDEWEB)
Ghattas, Omar [Univ. of Texas, Austin, TX (United States)
2016-02-05
This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.
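Component (3) rests on the Laplace idea: near the MAP point the negative log-posterior is approximately quadratic, so the posterior covariance is the inverse Hessian. A one-parameter toy version with invented data is sketched below; the actual solver applies this with PDE constraints in very high dimension.

```python
import math

# Hessian-based (Laplace) uncertainty for a 1-parameter inverse problem:
# infer an unknown mean m from noisy observations. Data and noise level
# are invented for illustration.

data = [2.1, 1.9, 2.2, 2.0]      # assumed observations
sigma_noise = 0.2

def neg_log_post(m):
    # Gaussian misfit (flat prior): quadratic in m
    return sum((d - m) ** 2 for d in data) / (2 * sigma_noise ** 2)

m_map = sum(data) / len(data)                 # minimizer of the misfit
hessian = len(data) / sigma_noise ** 2        # d^2(neg_log_post)/dm^2
post_std = math.sqrt(1.0 / hessian)           # Laplace posterior std
```

The same recipe propagates forward: a quantity of interest q(m) inherits an approximate standard deviation |dq/dm| * post_std, which is how uncertainty in the inferred basal conditions flows into predicted ice mass flux.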
Models for inflation with a low supersymmetry-breaking scale
International Nuclear Information System (INIS)
Binetruy, P.; California Univ., Santa Barbara; Mahajan, S.; California Univ., Berkeley
1986-01-01
We present models where the same scalar field is reponsible for inflation and for the breaking of supersymmetry. The scale of supersymmetry breaking is related to the slope of the potential in the plateau region described by the scalar field during the slow rollover, and the gravitino mass can therefore be kept as small as Msub(W), the mass of the weak gauge boson. We show that such a result is stable under radiative corrections. We describe the inflationary scenario corresponding to the simplest of these models and show that no major problem arises, except for a violation of the thermal constraint (stabilization of the field in the plateau region at high temperature). We discuss the possibility of introducing a second scalar field to satisfy this constraint. (orig.)
Regional Scale Modelling for Exploring Energy Strategies for Africa
International Nuclear Information System (INIS)
Welsch, M.
2015-01-01
KTH Royal Institute of Technology was founded in 1827 and is the largest technical university in Sweden, with five campuses and around 15,000 students. KTH-dESA combines outstanding knowledge in the field of energy systems analysis, as demonstrated by successful collaborations with many (UN) organizations. Regional scale modelling for exploring energy strategies for Africa includes assessing renewable energy potentials; analysing investment strategies; assessing climate resilience; comparing electrification options; providing web-based decision support; and quantifying energy access. It is concluded that strategies are required to ensure a robust and flexible energy system (-> no-regret choices); capacity investments should be in line with national and regional strategies; climate change is important to consider, as it may strongly influence the energy flows in a region; and long-term models can help identify robust energy investment strategies and pathways, which can help assess future markets and the profitability of individual projects.
Scale modeling flow-induced vibrations of reactor components
International Nuclear Information System (INIS)
Mulcahy, T.M.
1982-06-01
Similitude relationships currently employed in the design of flow-induced vibration scale-model tests of nuclear reactor components are reviewed. Emphasis is given to understanding the origins of the similitude parameters as a basis for discussion of the inevitable distortions which occur in design verification testing of entire reactor systems and in feature testing of individual component designs for the existence of detrimental flow-induced vibration mechanisms. Distortions of similitude parameters made in current test practice are enumerated and selected example tests are described. Also, limitations in the use of specific distortions in model designs are evaluated based on the current understanding of flow-induced vibration mechanisms and structural response
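The bookkeeping behind such similitude design can be illustrated with the reduced velocity U/(fD), one of the standard flow-induced-vibration parameters: matching it between model and prototype fixes the model test speed once the geometric and frequency scales are chosen. The numbers are invented, and the definition follows common usage, not necessarily this report's conventions.

```python
# Reduced-velocity matching for a hypothetical quarter-scale FIV model test.
# All prototype values are assumed example numbers.

def reduced_velocity(U, f, D):
    """U: flow speed (m/s), f: natural frequency (Hz), D: diameter (m)."""
    return U / (f * D)

U_p, f_p, D_p = 3.0, 8.0, 0.5      # prototype: speed, frequency, diameter
lam = 0.25                          # geometric scale of the model
f_m, D_m = f_p / lam, D_p * lam     # model frequency raised by 1/lam

# flow speed that keeps the reduced velocity equal to the prototype's
U_m = reduced_velocity(U_p, f_p, D_p) * f_m * D_m
```

Because f_m * D_m equals f_p * D_p in this scaling, the model is tested at the full prototype speed; distorting the frequency scale (as often forced by material choices) is exactly the kind of similitude compromise the report enumerates.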
Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.
2017-12-01
Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.
Multi-scale modelling for HEDP experiments on Orion
Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.
2016-05-01
The Orion laser at AWE couples high energy long-pulse lasers with high intensity short-pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge, the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high intensity lasers. We describe the approach taken at AWE; to integrate a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.
Parameter study on dynamic behavior of ITER tokamak scaled model
International Nuclear Information System (INIS)
Nakahira, Masataka; Takeda, Nobukazu
2004-12-01
This report summarizes a study on the dynamic behavior of the ITER tokamak scaled model, based on a parametric analysis of base plate thickness, aimed at finding a reasonable solution that gives sufficient rigidity without affecting the dynamic behavior. For this purpose, modal analyses were performed changing the base plate thickness from the present design of 55 mm to 100 mm, 150 mm and 190 mm. Using these results, a modification plan for the plate thickness was studied. It was found that a thickness of 150 mm brings the 1st natural frequency to about 90% of the ideal rigid case. A modification study was then performed to find an adequate plate thickness. Considering material availability, transportation and weldability, it was found that a 300 mm thickness would be the limit. The analysis result for the 300 mm case showed the 1st natural frequency reaching 97% of the ideal rigid case. It was, however, found that the bolt length became too long and introduced an additional twisting mode. As a result, it was concluded that a base plate thickness of 150 mm or 190 mm gives sufficient rigidity for the dynamic behavior of the scaled model. (author)
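A toy two-spring picture reproduces the qualitative trend reported above: the base plate acts as a spring in series with the structure, and its bending stiffness grows roughly as thickness cubed, so the 1st natural frequency approaches the rigid-base value as the plate thickens. The constants below are tuned arbitrarily for the sketch, not derived from the ITER model.

```python
import math

# Two springs in series: structure stiffness k_struct on a base plate whose
# stiffness scales as thickness^3. The frequency ratio to the rigid-base
# case is sqrt(k_eff / k_struct), since the same mass cancels. The
# coefficient c_base is an invented tuning constant.

def frequency_ratio(t_mm, k_struct=1.0, c_base=1.26e-6):
    k_base = c_base * t_mm ** 3                       # plate stiffness ~ t^3
    k_eff = 1.0 / (1.0 / k_struct + 1.0 / k_base)     # springs in series
    return math.sqrt(k_eff / k_struct)

for t in (55, 100, 150, 190, 300):
    print(t, round(frequency_ratio(t), 3))
```

The cubic stiffness growth explains why the gain from 150 mm to 300 mm is modest: beyond a point the base plate is already much stiffer than the structure, and practical limits (weldability, bolt length) dominate the design choice.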
Physically representative atomistic modeling of atomic-scale friction
Dong, Yalin
Nanotribology is a research field that studies friction, adhesion, wear and lubrication occurring between two sliding interfaces at the nanoscale. This study is motivated by the demanding need for miniaturized mechanical components in Micro Electro Mechanical Systems (MEMS), improvement of durability in magnetic storage systems, and other industrial applications. Overcoming tribological failure and finding ways to control friction at small scales have become keys to commercializing MEMS with sliding components, as well as to stimulating the technological innovation associated with the development of MEMS. In addition to the industrial applications, such research is also scientifically fascinating because it opens a door to understanding macroscopic friction from the bottom-most atomic level, and therefore serves as a bridge between science and engineering. This thesis focuses on solid/solid atomic friction and its associated energy dissipation through theoretical analysis, atomistic simulation, transition state theory, and close collaboration with experimentalists. Reduced-order models have many advantages owing to their simplicity and capacity to simulate long-time events. We apply Prandtl-Tomlinson models and their extensions to interpret dry atomic-scale friction. We begin with the fundamental equations and build on them step by step, from the simple quasistatic one-spring, one-mass model for predicting transitions between friction regimes to the two-dimensional and multi-atom models for describing the effect of contact area. Theoretical analysis, numerical implementation, and predicted physical phenomena are all discussed. In the process, we demonstrate the significant potential for this approach to yield new fundamental understanding of atomic-scale friction. Atomistic modeling can never be overemphasized in the investigation of atomic friction, in which each single atom could play a significant role but is hard to capture experimentally. In atomic friction, the
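A minimal version of the one-dimensional Prandtl-Tomlinson model discussed above: a tip dragged by a spring across a sinusoidal surface potential, integrated in the overdamped limit, producing the stick-slip sawtooth in the spring force whose time average is the friction force. All parameter values are illustrative only, chosen so the corrugation-to-spring ratio puts the system in the stick-slip regime.

```python
import math

# Overdamped 1D Prandtl-Tomlinson model. The tip at position x is pulled
# by a spring of stiffness k anchored to a support moving at speed v, over
# the surface potential U(x) = -U0 * cos(2*pi*x/a). Parameters are
# illustrative (stick-slip requires (2*pi)^2 * U0 / (k * a^2) > 1).

a, U0, k, eta, v = 1.0, 0.2, 2.0, 1.0, 0.1   # lattice const, corrugation,
dt, steps = 1e-3, 200_000                    # spring, damping, drive speed

x, t = 0.0, 0.0
forces = []
for _ in range(steps):
    support = v * t
    f_spring = k * (support - x)                             # pulls the tip
    f_surface = -(2 * math.pi * U0 / a) * math.sin(2 * math.pi * x / a)
    x += (f_spring + f_surface) / eta * dt                   # overdamped Euler
    t += dt
    forces.append(f_spring)

mean_friction = sum(forces) / len(forces)   # positive: dissipative stick-slip
```

The spring force traces a sawtooth as the tip sticks in successive lattice minima and slips forward; its positive time average is the kinetic friction force, and the area under each sawtooth tooth is the energy dissipated per slip.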
Lithospheric-scale centrifuge models of pull-apart basins
Corti, Giacomo; Dooley, Tim P.
2015-11-01
We present here the results of the first lithospheric-scale centrifuge models of pull-apart basins. The experiments simulate relative displacement of two lithospheric blocks along two offset master faults, with the presence of a weak zone in the offset area localising deformation during strike-slip displacement. Reproducing the entire lithosphere-asthenosphere system provides boundary conditions that are more realistic than the horizontal detachment in traditional 1 g experiments and thus provide a better approximation of the dynamic evolution of natural pull-apart basins. Model results show that local extension in the pull-apart basins is accommodated through development of oblique-slip faulting at the basin margins and cross-basin faults obliquely cutting the rift depression. As observed in previous modelling studies, our centrifuge experiments suggest that the angle of offset between the master fault segments is one of the most important parameters controlling the architecture of pull-apart basins: the basins are lozenge shaped in the case of underlapping master faults, lazy-Z shaped in case of neutral offset and rhomboidal shaped for overlapping master faults. Model cross sections show significant along-strike variations in basin morphology, with transition from narrow V- and U-shaped grabens to a more symmetric, boxlike geometry passing from the basin terminations to the basin centre; a flip in the dominance of the sidewall faults from one end of the basin to the other is observed in all models. These geometries are also typical of 1 g models and characterise several pull-apart basins worldwide. Our models show that the complex faulting in the upper brittle layer corresponds at depth to strong thinning of the ductile layer in the weak zone; a rise of the base of the lithosphere occurs beneath the basin, and maximum lithospheric thinning roughly corresponds to the areas of maximum surface subsidence (i.e., the basin depocentre).
Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh
2011-01-01
Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...
A rate-dependent multi-scale crack model for concrete
Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.
2013-01-01
A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate
Global fits of GUT-scale SUSY models with GAMBIT
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.
Global fits of GUT-scale SUSY models with GAMBIT
Energy Technology Data Exchange (ETDEWEB)
Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. 
[University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration
2017-12-15
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)
Urban scale air quality modelling using detailed traffic emissions estimates
Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.
2016-04-01
The atmospheric dispersion of NOx and PM10 was simulated with a second generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a 1 week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation significantly improves when background is added, with an average of 0.89 for the 24 h record. The results highlight the potential of detailed traffic and instantaneous exhaust emissions estimates, together with filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.
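For orientation, second-generation Gaussian models elaborate on the classic steady-state Gaussian plume. A minimal textbook-style sketch, in which the power-law dispersion-parameter fits are generic assumptions rather than the study's own formulation:

```python
import math

# Minimal steady-state Gaussian plume (textbook form, not the
# second-generation model used in the study).

def gaussian_plume(Q, u, x, y, z, H):
    """Concentration (g/m^3) at (x, y, z) downwind of a point source of
    strength Q (g/s), wind speed u (m/s), effective stack height H (m)."""
    sigma_y = 0.08 * x * (1 + 0.0001 * x) ** -0.5   # assumed dispersion fit
    sigma_z = 0.06 * x * (1 + 0.0015 * x) ** -0.5   # assumed dispersion fit
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))  # ground reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 1 g/s source, 3 m/s wind, receptor 500 m downwind on the plume axis
c = gaussian_plume(Q=1.0, u=3.0, x=500.0, y=0.0, z=1.5, H=10.0)
```

The study's workflow then adds a filtered background concentration to such modelled increments before comparing against monitor data.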
Scale problems in assessment of hydrogeological parameters of groundwater flow models
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
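The conventional upscaling summarized here starts from the classical averaging rules: the arithmetic mean of cell conductivities bounds the block value from above (flow parallel to layering), the harmonic mean bounds it from below (flow across layering), and the geometric mean is a common intermediate heuristic. A small sketch with hypothetical sample-scale conductivities:

```python
import math

# Classical bounds for upscaling hydraulic conductivity from sample-scale
# cells to one block value. Sample conductivities (m/s) are hypothetical.

K = [1e-6, 5e-5, 2e-4, 8e-6]

arith = sum(K) / len(K)                               # upper (Wiener) bound
harm = len(K) / sum(1.0 / k for k in K)               # lower (Wiener) bound
geom = math.exp(sum(math.log(k) for k in K) / len(K)) # common 2-D heuristic
# harm <= geom <= arith holds for any positive sample set
```

Real block-scale estimators depend on flow geometry and spatial correlation, but they must fall between these two bounds.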
Scale problems in assessment of hydrogeological parameters of groundwater flow models
Directory of Open Access Journals (Sweden)
Nawalany Marek
2015-09-01
Full Text Available An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale – scale of pores, meso-scale – scale of laboratory sample, macro-scale – scale of typical blocks in numerical models of groundwater flow, local-scale – scale of an aquifer/aquitard and regional-scale – scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
Site-scale groundwater flow modelling of Ceberg
Energy Technology Data Exchange (ETDEWEB)
Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)
1999-06-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10^-4 and a flow-wetted surface area of a_r = 0.1 m^2/(m^3 rock): The median travel time is 1720 years. The median canister flux is 3.27x10^-5 m/year. The median F-ratio is 1.72x10^6 years/m. The base case and the deterministic variant suggest that the variability of the travel times within
Site-scale groundwater flow modelling of Ceberg
International Nuclear Information System (INIS)
Walker, D.; Gylling, B.
1999-06-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10^-4 and a flow-wetted surface area of a_r = 0.1 m^2/(m^3 rock): The median travel time is 1720 years. The median canister flux is 3.27x10^-5 m/year. The median F-ratio is 1.72x10^6 years/m. The base case and the deterministic variant suggest that the variability of the travel times within individual realisations is due to the
Impact of Scattering Model on Disdrometer Derived Attenuation Scaling
Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)
2016-01-01
NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability to both predict the 40 GHz attenuation from the disdrometer and the 20 GHz time series as well as to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model; however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
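The derivation chain from measured DSD to specific attenuation is an integral of the extinction cross-section over drop sizes. The skeleton below substitutes a crude optical-limit placeholder for the Mie or T-matrix cross-section, so the numbers are purely illustrative of the bookkeeping, not of either scattering model:

```python
import math

# DSD -> specific attenuation skeleton. gamma (dB/km) integrates the
# extinction cross-section over the drop size distribution. The scattering
# model is replaced by a placeholder: C_ext ~ 2 * geometric area
# (optical limit), NOT the Mie or T-matrix computation under study.

def specific_attenuation(dsd):
    """dsd: list of (D_mm, N_per_m3) bins (bin width folded into N)."""
    gamma = 0.0
    for D_mm, N in dsd:
        area_m2 = math.pi * (0.5 * D_mm * 1e-3) ** 2
        c_ext = 2.0 * area_m2          # placeholder extinction cross-section
        gamma += 4.343e3 * c_ext * N   # 4.343e3 converts 1/m to dB/km
    return gamma

# Hypothetical exponential (Marshall-Palmer-like) DSD in 0.5 mm bins
dsd = [(D, 8000 * math.exp(-2.0 * D) * 0.5) for D in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0)]
gamma = specific_attenuation(dsd)
```

Swapping the placeholder `c_ext` for a frequency-dependent Mie or T-matrix value is exactly the comparison the paper performs.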
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
DEFF Research Database (Denmark)
King, Zachary A.; Lu, Justin; Dräger, Andreas
2016-01-01
Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repo...
Numerical Modeling of Large-Scale Rocky Coastline Evolution
Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.
2008-12-01
Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment
Multi-scale salient feature extraction on mesh models
Yang, Yongliang; Shen, ChaoHui
2012-01-01
We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The coincidence between salient feature and the scale of interest can be established straightforwardly, where detailed feature appears on small scale and feature with more global shape information shows up on large scale. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications such as feature classification and viewpoint selection. Experiments exhibit that our method as a multi-scale analysis tool is very helpful for studying 3D shapes. © 2012 Springer-Verlag.
Small-scale engagement model with arrivals: analytical solutions
International Nuclear Information System (INIS)
Engi, D.
1977-04-01
This report presents an analytical model of small-scale battles. The specific impetus for this effort was provided by a need to characterize hypothetical battles between guards at a nuclear facility and their potential adversaries. The solution procedure can be used to find measures of a number of critical parameters; for example, the win probabilities and the expected duration of the battle. Numerical solutions are obtainable if the total number of individual combatants on the opposing sides is less than 10. For smaller force-size battles, with one or two combatants on each side, symbolic solutions can be found. The symbolic solutions express the output parameters abstractly in terms of symbolic representations of the input parameters while the numerical solutions are expressed as numerical values. The input parameters are derived from the probability distributions of the attrition and arrival processes. The solution procedure reduces to solving sets of linear equations that have been constructed from the input parameters. The approach presented in this report does not address the problems associated with measuring the inputs. Rather, this report attempts to establish a relatively simple structure within which small-scale battles can be studied.
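The flavor of such symbolic solutions shows up already in the smallest case: a one-on-one engagement modeled as a race between two exponential kill processes. This is an illustrative reduction, not the report's full model with arrivals:

```python
# One-on-one engagement as a race between two exponential "kill" clocks:
# whichever side scores first wins. With exponential rates the win
# probabilities and expected duration have closed symbolic forms.

def duel(rate_a, rate_b):
    """Return (P(A wins), P(B wins), expected duration) for constant
    kill rates rate_a, rate_b (kills per unit time)."""
    total = rate_a + rate_b
    return rate_a / total, rate_b / total, 1.0 / total

p_a, p_b, duration = duel(rate_a=2.0, rate_b=1.0)  # A fires twice as effectively
```

Larger force sizes turn this into a Markov chain over surviving-combatant states, which is where the report's sets of linear equations come from.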
Multi-scale modeling in morphogenesis: a critical analysis of the cellular Potts model.
Directory of Open Access Journals (Sweden)
Anja Voss-Böhme
Full Text Available Cellular Potts models (CPMs) are used as a modeling framework to elucidate mechanisms of biological development. They allow a spatial resolution below the cellular scale and are applied particularly when problems are studied where multiple spatial and temporal scales are involved. Despite the increasing usage of CPMs in theoretical biology, this model class has received little attention from mathematical theory. To narrow this gap, the CPMs are subjected to a theoretical study here. It is asked to what extent the updating rules establish an appropriate dynamical model of intercellular interactions, and what characterizes the principal behavior at different time scales. It is shown that the longtime behavior of a CPM is degenerate in the sense that the cells consecutively die out, independent of the specific interdependence structure that characterizes the model. While CPMs are naturally defined on finite, spatially bounded lattices, possible extensions to spatially unbounded systems are explored to assess to what extent spatio-temporal limit procedures can be applied to describe the emergent behavior at the tissue scale. To elucidate the mechanistic structure of CPMs, the model class is integrated into a general multiscale framework. It is shown that the central role of the surface fluctuations, which subsume several cellular and intercellular factors, entails substantial limitations for a CPM's exploitation both as a mechanistic and as a phenomenological model.
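For orientation, a bare-bones sketch of the update rule under discussion: lattice spins are cell identifiers, the energy counts mismatched neighbour pairs, and a Metropolis step copies a neighbouring spin. With no volume constraint, as here, cells can indeed shrink away entirely, consistent with the degeneracy result:

```python
import math
import random

# Bare-bones cellular Potts dynamics (illustrative; far simpler than a
# full CPM). Hamiltonian = number of mismatched neighbour pairs; updates
# copy a neighbour's spin. No volume-constraint term is included, so
# cells can vanish over time.

random.seed(0)
N = 20
lattice = [[random.randint(1, 3) for _ in range(N)] for _ in range(N)]

def neighbours(i, j):
    return [((i + di) % N, (j + dj) % N)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def local_energy(i, j, spin):
    return sum(1 for a, b in neighbours(i, j) if lattice[a][b] != spin)

def metropolis_step(temperature=1.0):
    i, j = random.randrange(N), random.randrange(N)
    a, b = random.choice(neighbours(i, j))
    candidate = lattice[a][b]  # attempt to copy a neighbour's cell id
    dE = local_energy(i, j, candidate) - local_energy(i, j, lattice[i][j])
    if dE <= 0 or random.random() < math.exp(-dE / temperature):
        lattice[i][j] = candidate

for _ in range(5000):
    metropolis_step()
```

Adding volume and surface constraint terms to `dE` recovers the usual CPM; the temperature parameter controls the surface fluctuations whose central role the abstract highlights.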
Wildland Fire Behaviour Case Studies and Fuel Models for Landscape-Scale Fire Modeling
Directory of Open Access Journals (Sweden)
Paul-Antoine Santoni
2011-01-01
Full Text Available This work presents the extension of a physical model for the spreading of surface fire at landscape scale. In previous work, the model was validated at laboratory scale for fire spreading across litters. The model was then modified to consider the structure of actual vegetation and was included in the wildland fire calculation system Forefire that allows converting the two-dimensional model of fire spread to three dimensions, taking into account spatial information. Two wildland fire behavior case studies were elaborated and used as a basis to test the simulator. Both fires were reconstructed, paying attention to the vegetation mapping, fire history, and meteorological data. The local calibration of the simulator required the development of appropriate fuel models for shrubland vegetation (maquis for use with the model of fire spread. This study showed the capabilities of the simulator during the typical drought season characterizing the Mediterranean climate when most wildfires occur.
Directory of Open Access Journals (Sweden)
Dirk Zahn
Full Text Available Fracture mechanisms of an enamel-like hydroxyapatite-collagen composite model are elaborated by means of molecular and coarse-grained dynamics simulation. Using fully atomistic models, we uncover molecular-scale plastic deformation and fracture processes initiated at the organic-inorganic interface. Furthermore, coarse-grained models are developed to investigate fracture patterns at the μm-scale. At the meso-scale, micro-fractures are shown to reduce local stress and thus prevent material failure after loading beyond the elastic limit. On the basis of our multi-scale simulation approach, we provide a molecular scale rationalization of this phenomenon, which seems key to the resilience of hierarchical biominerals, including teeth and bone.
Analysis of the Professional Choice Self-Efficacy Scale Using the Rasch-Andrich Rating Scale Model
Ambiel, Rodolfo A. M.; Noronha, Ana Paula Porto; de Francisco Carvalho, Lucas
2015-01-01
The aim of this research was to analyze the psychometrics properties of the professional choice self-efficacy scale (PCSES), using the Rasch-Andrich rating scale model. The PCSES assesses four factors: self-appraisal, gathering occupational information, practical professional information search and future planning. Participants were 883 Brazilian…
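The Rasch-Andrich rating scale model assigns each response category a probability built from a person parameter, an item difficulty, and a set of thresholds shared across items. A small sketch with invented threshold values (not the PCSES calibration):

```python
import math

# Rasch-Andrich rating scale model: probability of response category
# x in {0..m} given person ability theta, item difficulty delta, and
# shared Andrich thresholds tau_1..tau_m. Threshold values are invented.

def category_probs(theta, delta, taus):
    """Return [P(X=0), ..., P(X=m)]."""
    logit = 0.0
    logits = [logit]               # unnormalised log-weight of category 0
    for tau in taus:
        logit += theta - delta - tau
        logits.append(logit)
    weights = [math.exp(v) for v in logits]
    total = sum(weights)
    return [w / total for w in weights]

# Person slightly above item difficulty, four response categories
probs = category_probs(theta=0.5, delta=0.0, taus=[-1.0, 0.0, 1.0])
```

Fitting the model means estimating `theta`, `delta`, and the `taus` from observed response patterns; the probabilities above are what the estimation maximises.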
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper
We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...
Validity of scale modeling for large deformations in shipping containers
International Nuclear Information System (INIS)
Burian, R.J.; Black, W.E.; Lawrence, A.A.; Balmert, M.E.
1979-01-01
The principal overall objective of this phase of the continuing program for DOE/ECT is to evaluate the validity of applying scaling relationships to accurately assess the response of unprotected model shipping containers to severe impact conditions -- specifically free fall from heights up to 140 ft onto a hard surface in several orientations considered most likely to produce severe damage to the containers. The objective was achieved by studying the following with three sizes of model casks subjected to the various impact conditions: (1) impact rebound response of the containers; (2) structural damage and deformation modes; (3) effect on the containment; (4) changes in shielding effectiveness; (5) approximate free-fall threshold height for various orientations at which excessive damage occurs; (6) the impact orientation(s) that tend to produce the most severe damage; and (7) vulnerable aspects of the casks which should be examined. To meet the objective, the tests were intentionally designed to produce extreme structural damage to the cask models. In addition to the principal objective, this phase of the program had the secondary objectives of establishing a scientific data base for assessing the safety and environmental control provided by DOE nuclear shipping containers under impact conditions, and providing experimental data for verification and correlation with dynamic-structural-analysis computer codes being developed by the Los Alamos Scientific Laboratory for DOE/ECT.
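The scaling relationships being validated are those of replica modeling: with the same materials and the same drop height, model and prototype see equal stresses and strains, while lengths and times scale with the geometric factor and accelerations inversely with it. A sketch with hypothetical numbers:

```python
# Replica-model scaling for drop impacts (same materials, same drop
# height): strain and stress match between scales, lengths and times
# scale by L = model/prototype, accelerations by 1/L. The pulse duration
# and peak deceleration below are hypothetical illustration values.

L = 1 / 4                      # quarter-scale cask model
prototype_time = 0.012         # s, assumed impact pulse duration at full scale
prototype_accel = 150.0        # g, assumed peak deceleration at full scale

model_time = prototype_time * L       # impact pulse shortens in the model
model_accel = prototype_accel / L     # peak g-level rises in the model
```

Strain-rate sensitivity and gravity body forces do not obey this similitude exactly, which is one reason such validation tests are needed.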
Monte Carlo modelling of large scale NORM sources using MCNP.
Wallace, J D
2013-12-01
The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial diameter and thin profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source to detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate a soil cylinder of greater than 20 m diameter and of no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, are needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicate that smaller source sizes may be used to achieve a representative response at shorter source to detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
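The geometric core of the source-size question can be illustrated without MCNP: the unattenuated fluence above a uniform disc source grows only logarithmically with disc radius, so enlarging the modeled soil area eventually buys little. A toy Monte Carlo sketch (pure geometry, with none of the attenuation, scattering, or sky-shine treated in the MCNP model):

```python
import math
import random

# Toy Monte Carlo: unattenuated fluence at height h above the centre of
# a uniform disc source, per unit areal source strength, versus radius.
# Analytic value is (1/4) * ln(1 + R^2 / h^2), i.e. logarithmic growth.

random.seed(1)

def disc_fluence(radius, h=1.0, n=100_000):
    total = 0.0
    for _ in range(n):
        r = radius * math.sqrt(random.random())   # uniform over the disc area
        total += 1.0 / (4.0 * math.pi * (r * r + h * h))
    return (math.pi * radius ** 2) * total / n    # integrate over the disc

f10 = disc_fluence(10.0)   # 10 m radius
f20 = disc_fluence(20.0)   # doubling the radius adds comparatively little
```

The MCNP study's thresholds (20 m diameter, 50 cm depth) additionally reflect self-absorption in the soil, which the toy geometry ignores.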
Performance Analysis, Modeling and Scaling of HPC Applications and Tools
Energy Technology Data Exchange (ETDEWEB)
Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-01-13
Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.
Cloud-Scale Numerical Modeling of the Arctic Boundary Layer
Krueger, Steven K.
1998-01-01
The interactions between sea ice, open ocean, atmospheric radiation, and clouds over the Arctic Ocean exert a strong influence on global climate. Uncertainties in the formulation of interactive air-sea-ice processes in global climate models (GCMs) result in large differences between the Arctic, and global, climates simulated by different models. Arctic stratus clouds are not well-simulated by GCMs, yet exert a strong influence on the surface energy budget of the Arctic. Leads (channels of open water in sea ice) have significant impacts on the large-scale budgets during the Arctic winter, when they contribute about 50 percent of the surface fluxes over the Arctic Ocean, but cover only 1 to 2 percent of its area. Convective plumes generated by wide leads may penetrate the surface inversion and produce condensate that spreads up to 250 km downwind of the lead, and may significantly affect the longwave radiative fluxes at the surface and thereby the sea ice thickness. The effects of leads and boundary layer clouds must be accurately represented in climate models to allow possible feedbacks between them and the sea ice thickness. The FIRE III Arctic boundary layer clouds field program, in conjunction with the SHEBA ice camp and the ARM North Slope of Alaska and Adjacent Arctic Ocean site, will offer an unprecedented opportunity to greatly improve our ability to parameterize the important effects of leads and boundary layer clouds in GCMs.
Scale Adaptive Simulation Model for the Darrieus Wind Turbine
Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.
2016-09-01
Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from literature. The level of agreement between CFD and experimental results is reasonable.
Modeling of large-scale oxy-fuel combustion processes
DEFF Research Database (Denmark)
Yin, Chungen
2012-01-01
A number of studies have investigated implementing oxy-fuel combustion with flue gas recycle in conventional utility boilers as part of carbon capture and storage efforts. However, combustion under oxy-fuel conditions differs significantly from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the non-gray-gas effects in modeling large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...
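The gray versus non-gray distinction above can be sketched numerically. The snippet below evaluates a weighted-sum-of-gray-gases (WSGG) total emissivity next to a single effective gray gas; the weights and absorption coefficients are hypothetical placeholders for illustration, not the oxy-fuel coefficients fitted in the study.

```python
import math

def wsgg_emissivity(weights, kappas, p_atm, path_m):
    """Total emissivity from a weighted sum of gray gases:
    eps = sum_i a_i * (1 - exp(-k_i * p * L)).
    The transparent-gas term carries the leftover weight with k = 0,
    so it contributes nothing to the sum."""
    return sum(a * (1.0 - math.exp(-k * p_atm * path_m))
               for a, k in zip(weights, kappas))

def gray_emissivity(kappa_eff, p_atm, path_m):
    """Single effective gray gas: one absorption coefficient for all paths."""
    return 1.0 - math.exp(-kappa_eff * p_atm * path_m)
```

A single gray coefficient fitted at one path length is exact only there; at other path lengths it mis-predicts the emissivity, which is the mechanism behind the over- and under-predictions of the gray calculation reported above.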
URBAN MORPHOLOGY FOR HOUSTON TO DRIVE MODELS-3/CMAQ AT NEIGHBORHOOD SCALES
Air quality simulation models applied at various horizontal scales require different degrees of treatment in the specifications of the underlying surfaces. As we model neighborhood scales (1 km horizontal grid spacing), the representation of urban morphological structures (e....
Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang
2013-01-01
Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...
A methodology for ecosystem-scale modeling of selenium
Presser, T.S.; Luoma, S.N.
2010-01-01
The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate material, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled; 2) the choice of food web is critical because the particulate-material-to-prey kinetics of bioaccumulation differ widely among invertebrates; 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web; and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure
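The linked water-to-predator chain described above can be sketched as a product of transfer factors. This is a minimal illustration of the model's structure only; the coefficient values in the test are hypothetical, and the actual methodology selects site-specific partitioning coefficients (Kd) and trophic transfer factors (TTFs) for each food web.

```python
def particulate_se(c_water_ug_l, kd_l_kg):
    """Water-to-particulate step: C_part = Kd * C_water (ug/kg)."""
    return kd_l_kg * c_water_ug_l

def invertebrate_se(c_part_ug_kg, ttf_invert):
    """Particulate-to-invertebrate trophic transfer."""
    return ttf_invert * c_part_ug_kg

def fish_se(c_invert_ug_kg, ttf_fish):
    """Invertebrate-to-fish trophic transfer."""
    return ttf_fish * c_invert_ug_kg

def allowed_dissolved_se(fish_criterion_ug_kg, kd_l_kg, ttf_invert, ttf_fish):
    """Invert the chain: translate a fish-tissue Se guideline back to the
    site-specific dissolved concentration that would produce it."""
    return fish_criterion_ug_kg / (kd_l_kg * ttf_invert * ttf_fish)
```

The inverse function is the "translation" step mentioned in the abstract: because Kd and the TTFs differ among ecosystems, the same tissue criterion maps to different safe dissolved concentrations at different sites.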
Large scale solar district heating. Evaluation, modelling and designing
Energy Technology Data Exchange (ETDEWEB)
Heller, A.
2000-07-01
The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated against the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs. The meteorological reference data set, the Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expected variation in the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with the simulation tool, design studies are carried out, ranging from parameter analysis and energy planning for a new settlement to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also revealed the need to develop computer models for the more advanced solar collector designs and especially for the control and operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to find the relevance of the application
Land surface evapotranspiration modelling at the regional scale
Raffelli, Giulia; Ferraris, Stefano; Canone, Davide; Previati, Maurizio; Gisolo, Davide; Provenzale, Antonello
2017-04-01
Climate change has relevant implications for the environment, water resources and human life in general. The observed increase in mean air temperature, in addition to a more frequent occurrence of extreme events such as droughts, may have a severe effect on the hydrological cycle. Besides climate change, land use changes are assumed to be another relevant component of global change in terms of impacts on terrestrial ecosystems: socio-economic changes have led to conversions between meadows and pastures and in most cases to a complete abandonment of grasslands. Water is subject to different physical processes, among which evapotranspiration (ET) is one of the most significant. In fact, ET plays a key role in estimating crop growth, water demand and irrigation water management, so estimating ET can be crucial for water resource planning, irrigation requirements and agricultural production. Potential evapotranspiration (PET) is the amount of evaporation that occurs when a sufficient water source is available. It can be estimated knowing only temperatures (mean, maximum and minimum) and solar radiation. Actual evapotranspiration (AET) is instead the real quantity of water consumed by soil and vegetation; it is obtained as a fraction of PET. The aim of this work was to apply a simplified hydrological model to calculate AET for the province of Turin (Italy) in order to assess the water content and estimate the groundwater recharge at a regional scale. The soil is treated as a bucket (FAO56 model, Allen et al., 1998) made of different layers, which interact with water and vegetation. The water balance is given by precipitation (both rain and snow) and dew as positive inputs, while AET, runoff and drainage represent the rates at which water escapes from the soil. The difference between inputs and outputs is the water stock. Model data inputs are: soil characteristics (percentage of clay, silt, sand, rocks and organic matter); soil depth; the wilting point (i.e. the
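The bucket balance described above can be sketched for a single layer and a daily time step. The AET-as-fraction-of-PET rule and the parameter names below are illustrative assumptions, not the multi-layer FAO56 calibration used for the Turin case.

```python
# Minimal single-layer daily bucket sketch: inputs (precipitation + dew)
# minus outputs (AET, drainage/runoff), with AET scaled from PET by the
# relative water availability between wilting point and capacity.

def bucket_step(stock_mm, precip_mm, dew_mm, pet_mm, capacity_mm, wilting_mm):
    water = stock_mm + precip_mm + dew_mm
    # AET is a fraction of PET, limited by water available above wilting point
    avail = max(water - wilting_mm, 0.0)
    frac = min(avail / (capacity_mm - wilting_mm), 1.0)
    aet = min(pet_mm * frac, avail)
    water -= aet
    # Excess above the bucket capacity leaves as runoff/drainage
    drainage = max(water - capacity_mm, 0.0)
    return water - drainage, aet, drainage
```

By construction the step conserves mass: new stock plus AET plus drainage equals the old stock plus the day's inputs, which is the balance stated in the abstract.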
Scale Modelling of Nocturnal Cooling in Urban Parks
Spronken-Smith, R. A.; Oke, T. R.
Scale modelling is used to determine the relative contribution of heat transfer processes to the nocturnal cooling of urban parks and the characteristic temporal and spatial variation of surface temperature. Validation is achieved using a hardware model-to-numerical model-to-field observation chain of comparisons. For the calm case, modelling shows that urban-park differences of sky view factor (ψs) and thermal admittance (μ) are the relevant properties governing the park cool island (PCI) effect. Reduction in sky view factor by buildings and trees decreases the drain of longwave radiation from the surface to the sky. Thus park areas near the perimeter, where there may be a line of buildings or trees, or even sites within a park containing tree clumps or individual trees, generally cool less than open areas. The edge effect applies within distances of about 2.2 to 3.5 times the height of the border obstruction, i.e., to have any part of the park cooling at the maximum rate a square park must be at least twice these dimensions in width. Although the central areas of parks larger than this will experience greater cooling, they will accumulate a larger volume of cold air, which may make it possible for them to initiate a thermal circulation and extend the influence of the park into the surrounding city. Given real-world values of ψs and μ, it seems likely that radiation and conduction play almost equal roles in nocturnal PCI development. Evaporation is not a significant cooling mechanism in the nocturnal calm case, but by day it is probably critical in establishing a PCI by sunset. It is likely that conditions that favour PCI by day (tree shade, soil wetness) retard PCI growth at night. The present work, which only deals with PCI growth, cannot predict which type of park will be coolest at night. Complete specification of nocturnal PCI magnitude requires knowledge of the PCI at sunset, and this depends on daytime energetics.
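The edge-effect result above implies a simple sizing rule: the border influence extends roughly 2.2 to 3.5 times the height of the perimeter obstruction, so a square park must be at least twice that distance wide for any part of it to cool at the maximum rate. A back-of-envelope helper (the factor default is simply the upper bound of the reported range):

```python
def min_park_width(border_height_m, factor=3.5):
    """Minimum width (m) of a square park for some interior area to cool at
    the maximum rate, given the height of the perimeter obstruction and the
    edge-effect multiplier (2.2-3.5 in the scale-model study)."""
    return 2.0 * factor * border_height_m
```

For a 10 m tree line, the conservative estimate is a 70 m minimum width, and 44 m at the lower end of the range.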
International Nuclear Information System (INIS)
Clemmer, R.G.; Land, R.H.; Maroni, V.A.; Mintz, J.M.
1978-01-01
Although some experience has been gained in the design and construction of 0.5 to 5 m^3/s air-detritiation systems, little information is available on the performance of these systems under realistic conditions. Recently completed studies at ANL have attempted to provide some perspective on this subject. A time-dependent computer model was developed to study the effects of various reaction and soaking mechanisms that could occur in a typically-sized fusion reactor building (approximately 10^5 m^3) following a range of tritium releases (2 to 200 g). In parallel with the computer study, a small (approximately 50 liter) test chamber was set up to investigate cleanup characteristics under conditions which could also be simulated with the computer code. Whereas results of computer analyses indicated that only approximately 10^-3 percent of the tritium released to an ambient enclosure should be converted to tritiated water, the bench-scale experiments gave evidence of conversions to water greater than 1%. Furthermore, although the amounts (both calculated and observed) of soaked-in tritium are usually only a very small fraction of the total tritium release, the soaked tritium is significant in that its continuous return to the enclosure extends the cleanup time beyond the value predicted in the absence of any soaking mechanisms.
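The soak-in effect described above can be sketched with a well-mixed enclosure being cleaned at flow Q while a small wall-adsorbed inventory desorbs back slowly. All parameter values below are illustrative assumptions, not the ANL model's mechanisms or rates.

```python
# Sketch: well-mixed enclosure of volume V cleaned at flow Q, with a
# fraction of the initial inventory "soaked" into surfaces and returning
# by first-order desorption. Forward-Euler integration in time.

def cleanup_time(volume_m3, flow_m3_s, c0, target, soak_frac=0.0,
                 desorb_rate=1e-5, dt=60.0):
    """Time (s) for the airborne concentration to fall below `target`.
    soak_frac: fraction of the initial inventory held on walls,
    desorb_rate: first-order return rate (1/s) of the soaked inventory."""
    c = c0 * (1.0 - soak_frac)       # airborne part
    soaked = c0 * soak_frac          # wall-held part, same units
    t = 0.0
    while c > target:
        release = soaked * desorb_rate * dt
        soaked -= release
        c += release - (flow_m3_s / volume_m3) * c * dt
        t += dt
    return t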
Development and testing of watershed-scale models for poorly drained soils
Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya
2005-01-01
Watershed-scale hydrology and water quality models were used to evaluate the cumulative impacts of land use and management practices on downstream hydrology and nitrogen loading of poorly drained watersheds. Field-scale hydrology and nutrient dynamics are predicted by DRAINMOD in both models. In the first model (DRAINMOD-DUFLOW), field-scale predictions are coupled...
Evaluation of drought propagation in an ensemble mean of large-scale hydrological models
Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.
2012-01-01
Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological
Energy Technology Data Exchange (ETDEWEB)
Lai, W.; McCauley, E.W.
1978-01-04
Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this.
International Nuclear Information System (INIS)
Lai, W.; McCauley, E.W.
1978-01-01
Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this.
Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD
Agostinelli, Giulia; Baglietto, Emilio
2017-11-01
The development of first-principle-based constitutive equations for the Eulerian-Eulerian CFD modeling of annular flow is a major priority to extend the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film, and their deposition. Here we focus first on deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. Although many integral correlations for deposition have been proposed for lumped-parameter methods, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, thereby negating the purpose of applying CFD. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.
Reconciliation of high energy scale models of inflation with Planck
International Nuclear Information System (INIS)
Ashoorioon, Amjad; Dimopoulos, Konstantinos; Sheikh-Jabbari, M.M.; Shiu, Gary
2014-01-01
The inflationary cosmology paradigm is very successful in explaining the CMB anisotropy to the percent level. Besides the dependence on the inflationary model, the power spectra, spectral tilt and non-Gaussianity of the CMB temperature fluctuations also depend on the initial state of inflation. Here, we examine to what extent these observables are affected by our ignorance of the initial condition for inflationary perturbations, due to unknown new physics at a high scale M. For initial states that satisfy constraints from backreaction, we find that the amplitude of the power spectra could still be significantly altered, while the modification in the bispectrum remains small. For such initial states, M has an upper bound of a few tens of H, with H being the Hubble parameter during inflation. We show that for M ∼ 20H, such initial states always (substantially) suppress the tensor-to-scalar ratio. In particular we show that such a choice of initial conditions can satisfactorily reconcile the simple ½m²φ² chaotic model with the Planck data [1-3]
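For context, the tension that the modified initial states relieve follows from the standard (Bunch-Davies) slow-roll predictions of the ½m²φ² model; with N e-folds of observable inflation,

```latex
\epsilon = \eta = \frac{1}{2N}, \qquad
n_s \simeq 1 - 6\epsilon + 2\eta = 1 - \frac{2}{N}, \qquad
r = 16\epsilon \simeq \frac{8}{N},
```

so N = 50-60 gives r ≈ 0.13-0.16, above the Planck bound on the tensor-to-scalar ratio, which is why an initial-state suppression of r can reconcile the model with the data.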
ESCOMPTE 2001: multi-scale modelling and experimental validation
Cousin, F.; Tulet, P.; Rosset, R.
2003-04-01
ESCOMPTE is a European pollution field experiment located in the Marseille/Fos-Berre area in the summer of 2001. This Mediterranean area, with frequent pollution peaks, is characterized by a complex topography subject to sea breeze regimes, together with intense localized urban, industrial and biogenic sources. Four POIs (intensive observation periods) have been selected, the most significant being POI2a/b, a 6-day pollution episode extensively documented for dynamics, radiation, gas phase and aerosols, with surface measurements (including measurements at sea in the Gulf of Genoa and on board instrumented ferries between Marseille and Corsica), seven aircraft, lidar, radar and constant-level flight balloon soundings. The two-way mesoscale model MESO-NH-C (MNH-C), with horizontal resolutions of 9 and 3 km and high vertical resolution (up to 40 levels in the first 2 km), embedded in the global CTM MOCAGE, has been run for all POIs, with a focus here on POI2b (June 24-27, 2001), a typical high-pollution episode. The multi-scale modelling system MNH-C + MOCAGE allows simulation of local and regional pollution issuing from emission sources in the Marseille/Fos-Berre area as well as from remote sources (e.g. the Po Valley and/or western Mediterranean sources) and their associated transboundary pollution fluxes. Detailed dynamical, chemical and aerosol simulations (both modal and sectional spectra with organics and inorganics) generally compare favourably with surface (continental and shipborne), lidar and along-flight aircraft measurements.
Numerically modelling the large scale coronal magnetic field
Panja, Mayukh; Nandi, Dibyendu
2016-07-01
The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With our present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magneto-frictional model, which is relatively simpler and computationally more economical. We have developed a magneto-frictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions in response to photospheric motions.
ADAPTIVE TEXTURE SYNTHESIS FOR LARGE SCALE CITY MODELING
Directory of Open Access Journals (Sweden)
G. Despine
2015-02-01
Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours; then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We experimented with this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.
Adaptive Texture Synthesis for Large Scale City Modeling
Despine, G.; Colleu, T.
2015-02-01
Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours; then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We experimented with this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.
Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan; Briggs, Martin A.; Day-Lewis, Frederick D.
2015-01-01
Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results from batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.
SMR Re-Scaling and Modeling for Load Following Studies
Energy Technology Data Exchange (ETDEWEB)
Hoover, K.; Wu, Q.; Bragg-Sitton, S.
2016-11-01
This study investigates the creation of a new set of scaling parameters for the Oregon State University Multi-Application Small Light Water Reactor (MASLWR) scaled thermal hydraulic test facility. As part of a study being undertaken by Idaho National Laboratory involving nuclear reactor load-following characteristics, full-power operations need to be simulated, and therefore properly scaled. Presented here are the scaling analysis and plans for RELAP5-3D simulation.
With Scale in Mind: A Continuous Improvement Model for Implementation
Redding, Christopher; Cannata, Marisa; Taylor Haynes, Katherine
2017-01-01
The conventional approach to scaling up educational reforms considers the development and testing phases to be distinct from the work of implementing at scale. Decades of research suggest that this approach yields inconsistent and often disappointing improvements for schools most in need. More recent scholarship on scaling school improvement…
Meso-scale modeling of irradiated concrete in test reactor
International Nuclear Information System (INIS)
Giorla, A.; Vaitová, M.; Le Pape, Y.; Štemberk, P.
2015-01-01
Highlights: • A meso-scale finite element model for irradiated concrete is developed. • Neutron radiation-induced volumetric expansion is a predominant degradation mode. • Confrontation with expansion and damage obtained from experiments is successful. • Effects of paste shrinkage, creep and ductility are discussed. - Abstract: A numerical model accounting for the effects of neutron irradiation on concrete at the mesoscale is detailed in this paper. Irradiation experiments in a test reactor (Elleuch et al., 1972), i.e., in accelerated conditions, are simulated. Concrete is considered as a two-phase material made of elastic inclusions (aggregate) subjected to thermal and irradiation-induced swelling and embedded in a cementitious matrix subjected to shrinkage and thermal expansion. The role of the hardened cement paste in the post-peak regime (brittle-ductile transition with decreasing loading rate) and creep effects are investigated. Radiation-induced volumetric expansion (RIVE) of the aggregate causes the development and propagation of damage around the aggregate, which further develops into bridging cracks across the hardened cement paste between the individual aggregate particles. The development of damage is aggravated when shrinkage occurs simultaneously with RIVE during the irradiation experiment. The post-irradiation expansion derived from the simulation is well correlated with the experimental data, and the obtained damage levels are fully consistent with previous estimations based on a micromechanical interpretation of the experimental post-irradiation elastic properties (Le Pape et al., 2015). The proposed modeling opens new perspectives for the interpretation of test reactor experiments with regard to the actual operation of light water reactors.
Meso-scale modeling of irradiated concrete in test reactor
Energy Technology Data Exchange (ETDEWEB)
Giorla, A. [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Vaitová, M. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic); Le Pape, Y., E-mail: lepapeym@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Štemberk, P. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic)
2015-12-15
Highlights: • A meso-scale finite element model for irradiated concrete is developed. • Neutron radiation-induced volumetric expansion is a predominant degradation mode. • Confrontation with expansion and damage obtained from experiments is successful. • Effects of paste shrinkage, creep and ductility are discussed. - Abstract: A numerical model accounting for the effects of neutron irradiation on concrete at the mesoscale is detailed in this paper. Irradiation experiments in a test reactor (Elleuch et al., 1972), i.e., in accelerated conditions, are simulated. Concrete is considered as a two-phase material made of elastic inclusions (aggregate) subjected to thermal and irradiation-induced swelling and embedded in a cementitious matrix subjected to shrinkage and thermal expansion. The role of the hardened cement paste in the post-peak regime (brittle-ductile transition with decreasing loading rate) and creep effects are investigated. Radiation-induced volumetric expansion (RIVE) of the aggregate causes the development and propagation of damage around the aggregate, which further develops into bridging cracks across the hardened cement paste between the individual aggregate particles. The development of damage is aggravated when shrinkage occurs simultaneously with RIVE during the irradiation experiment. The post-irradiation expansion derived from the simulation is well correlated with the experimental data, and the obtained damage levels are fully consistent with previous estimations based on a micromechanical interpretation of the experimental post-irradiation elastic properties (Le Pape et al., 2015). The proposed modeling opens new perspectives for the interpretation of test reactor experiments with regard to the actual operation of light water reactors.
Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model
Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko
2015-04-01
One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the grid spacing at which the model is applied has decreased over the past 17 years: from 0.5 to 2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1
Downstream fish passage guide walls: A hydraulic scale model analysis
Mulligan, Kevin; Towler, Brett; Haro, Alexander J.; Ahlfeld, David P.
2018-01-01
Partial-depth guide walls are used to improve passage efficiency and reduce the delay of out-migrating anadromous fish species by guiding fish to a bypass route (i.e. weir, pipe, sluice gate) that circumvents the turbine intakes, where survival is usually lower. Evaluation and monitoring studies, however, indicate a high propensity for some fish to pass underneath, rather than along, the guide walls, compromising their effectiveness. In the present study we evaluated a range of guide wall structures to identify where, and whether, the flow field shifts from sweeping (i.e. flow direction primarily along the wall and towards the bypass) to downward-dominant. Many migratory fish species, particularly juveniles, are known to drift with the flow and/or exhibit rheotactic behaviour during their migration. When these behaviours are present, fish follow the path of the flow field. Hence, maintaining a strong sweeping velocity in relation to the downward velocity along a guide wall is essential to successful fish guidance. Nine experiments were conducted to measure the three-dimensional velocity components upstream of a scale model guide wall set at a wide range of depths and angles to the flow. Results demonstrated how each guide wall configuration affected the three-dimensional velocity components, and hence the downward and sweeping velocity, along the full length of the guide wall. In general, the velocities produced in the scale model were sweeping-dominant near the water surface and either downward-dominant or close to the transitional depth near the bottom of the guide wall. The primary exception to this shift from sweeping to downward flow was for the minimum guide wall angle tested in this study (15°). At 15° the flow pattern was fully sweeping-dominant for every cross-section, indicating that a guide wall with a relatively small angle may be more likely to produce conditions favorable to efficient guidance. A critical next step is to evaluate the behaviour of migratory fish as
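The sweeping-versus-downward comparison above amounts to a pointwise dominance test on the measured velocity components. The sketch below assumes a particular axis convention (streamwise u, lateral v, vertical w, negative w downward) and wall-angle definition; these are illustrative assumptions, not the instrumentation convention of the study.

```python
import math

def sweeping_dominant(u, v, w, wall_angle_deg):
    """True if the along-wall (sweeping) speed exceeds the downward speed
    at a sampling point.  u: streamwise, v: lateral, w: vertical
    (negative = downward); wall_angle_deg: guide wall angle to the flow."""
    a = math.radians(wall_angle_deg)
    sweeping = abs(u * math.cos(a) + v * math.sin(a))  # projection along wall
    downward = max(-w, 0.0)                            # only downward counts
    return sweeping > downward
```

Classifying each measured cross-section this way reproduces the kind of sweeping-dominant / downward-dominant maps the experiments report, and makes the small-angle result plausible: at 15° the along-wall projection of the approach flow stays large relative to any downward component.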
International Nuclear Information System (INIS)
Ohyama, Takuya; Saegusa, Hiromitsu; Onoe, Hironori
2005-05-01
Japan Nuclear Cycle Development Institute has been conducting a wide range of geoscientific research in order to build a foundation for multidisciplinary studies of the deep geological environment as a basis of research and development for geological disposal of nuclear wastes. Ongoing geoscientific research programs include the Regional Hydrogeological Study (RHS) project and Mizunami Underground Research Laboratory (MIU) project in the Tono region, Gifu Prefecture. The main goal of these projects is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment at several spatial scales. The RHS project is a local scale study for understanding the groundwater flow system from the recharge area to the discharge area. The surface-based Investigation Phase of the MIU project is a site scale study for understanding the groundwater flow system immediately surrounding the MIU construction site. The MIU project is being conducted using a multiphase, iterative approach. In this study, the hydrogeological modeling and groundwater flow analysis of the local scale were carried out in order to set boundary conditions of the site scale model based on the data obtained from surface-based investigations in Step 1 in site scale of the MIU project. As a result of the study, head distribution to set boundary conditions for groundwater flow analysis on the site scale model could be obtained. (author)
Anomalous Scaling Behaviors in a Rice-Pile Model with Two Different Driving Mechanisms
International Nuclear Information System (INIS)
Zhang Duanming; Sun Hongzhang; Li Zhihua; Pan Guijun; Yu Boming; Li Rui; Yin Yanping
2005-01-01
The moment analysis is applied to perform large scale simulations of the rice-pile model. We find that this model shows different scaling behavior depending on the driving mechanism used. With the noisy driving, the rice-pile model violates the finite-size scaling hypothesis, whereas, with fixed driving, it shows well defined avalanche exponents and displays good finite size scaling behavior for the avalanche size and time duration distributions.
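As an illustration of the driving-mechanism distinction discussed above, here is a minimal 1-D Oslo-style rice-pile sketch; the paper's actual model, dimensions, and toppling rule may differ. "Fixed" driving always adds a grain at the boundary site, "noisy" driving adds it at a random site, and the avalanche size is the number of topplings triggered by each added grain.

```python
import random

def oslo_avalanches(L=16, grains=2000, driving="fixed", seed=1):
    """1-D Oslo rice-pile in slope variables z[i], with random critical
    slopes zc[i] in {1, 2}.  Returns the avalanche size per grain."""
    rng = random.Random(seed)
    z = [0] * L                                   # local slopes
    zc = [rng.choice((1, 2)) for _ in range(L)]   # critical slopes
    sizes = []
    for _ in range(grains):
        site = 0 if driving == "fixed" else rng.randrange(L)
        z[site] += 1                 # dropping a grain raises the local slope
        if site > 0:
            z[site - 1] -= 1         # ...and lowers the slope just upstream
        s = 0
        unstable = True
        while unstable:              # relax until every site is stable
            unstable = False
            for i in range(L):
                if z[i] > zc[i]:
                    z[i] -= 2
                    if i > 0:
                        z[i - 1] += 1
                    if i < L - 1:
                        z[i + 1] += 1
                    else:
                        z[i] += 1    # open boundary: grain leaves the pile
                    zc[i] = rng.choice((1, 2))   # new random threshold
                    s += 1
                    unstable = True
        sizes.append(s)
    return sizes
```

The avalanche-size distributions collected this way are the raw material for the moment analysis; the paper's result is that their scaling depends on which of the two driving rules is used.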
Characteristic length scale of input data in distributed models: implications for modeling grid size
Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.
2000-01-01
The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The correlation between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) reproduction of the observed radiometric surface temperature.
Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.
2000-01-01
The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The correlation between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) reproduction of the observed radiometric surface temperature.
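The two diagnostics used in the study above, the semivariogram and the characteristic (integral) length of the autocorrelation, can be sketched for a regularly spaced 1-D transect. This is an illustrative implementation, not the authors' code; the truncation of the autocorrelation sum at its first non-positive lag is one common convention.

```python
def semivariogram(values, max_lag):
    """Empirical semivariance gamma(h) = 0.5*mean((z(x+h) - z(x))^2)
    for a regularly spaced 1-D transect, for lags h = 1..max_lag."""
    gamma = []
    for h in range(1, max_lag + 1):
        diffs = [(values[i + h] - values[i]) ** 2
                 for i in range(len(values) - h)]
        gamma.append(0.5 * sum(diffs) / len(diffs))
    return gamma

def characteristic_length(values, spacing=1.0):
    """Integral (characteristic) length: sum of the autocorrelation
    from lag 0 down to its first non-positive value, times the
    sample spacing."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    var = sum(d * d for d in dev)
    total = 0.0
    for h in range(n):
        r = sum(dev[i] * dev[i + h] for i in range(n - h)) / var
        if r <= 0:
            break
        total += r
    return total * spacing
```

For a linear ramp the semivariance grows as 0.5·h², and for uncorrelated data the characteristic length collapses to roughly one sample spacing, matching the intuition that the 15 m result above reflects real spatial structure in the inputs.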
International Nuclear Information System (INIS)
Huerta, M.
1981-06-01
This report describes the mathematical analysis, the physical scale modeling, and a full-scale crash test of a railcar spent-nuclear-fuel shipping system. The mathematical analysis utilized a lumped-parameter model to predict the structural response of the railcar and the shipping cask. The physical scale modeling analysis consisted of two crash tests that used 1/8-scale models to assess railcar and shipping cask damage. The full-scale crash test, conducted with retired railcar equipment, was carefully monitored with onboard instrumentation and high-speed photography. Results of the mathematical and scale modeling analyses are compared with the full-scale test. 29 figures
Dynamic subgrid scale model of large eddy simulation of cross bundle flows
International Nuclear Information System (INIS)
Hassan, Y.A.; Barsamian, H.R.
1996-01-01
The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is that it requires no input model coefficient: the coefficient is evaluated dynamically at each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization.
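For context, the base Smagorinsky model referred to above computes an eddy viscosity from the resolved strain rate; a minimal 2-D periodic-grid sketch follows. Here the coefficient `cs` is a fixed input (the value 0.17 is a typical textbook choice, an assumption), whereas the dynamic model of Germano et al. would instead evaluate it per node from the Germano identity.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*dx)^2 * |S| on a 2-D
    periodic grid, with |S| = sqrt(2 S_ij S_ij) built from central
    differences of the resolved velocity (u, v)."""
    dudx = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)
    dudy = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)
    dvdx = (np.roll(v, -1, 1) - np.roll(v, 1, 1)) / (2 * dx)
    dvdy = (np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
    s11, s22 = dudx, dvdy                 # normal strain components
    s12 = 0.5 * (dudy + dvdx)             # shear strain component
    s_mag = np.sqrt(2 * (s11**2 + s22**2 + 2 * s12**2))
    return (cs * dx) ** 2 * s_mag
```

For a uniform shear du/dy = 1 the interior eddy viscosity reduces to (cs·dx)², which is a convenient hand check on the implementation.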
A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media
Chen, Huangxin; Sun, Shuyu
2016-01-01
scale, and the effect of fractures on each coarse scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system are resolved
Multi-scale habitat selection modeling: A review and outlook
Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman
2016-01-01
Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.
Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model
Energy Technology Data Exchange (ETDEWEB)
T. Miller
2004-11-15
The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the ''Saturated Zone Site-Scale Flow Model'' (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale
Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model
International Nuclear Information System (INIS)
Miller, T.
2004-01-01
The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the ''Saturated Zone Site-Scale Flow Model'' (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale SZ flow model, the HFM
Multi-scale modelling of fatigue microcrack initiation
International Nuclear Information System (INIS)
Liu, Jia
2013-01-01
The thesis aims to improve the understanding and simulation of microcrack initiation induced by thermal fatigue and the induced crack network formation. The polycrystalline simulations allow the prediction of both macroscopic cyclic behavior and mean grain distributions of stress, plastic strain and number of cycles to microcrack initiation. Various aggregate meshes have been used, from the simplest ones using cubic grains up to a real 3D aggregate built thanks to many re-polishing and EBSD measurement sequences (Institut P', Poitiers). Tension-compression, cyclic shear and equi-biaxial loadings, with and without mean strain, have been considered. All the predictions are in qualitative agreement with many experimental observations obtained at various scales. The single crystal simulations allow us to predict the effect of slip localization in thin persistent slip bands (PSBs). Inside PSBs, vacancies are produced and annihilated because of cyclic dislocation interactions and may diffuse towards the surrounding matrix. This induces extrusion growth at the free surface of PSBs. Microcracking is modelled by cohesive zones located along the PSB - matrix interfaces. The predicted extrusion rates and numbers of cycles to microcrack initiation are in fair agreement with numerous experimental data concerning single and polycrystals, copper and 316L(N), under either air or inert environment. (author) [fr
Scale-adaptive surface modeling of vascular structures
Directory of Open Access Journals (Sweden)
Ma Xin
2010-11-01
Background: The effective geometric modeling of vascular structures is crucial for diagnosis, therapy planning and medical education. These applications require a good balance with respect to surface smoothness, surface accuracy, triangle quality and surface size. Methods: Our method first extracts the vascular boundary voxels from the segmentation result, and utilizes these voxels to build a three-dimensional (3D) point cloud whose normal vectors are estimated via covariance analysis. Then a 3D implicit indicator function is computed from the oriented 3D point cloud by solving a Poisson equation. Finally the vessel surface is generated by a proposed adaptive polygonization algorithm for explicit 3D visualization. Results: Experiments carried out on several typical vascular structures demonstrate that the presented method yields a smooth, morphologically correct and topology-preserving two-manifold surface, which is scale-adaptive to the local curvature of the surface. Furthermore, the presented method produces fewer and better-shaped triangles with satisfactory surface quality and accuracy. Conclusions: Compared to other state-of-the-art approaches, our method reaches a good balance in terms of smoothness, accuracy, triangle quality and surface size. The vessel surfaces produced by our method are suitable for applications such as computational fluid dynamics simulations and real-time virtual interventional surgery.
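The covariance-analysis normal estimation mentioned in the Methods can be sketched as follows: the normal at a point is taken as the eigenvector of its neighborhood covariance matrix with the smallest eigenvalue. This is a generic illustration of the technique (with a brute-force neighbor search), not the paper's implementation.

```python
import numpy as np

def estimate_normal(points, idx, k=8):
    """Estimate the surface normal at points[idx] as the eigenvector of
    the k-nearest-neighborhood covariance matrix with the smallest
    eigenvalue (classic covariance/PCA normal estimation).  The sign of
    the normal is arbitrary and would be fixed in a later orientation
    step before solving the Poisson equation."""
    p = np.asarray(points, dtype=float)
    d = np.linalg.norm(p - p[idx], axis=1)      # brute-force distances
    nbrs = p[np.argsort(d)[:k]]                 # k nearest neighbors
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)  # 3x3 covariance matrix
    w, vecs = np.linalg.eigh(cov)               # eigenvalues ascending
    return vecs[:, 0]                           # smallest-variance direction
```

On points sampled from a plane, the recovered normal is the plane normal (up to sign), which is the sanity check usually applied to such estimators.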
Implementation of meso-scale radioactive dispersion model for GPU
Energy Technology Data Exchange (ETDEWEB)
Sunarko [National Nuclear Energy Agency of Indonesia (BATAN), Jakarta (Indonesia). Nuclear Energy Assessment Center; Suud, Zaki [Bandung Institute of Technology (ITB), Bandung (Indonesia). Physics Dept.
2017-05-15
Lagrangian Particle Dispersion Method (LPDM) is applied to model atmospheric dispersion of radioactive material on a meso-scale of a few tens of kilometers for site study purposes. Empirical relationships are used to determine the dispersion coefficient for various atmospheric stabilities. A diagnostic 3-D wind field is solved based on data from one meteorological station using the mass-conservation principle. Particles representing radioactive pollutant are released into the wind field from a point source. Time-integrated air concentration is calculated using a kernel density estimator (KDE) in the lowest layer of the atmosphere. Parallel code is developed for a GTX-660Ti GPU with a total of 1 344 scalar processors using CUDA. A test of a 1-hour release shows that linear speedup is achieved starting at 28 800 particles per hour (pph), reaching about 20× at 144 000 pph. Another test simulating a 6-hour release with 36 000 pph resulted in a speedup of about 60×. Statistical analysis reveals that the resulting grid doses are nearly identical in the CPU and GPU versions of the code.
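The KDE step can be illustrated with a simple 2-D Gaussian kernel that maps particle positions onto a concentration grid. The kernel form, bandwidth choice, and normalization used in the actual code are not specified in the abstract, so those are assumptions here.

```python
import numpy as np

def kde_concentration(xp, yp, grid_x, grid_y, bandwidth, mass=1.0):
    """Time-integrated ground-level concentration from particle
    positions (xp, yp) via a 2-D Gaussian kernel density estimator,
    normalized so the grid integral approximates the released mass."""
    xp = np.asarray(xp, float)
    yp = np.asarray(yp, float)
    gx, gy = np.meshgrid(grid_x, grid_y)
    conc = np.zeros_like(gx, dtype=float)
    norm = mass / (len(xp) * 2 * np.pi * bandwidth ** 2)
    for x, y in zip(xp, yp):
        conc += norm * np.exp(-((gx - x) ** 2 + (gy - y) ** 2)
                              / (2 * bandwidth ** 2))
    return conc
```

Because each kernel integrates to mass/N, the summed grid (times the cell area) conserves the released mass, a property worth checking in any KDE dose calculation; the per-particle loop here is exactly the work the paper offloads to the GPU.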
Reactor scale modeling of multi-walled carbon nanotube growth
International Nuclear Information System (INIS)
Lombardo, Jeffrey J.; Chiu, Wilson K.S.
2011-01-01
As the mechanisms of carbon nanotube (CNT) growth become known, it becomes important to understand how to implement this knowledge in reactor scale models to optimize CNT growth. In past work, we have reported fundamental mechanisms and competing deposition regimes that dictate single-wall carbon nanotube growth. In this study, we further explore the growth of carbon nanotubes with multiple walls. A tube-flow chemical vapor deposition reactor is simulated using the commercial software package COMSOL, considering the growth of single- and multi-walled carbon nanotubes. It was found that the limiting reaction processes for multi-walled carbon nanotubes change at different temperatures than for single-walled carbon nanotubes, and it was shown that the reactions directly governing CNT growth are a limiting process over certain parameter ranges. This work shows that the optimum conditions for CNT growth depend on temperature, chemical concentration, and the number of nanotube walls. Optimal reactor conditions have been identified as defined by (1) a critical inlet methane concentration that results in a hydrogen-abstraction-limited versus hydrocarbon-adsorption-limited reaction kinetic regime, and (2) the activation energy of reaction for a given reactor temperature and inlet methane concentration. Successful optimization of a CNT growth process requires taking all of these variables into account.
Directory of Open Access Journals (Sweden)
R. Barthel
2006-01-01
Model coupling requires a thorough conceptualisation of the coupling strategy, including an exact definition of the individual model domains, the "transboundary" processes and the exchange parameters. It is shown here that in the case of coupling groundwater flow and hydrological models – in particular on the regional scale – it is very important to find a common definition and scale-appropriate process description of groundwater recharge and baseflow (or "groundwater runoff/discharge" in order to achieve a meaningful representation of the processes that link the unsaturated and saturated zones and the river network. As such, integration by means of coupling established disciplinary models is problematic given that in such models, processes are defined from a purpose-oriented, disciplinary perspective and are therefore not necessarily consistent with definitions of the same process in the model concepts of other disciplines. This article contains a general introduction to the requirements and challenges of model coupling in Integrated Water Resources Management including a definition of the most relevant technical terms, a short description of the commonly used approach of model coupling and finally a detailed consideration of the role of groundwater recharge and baseflow in coupling groundwater models with hydrological models. The conclusions summarize the most relevant problems rather than giving practical solutions. This paper aims to point out that working on a large scale in an integrated context requires rethinking traditional disciplinary workflows and encouraging communication between the different disciplines involved. It is worth noting that the aspects discussed here are mainly viewed from a groundwater perspective, which reflects the author's background.
Altmoos, Michael; Henle, Klaus
2010-11-01
Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study for three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference to single habitat variables changed across nested scales. Most environmental variables were only significant for a habitat model on one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant, which cannot be recognized on lower scales. Similar preferences across scales occurred in only 13 out of 79 cases and in 3 out of 79 cases the preference and avoidance for the same variable were even reversed among scales. Second, we developed habitat models by using a logistic regression on every scale and for all combinations of scales and analysed how the quality of habitat models changed with the scales considered. To achieve a sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except for Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog ( Hyla arborea), at least three scales were required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scaled habitat models, we suggest that 2 or 3 scales should be a suitable trade-off, when attempting to define a suitable microscale.
Energy Technology Data Exchange (ETDEWEB)
Freed, Alan D.; Einstein, Daniel R.; Carson, James P.; Jacob, Rick E.
2012-03-01
In the first year of this contractual effort a hypo-elastic constitutive model was developed and shown to have great potential in modeling the elastic response of parenchyma. This model resides at the macroscopic level of the continuum. In this, the second year of our support, an isotropic dodecahedron is employed as an alveolar model. This is a microscopic model for parenchyma. A hopeful outcome is that the linkage between these two scales of modeling will be a source of insight and inspiration that will aid us in the final year's activity: creating a viscoelastic model for parenchyma.
Scaling analysis for a Savannah River reactor scaled model integral system
International Nuclear Information System (INIS)
Boucher, T.J.; Larson, T.K.; McCreery, G.E.; Anderson, J.L.
1990-11-01
The Savannah River Laboratory has requested that the Idaho National Engineering Laboratory perform an analysis to help define, examine, and assess potential concepts for the design of a scaled integral hydraulics test facility representative of the current Savannah River Plant reactor design. In this report the thermal-hydraulic phenomena of importance to reactor safety during the design basis loss-of-coolant accident (based on the knowledge and experience of the authors and the results of the joint INEL/TPG/SRL phenomena identification and ranking effort) were examined and identified. Established scaling methodologies were used to develop potential concepts for integral hydraulic testing facilities. Analysis was conducted to examine the scaling of various phenomena in each of the selected concepts. Results generally support the conclusion that a one-fourth (1/4) linear-scale visual facility capable of operating at pressures up to 350 kPa (51 psia) and temperatures up to 330 K (134 degree F) will scale most hydraulic phenomena reasonably well. However, additional research will be necessary to determine the most appropriate method of simulating several of the reactor components, since the scaling methodology allows for several approaches which may only be assessed via appropriate research. 34 refs., 20 figs., 14 tabs
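The report's specific scaling relations are not listed in the abstract, but for free-surface hydraulic facilities Froude similitude is the usual basis, so a sketch of the model-to-prototype ratios implied by a 1/4 linear scale is illustrative (an assumed similitude choice, not taken from the report):

```python
import math

def froude_scale_ratios(length_ratio):
    """Model-to-prototype ratios under Froude similitude, the common
    basis for scaled free-surface hydraulic test facilities."""
    lam = length_ratio
    return {
        "length": lam,
        "velocity": math.sqrt(lam),    # Fr = V / sqrt(g*L) preserved
        "time": math.sqrt(lam),
        "flow_rate": lam ** 2.5,       # area * velocity
        "pressure_head": lam,
    }
```

For a 1/4 linear scale this gives model velocities and times at half the prototype values and volumetric flow rates at about 3% of the prototype, which is the kind of bookkeeping a scaling analysis of candidate facility concepts must carry through for every phenomenon of interest.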
RESOLVING NEIGHBORHOOD-SCALE AIR TOXICS MODELING: A CASE STUDY IN WILMINGTON, CALIFORNIA
Air quality modeling is useful for characterizing exposures to air pollutants. While models typically provide results on regional scales, there is a need for refined modeling approaches capable of resolving concentrations on the scale of tens of meters, across modeling domains 1...
Updating of a dynamic finite element model from the Hualien scale model reactor building
International Nuclear Information System (INIS)
Billet, L.; Moine, P.; Lebailly, P.
1996-08-01
The forces occurring at the soil-structure interface of a building generally have a large influence on the way the building reacts to an earthquake. One can be tempted to characterise these forces more accurately by updating a model of the structure. However, this procedure requires an updating method suitable for dissipative models, since significant damping can be observed at the soil-structure interface of buildings. Such a method is presented here. It is based on the minimization of a mechanical energy built from the difference between eigendata calculated by the model and eigendata obtained from experimental tests on the real structure. An experimental validation of this method is then proposed on a model of the HUALIEN scale-model reactor building. This scale model, built on the HUALIEN site in TAIWAN, is devoted to the study of soil-structure interaction. The updating concerned the soil impedances, modelled by a layer of springs and viscous dampers attached to the building foundation. A good agreement was found between the eigenmodes and dynamic responses calculated by the updated model and the corresponding experimental data. (authors). 12 refs., 3 figs., 4 tabs
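A toy version of the updating idea, matching a model eigenfrequency to a measured one by adjusting a soil-spring stiffness, can be sketched for a single degree of freedom. The actual method minimizes a mechanical-energy functional over many modes and includes damping; both are omitted here, and the grid-search minimizer is purely illustrative.

```python
import math

def eigenfrequency(k, m):
    """Undamped natural frequency (Hz) of a 1-DOF mass-spring model."""
    return math.sqrt(k / m) / (2 * math.pi)

def update_spring(m, f_measured, k_lo, k_hi, n=10001):
    """Grid-search update of a soil-spring stiffness so the model
    eigenfrequency matches the measured one (squared-error objective)."""
    best_k, best_err = k_lo, float("inf")
    for i in range(n):
        k = k_lo + (k_hi - k_lo) * i / (n - 1)
        err = (eigenfrequency(k, m) - f_measured) ** 2
        if err < best_err:
            best_k, best_err = k, err
    return best_k
```

Feeding the search a frequency generated from a known stiffness recovers that stiffness to within the grid resolution, which is the basic consistency check for any updating scheme before it is applied to measured data.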
A laboratory scale model of abrupt ice-shelf disintegration
Macayeal, D. R.; Boghosian, A.; Styron, D. D.; Burton, J. C.; Amundson, J. M.; Cathles, L. M.; Abbot, D. S.
2010-12-01
An important mode of Earth’s disappearing cryosphere is the abrupt disintegration of ice shelves along the Peninsula of Antarctica. This disintegration process may be triggered by climate change; however, the work needed to produce the spectacular, explosive results witnessed in the Larsen B and Wilkins ice-shelf events of the last decade comes from the large potential energy release associated with iceberg capsize and fragmentation. To gain further insight into the underlying exchanges of energy involved in massed iceberg movements, we have constructed a laboratory-scale model designed to explore the physical and hydrodynamic interactions between icebergs in a confined channel of water. The experimental apparatus consists of a 2-meter water tank that is 30 cm wide. Within the tank, we introduce fresh water and approximately 20-100 rectangular plastic ‘icebergs’ having the appropriate density contrast with water to mimic ice. The blocks are initially deployed in a tight pack, with all blocks arranged in a manner to represent the initial state of an integrated ice shelf or ice tongue. The system is allowed to evolve through time under the driving forces associated with iceberg hydrodynamics. Digitized videography is used to quantify how the system of plastic icebergs evolves between states of quiescence and states of mobilization. Initial experiments show that, after a single ‘agitator’ iceberg begins to capsize, an ‘avalanche’ of capsizing icebergs ensues which drives horizontal expansion of the massed icebergs across the water surface, and which stimulates other icebergs to capsize. A surprise evident early in the experiments is that the kinetic energy of the expanding mass of icebergs is only a small fraction of the net potential energy released by the rearrangement of mass via capsize. Approximately 85-90% of the energy released by the system goes into water motion modes, including a pervasive, easily observed seiche mode of the tank
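The potential energy released by capsize of a rectangular berg can be estimated hydrostatically. The closed form below follows from comparing the total gravitational energy of the block-plus-displaced-water system in the two floating orientations; it is a standard estimate from the iceberg-capsize literature, stated here as an assumption rather than taken from the abstract.

```python
def capsize_energy_release(height, width,
                           rho_ice=917.0, rho_water=1000.0, g=9.81):
    """Gravitational potential energy released per unit length (J/m)
    when a floating rectangular berg of cross-section height x width
    capsizes by 90 degrees:
        dE = 0.5 * rho_i * g * (1 - rho_i/rho_w) * H * W * (H - W)
    Positive only when height > width (the unstable orientation)."""
    return (0.5 * rho_ice * g * (1.0 - rho_ice / rho_water)
            * height * width * (height - width))
```

The small prefactor (1 − ρ_i/ρ_w) ≈ 0.08 already hints at why most of the released energy ends up in water motion rather than berg kinetic energy: the exchange is mediated almost entirely by the displaced water.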
Multi-scale modeling of dispersed gas-liquid two-phase flow
Deen, N.G.; Sint Annaland, van M.; Kuipers, J.A.M.
2004-01-01
In this work the concept of multi-scale modeling is demonstrated. The idea of this approach is to use different levels of modeling, each developed to study phenomena at a certain length scale. Information obtained at the level of small length scales can be used to provide closure information at the
Improving Shade Modelling in a Regional River Temperature Model Using Fine-Scale LIDAR Data
Hannah, D. M.; Loicq, P.; Moatar, F.; Beaufort, A.; Melin, E.; Jullian, Y.
2015-12-01
Air temperature is often considered as a proxy of stream temperature when modelling the distribution areas of aquatic species, because water temperature is not available at a regional scale. To simulate the water temperature at a regional scale (10⁵ km²), a physically-based model using the equilibrium temperature concept and including upstream-downstream propagation of the thermal signal was developed and applied to the entire Loire basin (Beaufort et al., submitted). This model, called T-NET (Temperature-NETwork), is based on a hydrographical network topology. Computations are made hourly on 52,000 reaches averaging 1.7 km in length in the Loire drainage basin. The model gives a median Root Mean Square Error of 1.8°C at an hourly time step on the basis of 128 water temperature stations (2008-2012). In that version of the model, tree shading is modelled by a constant factor proportional to the vegetation cover within 10 meters on each side of the river reaches. According to sensitivity analysis, improving the shade representation would enhance T-NET accuracy, especially for the maximum daily temperatures, which are currently not modelled very well. This study evaluates the most efficient way (accuracy versus computing time) to improve the shade model using 1-m resolution LIDAR data available for a tributary of the Loire River (317 km long, with a drainage area of 8280 km²). Two methods are tested and compared: the first is a spatially explicit computation of the cast shadow for every LIDAR pixel; the second is based on averaged vegetation cover characteristics of buffers and reaches of variable size. Validation of the water temperature model is made against 4 temperature sensors well spread along the stream, as well as two airborne thermal infrared images acquired in summer 2014 and winter 2015 over an 80 km reach. The poster will present the optimal lengthwise and crosswise scales for characterizing the vegetation from LIDAR data.
A numerical model for dynamic crustal-scale fluid flow
Sachau, Till; Bons, Paul; Gomez-Rivas, Enrique; Koehn, Daniel
2015-04-01
Fluid flow in the crust is often envisaged and modeled as continuous, yet minimal flow, which occurs over large geological times. This is a suitable approximation for flow as long as it is solely controlled by the matrix permeability of rocks, which in turn is controlled by viscous compaction of the pore space. However, strong evidence (hydrothermal veins and ore deposits) exists that a significant part of fluid flow in the crust occurs strongly localized in both space and time, controlled by the opening and sealing of hydrofractures. We developed, tested and applied a novel computer code, which considers this dynamic behavior and couples it with steady, Darcian flow controlled by the matrix permeability. In this dual-porosity model, fractures open depending on the fluid pressure relative to the solid pressure. Fractures form when matrix permeability is insufficient to accommodate fluid flow resulting from compaction, decompression (Staude et al. 2009) or metamorphic dehydration reactions (Weisheit et al. 2013). Open fractures can close when the contained fluid either seeps into the matrix or escapes by fracture propagation: mobile hydrofractures (Bons, 2001). In the model, closing and sealing of fractures is controlled by a time-dependent viscous law, which is based on the effective stress and on either Newtonian or non-Newtonian viscosity. Our simulations indicate that the bulk of crustal fluid flow in the middle to lower upper crust is intermittent, highly self-organized, and occurs as mobile hydrofractures. This is due to the low matrix porosity and permeability, combined with a low matrix viscosity and, hence, fast sealing of fractures. Stable fracture networks, generated by fluid overpressure, are restricted to the uppermost crust. Semi-stable fracture networks can develop in an intermediate zone, if a critical overpressure is reached. Flow rates in mobile hydrofractures exceed those in the matrix porosity and fracture networks by orders of magnitude
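The open/seal rule described above can be caricatured in one explicit time step: a fracture opens while fluid overpressure exceeds a tensile threshold, and otherwise seals viscously under the effective stress. The opening law, the Newtonian closure form, and all parameter values below are placeholders for illustration, not the paper's actual model.

```python
def step_fracture_aperture(aperture, p_fluid, p_solid, tensile_strength,
                           viscosity, dt, opening_rate=1e-6):
    """One explicit time step of a toy dual-porosity fracture rule.
    Opens (at a fixed rate) while fluid overpressure exceeds the
    tensile strength; otherwise seals viscously:
        da/dt = -(sigma_eff / viscosity) * a   (Newtonian closure)."""
    overpressure = p_fluid - p_solid
    if overpressure > tensile_strength:
        return aperture + opening_rate * dt
    sigma_eff = max(-overpressure, 0.0)   # compressive effective stress
    return aperture * max(1.0 - (sigma_eff / viscosity) * dt, 0.0)
```

With a low matrix viscosity the closure term dominates quickly, which mirrors the paper's finding that fast sealing at depth forces flow into transient, mobile hydrofractures rather than stable fracture networks.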
Replica scale modelling of long rod tank penetrators
Diederen, A.M.; Hoeneveld, J.C.
2001-01-01
Experiments and simulations have been conducted using scale-size tungsten alloy penetrators at ordnance velocity against an oblique plate array consisting of an inert sandwich and a base armour. The penetrators are made from two types of tungsten alloy with different tensile strengths. Two scale sizes
Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements
Vargas, Magda B.; Counter, Douglas
2011-01-01
Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures, conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: 5% model measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest. Acoustics: 200-2,000 Hz full scale corresponds to 4,000-40,000 Hz model scale. Ignition transient: 0-100 Hz full scale corresponds to 0-2,000 Hz model scale. Environment exposure included weather (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, and different instrument mounts were used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.
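The frequency ranges quoted follow from the inverse relation between geometric scale and frequency; a one-line sketch of that scaling (the function name is mine, not from the test documentation):

```python
def model_frequency(full_scale_hz, scale_factor):
    """In scaled acoustic testing, frequency scales inversely with the
    geometric scale: a 5% (1/20) model shifts frequencies up by a factor
    of 20, so 200-2,000 Hz full scale maps to 4,000-40,000 Hz model scale."""
    return full_scale_hz / scale_factor
```

This is why the model-scale instrumentation had to resolve up to 40 kHz to capture a 2 kHz full-scale acoustic band.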
DRIFT-SCALE COUPLED PROCESSES (DST AND TH SEEPAGE) MODELS
International Nuclear Information System (INIS)
J.T. Birkholzer; S. Mukhopadhyay
2005-01-01
The purpose of this report is to document drift-scale modeling work performed to evaluate the thermal-hydrological (TH) behavior in Yucca Mountain fractured rock close to waste emplacement drifts. The heat generated by the decay of radioactive waste results in rock temperatures elevated from ambient for thousands of years after emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, giving rise to water redistribution and altered flow paths. The predictive simulations described in this report are intended to investigate fluid flow in the vicinity of an emplacement drift for a range of thermal loads. Understanding the TH coupled processes is important for the performance of the repository because the thermally driven water saturation changes affect the potential seepage of water into waste emplacement drifts. Seepage of water is important because if enough water gets into the emplacement drifts and comes into contact with any exposed radionuclides, it may then be possible for the radionuclides to be transported out of the drifts and to the groundwater below the drifts. For above-boiling rock temperatures, vaporization of percolating water in the fractured rock overlying the repository can provide an important barrier capability that greatly reduces (and possibly eliminates) the potential of water seeping into the emplacement drifts. In addition to this thermal process, water is inhibited from entering the drift opening by capillary forces, which occur under both ambient and thermal conditions (capillary barrier). The combined barrier capability of vaporization processes and capillary forces in the near-field rock during the thermal period of the repository is analyzed and discussed in this report
Materials and nanosystems : interdisciplinary computational modeling at multiple scales
International Nuclear Information System (INIS)
Huber, S.E.
2014-01-01
Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale-flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large measures, first, an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers shall be discussed. In this respect, very accurate and hence at the same time computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but yield less accuracy and especially reduced error control at the same time. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science and surface science, is another surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of
Gomez, Rapson; Watson, Shaun D.
2017-01-01
For the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) together, this study examined support for a bifactor model, and also the internal consistency reliability and external validity of the factors in this model. Participants (N = 526) were adults from the general community who completed the SPS and SIAS. Confirmatory factor analysis (CFA) of their ratings indicated good support for the bifactor model. For this model, the loadings for all but six items were higher on the general factor than the specific factors. The three positively worded items had negligible loadings on the general factor. The general factor explained most of the common variance in the SPS and SIAS, and demonstrated good model-based internal consistency reliability (omega hierarchical) and a strong association with fear of negative evaluation and extraversion. The practical implications of the findings for the utilization of the SPS and SIAS, and the theoretical and clinical implications for social anxiety are discussed. PMID:28210232
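Omega hierarchical, the model-based reliability index reported for the general factor, has a standard closed form that can be sketched directly; the loadings below are hypothetical placeholders, not the SPS/SIAS estimates:

```python
def omega_hierarchical(general_loadings, specific_loadings_by_factor, error_variances):
    """Omega-h for a bifactor model: the squared sum of general-factor
    loadings over total score variance (general + specific + error).
    Standard formula; inputs here are illustrative, not fitted values."""
    gen = sum(general_loadings) ** 2
    spec = sum(sum(ls) ** 2 for ls in specific_loadings_by_factor)
    err = sum(error_variances)
    return gen / (gen + spec + err)
```

With strong general-factor loadings and weak specific loadings, omega-h approaches 1, mirroring the abstract's finding that the general factor explains most of the common variance.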
Scale Effects Related to Small Physical Modelling of Overtopping of Rubble Mound Breakwaters
DEFF Research Database (Denmark)
Burcharth, Hans F.; Andersen, Thomas Lykke
2009-01-01
By comparison of overtopping discharges recorded in prototype and in small-scale physical models, it was demonstrated in the EU-CLASH project that small-scale tests significantly underestimate smaller discharges. Deviations in overtopping are due to model and scale effects. These effects are discussed... armour on the upper part of the slope. This effect is believed to be the main reason for the deviations found between overtopping in prototype and small-scale tests.
Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model
Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.
2012-12-01
The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood feeding triatomine insect vectors. Our objectives are: (1) advance knowledge on the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. This funding is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola (New Orleans)) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, multi-educational groups (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects and enables modelers to simulate and visualize topics that would either be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that
Amir, Sahar Z.
2017-06-09
A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter, and the coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions, and orientations. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) reproduces the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope, and direction of the fracture. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions, and the results are shown and discussed. A generalization is then illustrated for any slightly compressible single-phase fluid within fractured porous media, and its results are discussed.
Amir, Sahar Z.; Chen, Huangxin; Sun, Shuyu
2017-01-01
A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter, and the coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions, and orientations. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) reproduces the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope, and direction of the fracture. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions, and the results are shown and discussed. A generalization is then illustrated for any slightly compressible single-phase fluid within fractured porous media, and its results are discussed.
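The vertical/horizontal-fracture flux-exchange expressions the abstract refers to reduce to a geometric shape factor; a common Kazemi-type form for orthogonal fracture spacings is sketched below. This is textbook dual-porosity background for context, not the HEF derivation itself:

```python
def kazemi_shape_factor(lx, ly, lz):
    """Kazemi-type matrix-fracture shape factor sigma (1/m^2) for a matrix
    block with orthogonal fracture spacings lx, ly, lz (m). The per-volume
    matrix-fracture exchange is then q = sigma * (k_m / mu) * (p_m - p_f),
    valid only for the idealized vertical/horizontal fracture geometry."""
    return 4.0 * (1.0 / lx**2 + 1.0 / ly**2 + 1.0 / lz**2)
```

For arbitrarily oriented fractures this single constant no longer suffices, which is exactly the gap the HEF fine-scale analytic solution addresses.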
Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J
2014-01-01
The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.
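The tuneable-resolution idea, one driver accepting fine- or coarse-grained representations of the same mechanism, can be sketched with swappable submodel functions. This is a hypothetical toy, not the authors' implementation; the 5%/10% growth numbers are arbitrary:

```python
def run_model(steps, state, submodel):
    """Driver for a toy multi-scale model with a swappable submodel:
    the same interface accepts either a fine-grained or a coarse-grained
    representation of one mechanism, at the user's discretion."""
    for _ in range(steps):
        state = submodel(state)
    return state

def fine_grained(state):
    # resolve the mechanism as two sequential half-steps of 5% growth
    return state * 1.05 * 1.05

def coarse_grained(state):
    # lumped single-step approximation of the same mechanism (10% growth)
    return state * 1.10
```

The two submodels give slightly different trajectories (1.1025 vs 1.10 per step), illustrating the validation question tuneable resolution raises: when is the coarse graining faithful enough for the question at hand?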
Oxley, Tim; Dore, Anthony J; ApSimon, Helen; Hall, Jane; Kryza, Maciej
2013-11-01
Integrated assessment modelling has evolved to support policy development in relation to air pollutants and greenhouse gases by providing integrated simulation tools able to produce quick and realistic representations of emission scenarios and their environmental impacts without the need to re-run complex atmospheric dispersion models. The UK Integrated Assessment Model (UKIAM) has been developed to investigate strategies for reducing UK emissions by bringing together information on projected UK emissions of SO2, NOx, NH3, PM10 and PM2.5, atmospheric dispersion, criteria for protection of ecosystems, urban air quality and human health, and data on potential abatement measures to reduce emissions, which may subsequently be linked to associated analyses of costs and benefits. We describe the multi-scale model structure ranging from continental to roadside, UK emission sources, atmospheric dispersion of emissions, implementation of abatement measures, integration with European-scale modelling, and environmental impacts. The model generates outputs from a national perspective which are used to evaluate alternative strategies in relation to emissions, deposition patterns, air quality metrics and ecosystem critical load exceedance. We present a selection of scenarios in relation to the 2020 Business-As-Usual projections and identify potential further reductions beyond those currently being planned. © 2013.
International Nuclear Information System (INIS)
Robinson, R.A.; Hadden, J.A.; Basham, S.J.
1978-01-01
Preliminary experimental studies of dynamic impact response of scale models of lead-shielded radioactive material shipping containers are presented. The objective of these studies is to provide DOE/ECT with a data base to allow the prediction of a rational margin of confidence in overviewing and assessing the adequacy of the safety and environmental control provided by these shipping containers. Replica scale modeling techniques were employed to predict full scale response with 1/8, 1/4, and 1/2 scale models of shipping containers that are used in the shipment of spent nuclear fuel and high level wastes. Free fall impact experiments are described for scale models of plain cylindrical stainless steel shells, stainless steel shells filled with lead, and replica scale models of radioactive material shipping containers. Dynamic induced strain and acceleration measurements were obtained at several critical locations on the models. The models were dropped from various heights, attitudes to the impact surface, with and without impact limiters and at uniform temperatures between -40 and 175°C. In addition, thermal expansion and thermal gradient induced strains were measured at -40 and 175°C. The frequency content of the strain signals and the effect of different drop pad compositions and stiffness were examined. Appropriate scale modeling laws were developed and scaling techniques were substantiated for predicting full scale response by comparison of dynamic strain data for 1/8, 1/4, and 1/2 scale models with stainless steel shells and lead shielding
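Replica (same-material) scale modeling implies standard similitude relations that such drop-test programs rely on; a sketch of the usual factors is below. These are the generic replica-scaling rules, not the specific laws developed in the report:

```python
def replica_scale_prediction(model_value, quantity, scale):
    """Map a replica-model measurement to full scale for a geometric scale
    ratio `scale` (e.g. 1/8). With identical materials, strain and stress
    are invariant, lengths and times scale with the geometry, and
    accelerations scale inversely with it (standard similitude)."""
    factors = {
        "strain": 1.0,            # full-scale strain = model strain
        "stress": 1.0,            # stresses likewise carry over directly
        "length": 1.0 / scale,    # full-scale length = model length / scale
        "time": 1.0 / scale,      # impact durations stretch at full scale
        "acceleration": scale,    # full-scale accel = model accel * scale
    }
    return model_value * factors[quantity]
```

This is why measured strains from the 1/8, 1/4, and 1/2 scale models can be compared directly, while accelerations must be rescaled before comparison.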
Regional scale ecological risk assessment: using the relative risk model
National Research Council Canada - National Science Library
Landis, Wayne G
2005-01-01
...) in the performance of regional-scale ecological risk assessments. The initial chapters present the methodology and the critical nature of the interaction between risk assessors and decision makers...
Multi-scale Modeling of Dendritic Alloy Solidification
Dagner, Johannes
2009-01-01
Solidification of metallic melts is one of the most important processes in materials science. The microstructure formed during freezing largely determines the mechanical properties of the final product. Many physical phenomena influence the solidification process and hence the resulting microstructure. One important factor is the influence of melt flow, which may modify heat and species transport over a large range of length and time scales. On the micro-scale, it influences the conce...
A model-based framework for incremental scale-up of wastewater treatment processes
DEFF Research Database (Denmark)
Mauricio Iglesias, Miguel; Sin, Gürkan
Scale-up is traditionally done following specific ratios or rules of thumb, which do not lead to optimal results. We present a generic framework to assist in the scale-up of wastewater treatment processes based on multiscale modelling, multiobjective optimisation and a validation of the model at the new, larger scale. The framework is illustrated by the scale-up of a complete autotrophic nitrogen removal process. The model-based multiobjective scale-up offers a promising improvement compared to rule-of-thumb-based empirical scale-up rules.
Scale effect challenges in urban hydrology highlighted with a distributed hydrological model
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2018-01-01
Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 m down to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, as well as model numerical instabilities and computation-time requirements. The main findings of this paper enable a replacement of traditional methods of model calibration by innovative methods of model resolution alteration based on the spatial data variability and scaling of flows in urban hydrology.
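The fractal analysis of scale dependence in gridded data typically rests on box counting; a minimal sketch of the idea is below. This is my own toy illustration of the technique, not the Multi-Hydro analysis code:

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a set of 2-D points:
    count occupied boxes N(s) at several box sizes s and fit the power law
    log N(s) ~ -D log s by ordinary least squares."""
    logs, logn = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(sizes)
    mean_s = sum(logs) / n
    mean_n = sum(logn) / n
    slope = sum((ls - mean_s) * (ln - mean_n) for ls, ln in zip(logs, logn)) \
        / sum((ls - mean_s) ** 2 for ls in logs)
    return -slope  # the dimension D
```

A space-filling pattern yields D close to 2 and a linear feature D close to 1; in between, D quantifies how the pattern's apparent coverage changes with observation pixel size.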
A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model
Tao, Wei-Kuo
2005-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), (3) a discussion of the Goddard WRF version (its developments and applications), and (4) the characteristics of the four-dimensional cloud data
Validating a continental-scale groundwater diffuse pollution model using regional datasets.
Ouedraogo, Issoufou; Defourny, Pierre; Vanclooster, Marnik
2017-12-11
In this study, we assess the validity of an African-scale groundwater pollution model for nitrate. In a previous study, we identified a statistical continental-scale groundwater pollution model for nitrate, using a pan-African meta-analysis of available nitrate groundwater pollution studies. The model was implemented in both Random Forest (RF) and multiple regression formats. For both approaches, we collected as predictors a comprehensive GIS database of 13 spatial attributes related to land use, soil type, hydrogeology, topography, climatology, region typology, nitrogen fertiliser application rate, and population density. In this paper, we validate the continental-scale model of groundwater contamination using a nitrate measurement dataset from three African countries. We discuss data availability, data quality, and scale issues as challenges in validation. Notwithstanding that the modelling procedure was very successful on the continental-scale dataset (e.g. R² = 0.97 in the RF format using a cross-validation approach), the continental-scale model could not be used without recalibration to predict nitrate pollution at the country scale using regional data. In addition, when recalibrating the model using country-scale datasets, the order of the model's explanatory factors changes. This suggests that the structure and the parameters of a statistical spatially distributed groundwater degradation model for the African continent are strongly scale dependent.
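The fit statistic at the center of this comparison is the coefficient of determination; a plain sketch of how such an R² would be computed against held-out regional data (generic formula, not the study's validation pipeline):

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res/SS_tot. Used to compare a
    model's continental-scale fit against its skill on regional datasets;
    values near 1 indicate good fit, near 0 no better than the mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Note that R² can even go negative on out-of-sample data, which is one quantitative signature of the scale dependence the authors report.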
Ecosystem Demography Model: Scaling Vegetation Dynamics Across South America
National Aeronautics and Space Administration — This model product contains the source code for the Ecosystem Demography Model (ED version 1.0) as well as model input and output data for a portion of South America...
Multiphysics pore-scale model for the rehydration of porous foods
Sman, van der R.G.M.; Vergeldt, F.J.; As, van H.; Dalen, van G.; Voda, A.; Duynhoven, van J.P.M.
2014-01-01
In this paper we present a pore-scale model describing the multiphysics occurring during the rehydration of freeze-dried vegetables. This pore-scale model is part of a multiscale simulation model, which should explain the effect of microstructure and pre-treatments on the rehydration rate.
Directory of Open Access Journals (Sweden)
A. Budishchev
2014-09-01
Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from a low spatial representativeness and a poor temporal resolution. Also, during a chamber-flux measurement the air within a chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on the area-weighted averages of modelled fluxes, and by comparing those to the eddy covariance (EC) flux. This technique is rather inaccurate, as the area of upscaling might be different from the EC tower footprint, therefore introducing a significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r² = 0.7). In contrast, using the area-weighted average method yielded a low correlation (r² = 0.14) with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.
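The two upscaling schemes being contrasted differ only in the weights applied to the plot-scale modelled fluxes; a minimal sketch (illustrative helper functions, with footprint weights assumed to come from a separate footprint model):

```python
def footprint_weighted_flux(fluxes, footprint_weights):
    """Aggregate plot-scale modelled fluxes with EC footprint weights,
    i.e. weight each vegetation class by its contribution to the tower's
    measurement footprint at that time (weights normalised here)."""
    total_w = sum(footprint_weights)
    return sum(f * w for f, w in zip(fluxes, footprint_weights)) / total_w

def area_weighted_flux(fluxes, area_fractions):
    """Conventional upscaling: weight each class flux by its static
    area fraction within some chosen upscaling domain."""
    return sum(f * a for f, a in zip(fluxes, area_fractions))
```

If a high-emitting class dominates the instantaneous footprint but not the map area, the two averages diverge, which is the mismatch the study attributes the poor area-weighted correlation to.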
A multi-scale modeling of surface effect via the modified boundary Cauchy-Born model
Energy Technology Data Exchange (ETDEWEB)
Khoei, A.R., E-mail: arkhoei@sharif.edu; Aramoon, A.
2012-10-01
In this paper, a new multi-scale approach is presented based on the modified boundary Cauchy-Born (MBCB) technique to model the surface effects of nano-structures. The salient point of the MBCB model is the definition of the radial quadrature used in the surface elements, which is an indicator of material behavior. The characteristics of the quadrature are derived by interpolating data from atoms lying in a circular support around the quadrature, in a least-squares sense. The total-Lagrangian formulation is derived for the equivalent continua by employing the Cauchy-Born hypothesis for calculating the strain energy density function of the continua. The numerical results of the proposed method are compared with direct atomistic and finite element simulation results to indicate that the proposed technique provides promising results for modeling surface effects of nano-structures. Highlights: A multi-scale approach is presented to model the surface effects in nano-structures. The total-Lagrangian formulation is derived by employing the Cauchy-Born hypothesis. The radial quadrature is used to model the material behavior in surface elements. The quadrature characteristics are derived using the data at the atomistic level.
A Pareto scale-inflated outlier model and its Bayesian analysis
Scollnik, David P. M.
2016-01-01
This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three wor...
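The data-generating process the model assumes, a Pareto sample contaminated by a small fraction of draws with the same shape but an inflated scale, is easy to simulate by inverse-CDF sampling. This is an illustrative sketch of that mixture, not the paper's Gibbs-sampler code:

```python
import random

def sample_scale_inflated_pareto(n, shape, scale, inflation, p_outlier, seed=0):
    """Draw n values where each observation comes from Pareto(shape, scale),
    except with probability p_outlier it comes from the same-shape Pareto
    with scale multiplied by `inflation` (the outlier component). Uses the
    Pareto inverse CDF x = s * (1 - u)^(-1/shape)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        s = scale * inflation if rng.random() < p_outlier else scale
        u = rng.random()
        out.append(s * (1.0 - u) ** (-1.0 / shape))
    return out
```

A Bayesian analysis would then augment each observation with a latent outlier indicator and, in a Gibbs scheme, alternate between updating those indicators and the mixture's parameters.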
DEFF Research Database (Denmark)
Garcia, Ada V.; Thomsen, Kaj; Stenby, Erling Halfdan
2005-01-01
Pressure parameters are added to the Extended UNIQUAC model presented by Thomsen and Rasmussen (1999). The improved model has been used for correlation and prediction of solid-liquid equilibrium (SLE) of scaling minerals (CaSO4, CaSO4·2H2O, BaSO4 and SrSO4) at temperatures up to 300°C and pressur...
Modeling heat efficiency, flow and scale-up in the corotating disc scraped surface heat exchanger
DEFF Research Database (Denmark)
Friis, Alan; Szabo, Peter; Karlson, Torben
2002-01-01
A comparison of two corotating disc scraped surface heat exchangers (CDHE) of different scale was performed experimentally, and the findings were compared to predictions from a finite element model. We find that the model predicts well the flow pattern of the two CDHEs investigated. The heat transfer performance predicted by the model agrees well with experimental observations for the laboratory-scale CDHE, whereas the overall heat transfer in the scaled-up version was not in equally good agreement. The failure of the model to predict the heat transfer performance in scale-up leads us to identify the key...
Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model
DEFF Research Database (Denmark)
Olivares Hernandez, Roberto
Based on the stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify the metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae the genome-scale model has been constructed...
Uncertainty analysis for a field-scale P loss model
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study we assessed the effect of model input error on predic...
Scale changes in air quality modelling and assessment of associated uncertainties
International Nuclear Information System (INIS)
Korsakissok, Irene
2009-01-01
After an introduction of issues related to a scale change in the field of air quality (existing scales for emissions, transport, turbulence and loss processes, hierarchy of data and models, methods of scale change), the author first presents Gaussian models which have been implemented within the Polyphemus modelling platform. These models are assessed by comparison with experimental observations and with other commonly used Gaussian models. The second part reports the coupling of the puff-based Gaussian model with the Eulerian Polair3D model for the sub-mesh processing of point sources. This coupling is assessed at the continental scale for a passive tracer, and at the regional scale for photochemistry. Different statistical methods are assessed
Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation
DEFF Research Database (Denmark)
Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.
2015-01-01
This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale... of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.
Modelling the impact of implementing Water Sensitive Urban Design at a catchment scale
DEFF Research Database (Denmark)
Locatelli, Luca; Gabriel, S.; Bockhorn, Britta
Stormwater management using Water Sensitive Urban Design (WSUD) is expected to be part of future drainage systems. This project aimed to develop a set of hydraulic models of the Harrestrup Å catchment (close to Copenhagen) in order to demonstrate the importance of modeling WSUDs at different scales, ranging from models of an individual soakaway up to models of a large urban catchment. The models were developed in Mike Urban with a new integrated soakaway model. A small-scale individual soakaway model was used to determine appropriate initial conditions for soakaway models. This model was applied...
International Nuclear Information System (INIS)
Vold, Erik L.; Scannapieco, Tony J.
2007-01-01
A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time-dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved-scale simulations with the volume fractions (cf, af mix). In unresolved-scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i(u_i − u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved-scale simulations and for unresolved-scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.
Relevant data about subsurface water flow and solute transport at relatively large scales that are of interest to the public are inherently laborious and in most cases simply impossible to obtain. Upscaling in which fine-scale models and data are used to predict changes at the coarser scales is the...
Field scale heterogeneity of redox conditions in till-upscaling to a catchment nitrate model
DEFF Research Database (Denmark)
Hansen, J.R.; Erntsen, V.; Refsgaard, J.C.
2008-01-01
Point scale studies in different settings of glacial geology show a large local variation of redox conditions. There is a need to develop an upscaling methodology for catchment scale models. This paper describes a study of field-scale heterogeneity of redox-interfaces in a till aquitard within an...
Calibration of the model SMART2 in the Netherlands, using data available at the European scale
Mol-Dijkstra, J.P.; Kros, J.
1999-01-01
The soil acidification model SMART2 has been developed for application on a national to a continental scale. In this study SMART2 is applied at the European scale, which means that SMART2 was applied to the Netherlands with data that are available at the European scale. In order to calibrate SMART2,
Identification of low order models for large scale processes
Wattamwar, S.K.
2010-01-01
Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physiochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolutions. The increasing demand
Using Genome-scale Models to Predict Biological Capabilities
DEFF Research Database (Denmark)
O’Brien, Edward J.; Monk, Jonathan M.; Palsson, Bernhard O.
2015-01-01
Constraint-based reconstruction and analysis (COBRA) methods at the genome scale have been under development since the first whole-genome sequences appeared in the mid-1990s. A few years ago, this approach began to demonstrate the ability to predict a range of cellular functions, including cellul...
Modeling and Simulation in Tribology Across Scales : an Overview
Vakis, Antonis I.; Yastrebov, V.A.; Scheibert, J.; Nicola, L; Dini, D.; Minfray, C.; Almqvist, A.; Paggi, M.; Lee, S.; Limbert, G.; Molinari, J.F.; Anciaux, G.; Echeverri Restrepo, S.; Papangelo, A.; Cammarata, A.; Nicolini, P.; Aghababaei, R.; Putignano, C.; Stupkiewicz, S.; Lengiewicz, J.; Costagliola, G.; Bosia, F.; Guarino, R.; Pugno, N.M.; Carbone, G.; Müser, Martin H.; Ciavarella, M.
2018-01-01
This review summarizes recent advances in the area of tribology based on the outcome of a Lorentz Center workshop surveying various physical, chemical and mechanical phenomena across scales. Among the main themes discussed were those of rough surface representations, the breakdown of continuum
Multi-scale modeling strategies in materials science
Indian Academy of Sciences (India)
The problem of prediction of finite temperature properties of materials poses great computational challenges. The computational treatment of the multitude of length and time scales involved in determining macroscopic properties has been attempted by several workers with varying degrees of success. This paper will review ...
A feasibility and implementation model of small-scale hydropower ...
African Journals Online (AJOL)
Large numbers of households and communities will not be connected to the national electricity grid for the foreseeable future due to high cost of transmission and distribution systems to remote communities and the relatively low electricity demand within rural communities. Small-scale hydropower used to play a very ...
Metric-Asaurus: Conceptualizing Scale Using Dinosaur Models
Gloyna, Lisa; West, Sandra; Martin, Patti; Browning, Sandra
2010-01-01
For middle school students who have seen only pictures of dinosaurs in books, in the movies, or on the internet, trying to comprehend the size of these gargantuan animals can be difficult. This lesson provides a way for students to visualize changing scale through studying extinct organisms and to gain a deeper understanding of the history of the…
Directory of Open Access Journals (Sweden)
Lorenzo L. Pesce
2013-01-01
Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim
2013-01-01
Digital terrain model generalization incorporating scale, semantic and cognitive constraints
Partsinevelos, Panagiotis; Papadogiorgaki, Maria
2014-05-01
Cartographic generalization is a well-known process accommodating spatial data compression, visualization and comprehension under various scales. In the last few years, there have been several international attempts to construct tangible GIS systems, forming real 3D surfaces using a vast number of mechanical parts along a matrix formation (i.e., bars, pistons, vacuums). Usually, moving bars upon a structured grid push a stretching membrane, resulting in a smooth visualization for a given surface. Most of these attempts suffer in cost, accuracy, resolution and/or speed. Under this perspective, the present study proposes a surface generalization process that incorporates intrinsic constraints of tangible GIS systems, including robotic-motor movement and surface stretching limitations. The main objective is to provide optimized visualizations of 3D digital terrain models with minimum loss of information, that is, to minimize the number of pixels in a raster dataset used to define a DTM while preserving the surface information. This neighborhood type of pixel relations adheres to the basics of Self-Organizing Map (SOM) artificial neural networks, which are often used for information abstraction since they are indicative of intrinsic statistical features contained in the input patterns and provide concise and characteristic representations. Nevertheless, SOM remains a black-box procedure, unable to cope with possible particularities and semantics of the application at hand. For example, in coastal monitoring applications the near-coast areas, surrounding mountains and lakes are more important than other features, and generalization should be "biased" (stratified) to fulfill this requirement. Moreover, according to the application objectives, we extend the SOM algorithm to incorporate special types of information generalization by differentiating the underlying strategy based on topologic information of the objects included in the application. The final
Development of a three-dimensional local scale atmospheric model with turbulence closure model
International Nuclear Information System (INIS)
Yamazawa, Hiromi
1989-05-01
Through a study to improve SPEEDI's capability, a three-dimensional numerical atmospheric model PHYSIC (Prognostic HYdroStatic model Including turbulence Closure model) was developed to apply it to transport and diffusion evaluation over complex terrain. A detailed description of the atmospheric model is given. The model consists of five prognostic equations: the momentum equations of the horizontal components with the so-called Boussinesq and hydrostatic assumptions, and the conservation equations of heat, turbulence kinetic energy and turbulence length scale. The coordinate system used is the terrain-following z* coordinate system, which allows the existence of complex terrain. Detailed formulations of the turbulence closure calculation, the surface-layer process, the ground-surface heat budget, and the atmospheric and solar radiation are also presented. The time integration method used in this model is the Alternating Direction Implicit (A.D.I.) method with a vertically and horizontally staggered grid system. The memory storage needed to execute this model with 31 × 31 × 16 grid points, five soil layers and double-precision variables is about 5.3 MBytes. The CPU time is about 2.2 × 10⁻⁵ s per step per grid point with a vector processor FACOM VP-100. (author)
A novel evolving scale-free model with tunable attractiveness
International Nuclear Information System (INIS)
Xuan, Liu; Tian-Qi, Liu; Xing-Yuan, Li; Hao, Wang
2010-01-01
In this paper, a new evolving model with tunable attractiveness is presented. Based on the Barabási–Albert (BA) model, we introduce a node attractiveness that can change with node degree. Using mean-field theory, we obtain the analytical expression of the power-law degree distribution with exponent γ in (3, ∞). The new model is more homogeneous and has a lower clustering coefficient and a larger average path length than the BA model. (general)
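A minimal simulation of this class of growth model, assuming for simplicity a constant additive attractiveness in the attachment kernel (the paper lets attractiveness vary with degree), might look like:

```python
import random

def evolve_network(n_nodes, m=2, attract=1.0, seed=0):
    """Grow a network by preferential attachment: each new node links to m
    existing nodes chosen with probability proportional to (degree + attract).
    'attract' is the tunable attractiveness; a constant here, illustrative only."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}      # start from a single edge
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        nodes = list(degree)
        weights = [degree[v] + attract for v in nodes]
        targets = set()
        while len(targets) < min(m, len(nodes)):
            targets.add(rng.choices(nodes, weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
        degree[new] = len(targets)
    return degree, edges

deg, edges = evolve_network(500)
# hubs emerge: a few early nodes accumulate a large share of the links
```

Varying `attract` shifts the degree-distribution exponent, which is the kind of tunability the mean-field analysis in the paper quantifies.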
Large scale stochastic spatio-temporal modelling with PCRaster
Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.
2013-01-01
PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model
The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain
International Nuclear Information System (INIS)
Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y
2002-01-01
The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of the thermal hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated against a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes, and (2) a fully coupled TH model of the repository that includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six 'submodels' which are combined in a manner that reduces the complexity of modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach: first, the execution of submodels, and second, the assembly of submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages including a mountain-scale influence
A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes
Tao, W. K.
2017-12-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 × 1,000 km² in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (a NASA unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer and land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. The use of the multi-satellite simulator to improve simulated precipitation processes will also be discussed.
The mechanical properties modeling of nano-scale materials by molecular dynamics
Yuan, C.; Driel, W.D. van; Poelma, R.; Zhang, G.Q.
2012-01-01
We propose a molecular modeling strategy which is capable of modeling the mechanical properties of nano-scale low-dielectric (low-k) materials. The modeling strategy has also been validated against the buckling force of a carbon nanotube (CNT). This modeling framework consists of a model generation method,
DEFF Research Database (Denmark)
Bende-Michl, Ulrike; Volk, Martin; Harmel, Daren
2011-01-01
This short communication paper presents recommendations for developing scale-appropriate monitoring and modelling strategies to assist decision making in natural resource management (NRM). The ideas presented here were discussed in the session (S5) ‘Monitoring strategies and scale… and communication between researchers and model developers on the one side, and natural resource managers and model users on the other side, to increase knowledge in: 1) the limitations and uncertainties of current monitoring and modelling strategies, 2) scale-dependent linkages between monitoring and modelling...
Scaling functions for the O(4) model in d=3 dimensions
International Nuclear Information System (INIS)
Braun, Jens; Klein, Bertram
2008-01-01
A nonperturbative renormalization group approach is used to calculate scaling functions for an O(4) model in d=3 dimensions in the presence of an external symmetry-breaking field. These scaling functions are important for the analysis of critical behavior in the O(4) universality class. For example, the finite-temperature phase transition in QCD with two flavors is expected to fall into this class. Critical exponents are calculated in local-potential approximation. Parametrizations of the scaling functions for the order parameter and for the longitudinal susceptibility are given. Relations from universal scaling arguments between these scaling functions are investigated and confirmed. The expected asymptotic behavior of the scaling functions predicted by Griffiths is observed. Corrections to the scaling behavior at large values of the external field are studied qualitatively. These scaling corrections can become large, which might have implications for the scaling analysis of lattice QCD results.
International Nuclear Information System (INIS)
Ijiri, Yuji; Sawada, Atsushi; Uchida, Masahiro; Ishiguro, Katsuhiko; Umeki, Hiroyuki; Sakamoto, Kazuhiko; Ohnishi, Yuzo
2001-01-01
It is important to take into account scale effects on fracture geometry if the modeling scale is much larger than the in-situ observation scale. The scale effect on fracture trace length, which is the most scale-dependent parameter, is investigated using fracture maps obtained at various scales at tunnel and dam sites. We found that the distribution of fracture trace length follows a negative power-law distribution regardless of location and rock type. The hydraulic characteristics of the fractured rock are also investigated by numerical analysis of a discrete fracture network (DFN) model in which a power-law distribution of fracture radius is adopted. We found that as the exponent of the power-law distribution becomes larger, the hydraulic conductivity of the DFN model increases and the travel time in the DFN model decreases. (author)
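The power-law fracture-size sampling such DFN models rely on can be sketched by inverse-CDF sampling from a truncated power law; the exponent and truncation bounds below are illustrative, not the paper's fitted values:

```python
import random

def sample_power_law(n, exponent, r_min, r_max, seed=0):
    """Inverse-CDF sampling of fracture radii from a truncated power-law
    density p(r) proportional to r**(-exponent) on [r_min, r_max],
    for exponent > 1. Returns a list of n radii."""
    rng = random.Random(seed)
    a = 1.0 - exponent                 # exponent of the integrated density
    lo, hi = r_min ** a, r_max ** a
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / a) for _ in range(n)]

radii = sample_power_law(10000, exponent=2.5, r_min=1.0, r_max=100.0)
# most radii are near r_min; rare large fractures dominate connectivity
```

A larger exponent concentrates the distribution at small radii, which is the lever behind the reported dependence of hydraulic conductivity and travel time on the power-law exponent.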
The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation
DEFF Research Database (Denmark)
Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.
2004-01-01
The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source (heat and smoke source) may improve the possibility of obtaining Reynolds-number-independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement...
A multi-scale energy demand model suggests sharing market risks with intelligent energy cooperatives
G. Methenitis (Georgios); M. Kaisers (Michael); J.A. La Poutré (Han)
2015-01-01
In this paper, we propose a multi-scale model of energy demand that is consistent with observations at a macro scale, in our use-case standard load profiles for (residential) electric loads. We employ the model to study incentives to assume the risk of volatile market prices for
Modelling cloud effects on ozone on a regional scale : A case study
Matthijsen, J.; Builtjes, P.J.H.; Meijer, E.W.; Boersen, G.
1997-01-01
We have investigated the influence of clouds on ozone on a regional scale (Europe) with a regional scale photochemical dispersion model (LOTOS). The LOTOS-model calculates ozone and other photo-oxidant concentrations in the lowest three km of the troposphere, using actual meteorologic data and
Modeling and Validation across Scales: Parametrizing the effect of the forested landscape
DEFF Research Database (Denmark)
Dellwik, Ebba; Badger, Merete; Angelou, Nikolas
be transferred into a parametrization of forests in wind models. The presentation covers three scales: the single tree, the forest edges and clearings, and the large-scale forested landscape in which the forest effects are parameterized with a roughness length. Flow modeling results and validation against...
International Nuclear Information System (INIS)
Jackson, V.L.
2011-01-01
The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time-step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
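The space-then-time head interpolation described above can be sketched as follows; the function name, array shapes and values are illustrative, and the adaptive local time-stepping is omitted:

```python
import numpy as np

def parent_to_child_boundary(parent_times, parent_heads, parent_x,
                             child_times, child_boundary_x):
    """Deliver parent-model heads onto child boundary nodes by linear
    interpolation, first in space at each parent time level, then in time
    onto the (finer) child time steps. A one-way coupling sketch."""
    spatial = np.array([np.interp(child_boundary_x, parent_x, h)
                        for h in parent_heads])            # (n_t_parent, n_child)
    return np.array([np.interp(child_times, parent_times, spatial[:, j])
                     for j in range(spatial.shape[1])]).T  # (n_t_child, n_child)

# parent solution: head falls linearly in space and rises linearly in time
parent_x = np.array([0.0, 100.0, 200.0])
parent_times = np.array([0.0, 10.0])
parent_heads = np.array([[10.0, 8.0, 6.0],
                         [12.0, 10.0, 8.0]])
bc = parent_to_child_boundary(parent_times, parent_heads, parent_x,
                              child_times=np.array([0.0, 5.0, 10.0]),
                              child_boundary_x=np.array([50.0, 150.0]))
# bc[1] holds the interpolated heads at t = 5 on the two child boundary nodes
```

The child model would then impose `bc` as a transient specified-head boundary condition; the sub-model errors studied in the paper arise precisely from the resolution of these interpolations.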
The three-point function as a probe of models for large-scale structure
International Nuclear Information System (INIS)
Frieman, J.A.; Gaztanaga, E.
1993-01-01
The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales
Modeling on the grand scale: LANDFIRE lessons learned
Kori Blankenship; Jim Smith; Randy Swaty; Ayn J. Shlisky; Jeannie Patton; Sarah. Hagen
2012-01-01
Between 2004 and 2009, the LANDFIRE project facilitated the creation of approximately 1,200 unique state-and-transition models (STMs) for all major ecosystems in the United States. The primary goal of the modeling effort was to create a consistent and comprehensive set of STMs describing reference conditions and to inform the mapping of a subset of LANDFIRE's spatial...
Integrated flow and temperature modeling at the catchment scale
DEFF Research Database (Denmark)
Loinaz, Maria Christina; Davidsen, Hasse Kampp; Butts, Michael
2013-01-01
–groundwater dynamics affect stream temperature. A coupled surface water–groundwater and temperature model has therefore been developed to quantify the impacts of land management and water use on stream flow and temperatures. The model is applied to the simulation of stream temperature levels in a spring-fed stream...
Large scale experiments as a tool for numerical model development
DEFF Research Database (Denmark)
Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper
2003-01-01
Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied...
A model for chlorophyll fluorescence and photosynthesis at leaf scale
Tol, van der C.; Verhoef, W.; Rosema, A.
2009-01-01
This paper presents a leaf biochemical model for steady-state chlorophyll fluorescence and photosynthesis of C3 and C4 vegetation. The model is a tool to study the relationship between passively measured steady-state chlorophyll fluorescence and actual photosynthesis, and its evolution during the
Misspecified poisson regression models for large-scale registry data
DEFF Research Database (Denmark)
Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.
2016-01-01
working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...
A transport model for Alcator scaling in tokamaks
International Nuclear Information System (INIS)
Ohkawa, T.
1978-01-01
A theoretical model is proposed to explain the tokamak energy confinement time. With no adjustable numerical coefficients, the model predicts experimentally observed values to within a level of uncertainty consistent with the intrinsic spread of the experimental data and the necessity of calculating the confinement time without precise knowledge of the temperature profile. (Auth.)
Probabilistic models of population evolution scaling limits, genealogies and interactions
Pardoux, Étienne
2016-01-01
This expository book presents the mathematical description of evolutionary models of populations subject to interactions (e.g. competition) within the population. The author includes both models of finite populations and limiting models as the size of the population tends to infinity. The size of the population is described as a random function of time and of the initial population (the ancestors at time 0). The genealogical tree of such a population is given. Most models imply that the population is bound to go extinct in finite time. It is explained when the interaction is strong enough that the extinction time remains finite even as the ancestral population at time 0 goes to infinity. The material could be used for teaching stochastic processes, together with their applications. Étienne Pardoux is Professor at Aix-Marseille University, working in the field of stochastic analysis, stochastic partial differential equations, and probabilistic models in evolutionary biology and population genetics. He obtai...
Multi-scale modeling of urban air pollution: development of a Street-in-Grid model
Kim, Youngseob; Wu, You; Seigneur, Christian; Roustan, Yelva
2016-04-01
A new multi-scale model of urban air pollution is presented. This model combines a chemical-transport model (CTM) that includes a comprehensive treatment of atmospheric chemistry and transport at spatial scales greater than 1 km and a street-network model that describes the atmospheric concentrations of pollutants in an urban street network. The street-network model is based on the general formulation of the SIRANE model and consists of two main components: a street-canyon component and a street-intersection component. The street-canyon component calculates the mass transfer velocity at the top of the street canyon (roof top) and the mean wind velocity within the street canyon. The estimation of the mass transfer velocity depends on the intensity of the standard deviation of the vertical velocity at roof top. The effect of various formulations of this mass transfer velocity on the pollutant transport at roof-top level is examined. The street-intersection component calculates the mass transfer from a given street to other streets across the intersection. These mass transfer rates among the streets are calculated using the mean wind velocity calculated for each street and are balanced so that the total incoming flow rate is equal to the total outgoing flow rate from the intersection including the flow between the intersection and the overlying atmosphere at roof top. In the default option, the Leighton photostationary cycle among ozone (O3) and nitrogen oxides (NO and NO2) is used to represent the chemical reactions within the street network. However, the influence of volatile organic compounds (VOC) on the pollutant concentrations increases when the nitrogen oxides (NOx) concentrations are low. To account for the possible VOC influence on street-canyon chemistry, the CB05 chemical kinetic mechanism, which includes 35 VOC model species, is implemented in this street-network model. A sensitivity study is conducted to assess the uncertainties associated with the use of
Multi-scale modeling of spin transport in organic semiconductors
Hemmatiyan, Shayan; Souza, Amaury; Kordt, Pascal; McNellis, Erik; Andrienko, Denis; Sinova, Jairo
In this work, we present our theoretical framework to simulate simultaneously spin and charge transport in amorphous organic semiconductors. By combining several techniques, e.g. molecular dynamics, density functional theory and kinetic Monte Carlo, we are able to study spin transport in the presence of anisotropy, thermal effects, and magnetic and electric field effects in realistic morphologies of amorphous organic systems. We apply our multi-scale approach to investigate spin transport in amorphous Alq3 (Tris(8-hydroxyquinolinato)aluminum) and address the underlying spin relaxation mechanism in this system as a function of temperature, bias voltage, magnetic field and sample thickness.
Coarse-graining to the meso and continuum scales with molecular-dynamics-like models
Plimpton, Steve
Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.
Modeling Physical Processes at Galactic Scales and Above
Energy Technology Data Exchange (ETDEWEB)
Gnedin, Nickolay Y. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
2014-12-16
What should these lectures be? The subject is so broad that many books can be written about it. I decided to prepare these lectures as if I were teaching my own graduate student. Given my research interests, I selected what the student would need to know to be able to discuss science with me and to work on joint research projects. So, the story presented below is both personal and incomplete, but it does cover several subjects that are poorly represented in the existing textbooks (if at all). Some of the topics I focus on below are closely connected, others are disjoint, and some are just side detours on specific technical questions. There is an overarching theme, however. Our goal is to follow the cosmic gas from large scales, low densities, (relatively) simple physics to progressively smaller scales, higher densities, closer relation to galaxies, and more complex and uncertain physics. We follow a "yellow brick road" from the gas well beyond any galaxy confines to the actual sites of star formation and stellar feedback. On the way we will stop at some places for a tour and run without looking back through some others. So, the road will be uneven. The organization of the material is as follows: physics of the intergalactic medium, from intergalactic medium to circumgalactic medium, interstellar medium: gas in galaxies, star formation, and stellar feedback.
Large Scale Community Detection Using a Small World Model
Directory of Open Access Journals (Sweden)
Ranjan Kumar Behera
2017-11-01
In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real-world social network follows the small-world phenomenon, which indicates that any two social entities can be reachable in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, but when large-scale social networks are considered, these algorithms take considerably longer to run. In this work, with the objective of improving efficiency, a parallel programming framework, Map-Reduce, has been used to uncover the hidden communities in social networks. The proposed approach has been compared with standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
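The random-walk premise described above (short walks started inside a community tend to stay there) can be sketched with the Python standard library alone. This is an illustrative toy, not the paper's Map-Reduce algorithm: walk visit frequencies are computed exactly by dynamic programming, nodes whose mutual visit frequency exceeds a threshold are linked, and communities are the connected components of those strong ties. All names and the threshold value are our own assumptions.

```python
from collections import defaultdict

def visit_freq(adj, start, length=4):
    """Expected fraction of steps an unbiased random walk of `length`
    steps starting at `start` spends at each node (exact, via DP)."""
    dist = {start: 1.0}
    visits = defaultdict(float)
    for _ in range(length):
        nxt = defaultdict(float)
        for node, p in dist.items():
            for nb in adj[node]:
                nxt[nb] += p / len(adj[node])
        dist = nxt
        for node, p in dist.items():
            visits[node] += p / length
    return visits

def communities(adj, threshold=0.15):
    """Link nodes whose mutual visit frequency exceeds `threshold`,
    then return connected components of that strong-tie graph."""
    freq = {u: visit_freq(adj, u) for u in adj}
    strong = defaultdict(set)
    for u in adj:
        for v in adj:
            if u != v and freq[u][v] > threshold and freq[v][u] > threshold:
                strong[u].add(v)
    seen, comps = set(), []
    for u in adj:
        if u not in seen:
            comp, stack = set(), [u]
            while stack:
                n = stack.pop()
                if n not in comp:
                    comp.add(n)
                    stack.extend(strong[n])
            seen |= comp
            comps.append(comp)
    return comps

# Two 4-cliques joined by a single bridge edge (3-4).
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
```

On this test graph the bridge tie (3, 4) falls below the threshold while all within-clique ties stay above it, so the two cliques are recovered as separate communities.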
Large-Scale Topic Detection and Language Model Adaptation
National Research Council Canada - National Science Library
Seymore, Kristie
1997-01-01
.... We have developed a language model adaptation scheme that takes a piece of text, chooses the most similar topic clusters from a set of over 5000 elemental topics, and uses topic-specific language...
A Large Scale, High Resolution Agent-Based Insurgency Model
2013-09-30
Compute Unified Device Architecture (CUDA) is NVIDIA Corporation's software development model for General-Purpose Programming on Graphics Processing Units (GPGPU).
Large scale Bayesian nuclear data evaluation with consistent model defects
International Nuclear Information System (INIS)
Schnabel, G
2015-01-01
The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator driven systems for nuclear waste incineration. The efficiency and safety of such future facilities is dependent on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods, which is the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, the neglect of this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows the magnitude of model deficiency to be estimated explicitly. Both features are missing in available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the
Numerical modeling of aluminium foam on two scales
Czech Academy of Sciences Publication Activity Database
Němeček, J.; Denk, F.; Zlámal, Petr
Roč. 267, September (2015), s. 506-516 ISSN 0096-3003 R&D Projects: GA ČR(CZ) GAP105/12/0824 Institutional support: RVO:68378297 Keywords : closed-cell aluminium foam * Alporas * multiscale modeling * homogenization * FFT * finite element modeling Subject RIV: JI - Composite Materials Impact factor: 1.345, year: 2015 http://www.sciencedirect.com/science/article/pii/S0096300315001162
Modeling the multi-scale mechanisms of macromolecular resource allocation
DEFF Research Database (Denmark)
Yang, Laurence; Yurkovich, James T; King, Zachary A
2018-01-01
As microbes face changing environments, they dynamically allocate macromolecular resources to produce a particular phenotypic state. Broad 'omics' data sets have revealed several interesting phenomena regarding how the proteome is allocated under differing conditions, but the functional consequen...... and detail how mathematical models have aided in our understanding of these processes. Ultimately, such modeling efforts have helped elucidate the principles of proteome allocation and hold promise for further discovery....
Density-temperature scaling of the fragility in a model glass-former
DEFF Research Database (Denmark)
Schrøder, Thomas; Sengupta, Shiladitya; Sastry, Srikanth
2013-01-01
Dynamical quantities, e.g. diffusivity and relaxation time, for some glass-formers may depend on density and temperature through a specific combination, rather than independently, allowing the representation of data over ranges of density and temperature as a function of a single scaling variable. Such a scaling, referred to as density-temperature (DT) scaling, is exact for liquids with inverse power law (IPL) interactions but has also been found to be approximately valid in many non-IPL liquids. We have analyzed the consequences of DT scaling on the density dependence of the fragility in a model glass-former. We find the density dependence of kinetic fragility to be weak, and show that it can be understood in terms of DT scaling and deviations of DT scaling at low densities. We also show that the Adam-Gibbs relation exhibits DT scaling and the scaling exponent computed from the density dependence...
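The DT scaling idea summarized above can be made concrete with a toy calculation: if a dynamical quantity depends on density ρ and temperature T only through Γ = ρ^γ / T, then state points with equal Γ must have identical dynamics. The functional form and the exponent γ below are illustrative assumptions, not values from the paper.

```python
import math

GAMMA = 2.0  # assumed scaling exponent; for an IPL potential r**(-n), gamma = n/3

def relax_time(rho, T):
    """Synthetic relaxation time that depends on (rho, T) only through
    the single scaling variable Gamma = rho**GAMMA / T."""
    gamma_var = rho ** GAMMA / T
    return math.exp(4.0 * gamma_var)  # Arrhenius-like growth in Gamma (illustrative)

# Three state points at different densities and temperatures, chosen so
# that Gamma is identical; DT scaling predicts identical dynamics.
state_points = [(1.0, 0.50), (1.2, 0.72), (1.5, 1.125)]
gammas = [rho ** GAMMA / T for rho, T in state_points]
taus = [relax_time(rho, T) for rho, T in state_points]
```

For a true IPL pair potential r^(-n) the exponent is exactly γ = n/3; for real glass-formers γ is fitted empirically and the collapse is only approximate, which is where the deviations at low density discussed above enter.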
Linking Fine-Scale Observations and Model Output with Imagery at Multiple Scales
Sadler, J.; Walthall, C. L.
2014-12-01
The development and implementation of a system for seasonal worldwide agricultural yield estimates is underway with the international Group on Earth Observations GeoGLAM project. GeoGLAM includes a research component to continually improve and validate its algorithms. There is a history of field measurement campaigns going back decades to draw upon for ways of linking surface measurements and model results with satellite observations. Ground-based, in-situ measurements collected by interdisciplinary teams include yields, model inputs, and factors affecting scene radiation. Data that are comparable across space and time, with careful attention to calibration, are essential for the development and validation of agricultural applications of remote sensing. Data management to ensure stewardship, availability, and accessibility of the data is best accomplished when considered an integral part of the research. Field measurement campaigns can be expensive and logistically challenging, and because of short funding cycles for research, access to consistent, stable study sites can be lost. Using dedicated staff to collect the baseline data needed by multiple investigators, and conducting measurement campaigns within existing measurement networks such as the USDA Long Term Agroecosystem Research network, can fulfill these needs and ensure long-term access to study sites.
Application of Hierarchy Theory to Cross-Scale Hydrologic Modeling of Nutrient Loads
We describe a model called Regional Hydrologic Modeling for Environmental Evaluation (RHyME2) for quantifying annual nutrient loads in stream networks and watersheds. RHyME2 is a cross-scale statistical and process-based water-quality model. The model ...
An empirical velocity scale relation for modelling a design of large mesh pelagic trawl
Ferro, R.S.T.; Marlen, van B.; Hansen, K.E.
1996-01-01
Physical models of fishing nets are used in fishing technology research at scales of 1:40 or smaller. As with all modelling involving fluid flow, a set of rules is required to determine the geometry of the model and its velocity relative to the water. Appropriate rules ensure that the model is
Zheng, Y.; Wu, B.; Wu, X.
2015-12-01
Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; and the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility to formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real
Validity of thermally-driven small-scale ventilated filling box models
Partridge, Jamie L.; Linden, P. F.
2013-11-01
The majority of previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing natural ventilation of a building in a small-scale model is examined here, with comparison between previous theoretical and new, heat-based, experiments.
Advanced modeling to accelerate the scale up of carbon capture technologies
Energy Technology Data Exchange (ETDEWEB)
Miller, David C.; Sun, Xin; Storlie, Curtis B.; Bhattacharyya, Debangsu
2015-06-01
In order to help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models to reduce the time required to develop and scale-up new carbon capture technologies. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.
Genome Scale Modeling in Systems Biology: Algorithms and Resources
Najafi, Ali; Bidkhori, Gholamreza; Bozorgmehr, Joseph H.; Koch, Ina; Masoudi-Nejad, Ali
2014-01-01
In recent years, in silico studies and trial simulations have complemented experimental procedures. A model is a description of a system, and a system is any collection of interrelated objects; an object, moreover, is some elemental unit upon which observations can be made but whose internal structure either does not exist or is ignored. Therefore, any network analysis approach is critical for successful quantitative modeling of biological systems. This review highlights some of the most popular and important modeling algorithms, tools, and emerging standards for representing, simulating and analyzing cellular networks, in five sections. We also illustrate these concepts through simple examples and appropriate images and graphs. Overall, systems biology aims for a holistic description and understanding of biological processes by an integration of analytical experimental approaches along with synthetic computational models. In fact, biological networks have been developed as a platform for integrating information from high- to low-throughput experiments for the analysis of biological systems. We provide an overview of all processes used in modeling and simulating biological networks in such a way that they can become easily understandable for researchers with both biological and mathematical backgrounds. Consequently, given the complexity of generated experimental data and cellular networks, it is no surprise that researchers have turned to computer simulation and the development of more theory-based approaches to augment and assist in the development of a fully quantitative understanding of cellular dynamics. PMID:24822031
Evaluation of scalar mixing and time scale models in PDF simulations of a turbulent premixed flame
Energy Technology Data Exchange (ETDEWEB)
Stoellinger, Michael; Heinz, Stefan [Department of Mathematics, University of Wyoming, Laramie, WY (United States)
2010-09-15
Numerical simulation results obtained with a transported scalar probability density function (PDF) method are presented for a piloted turbulent premixed flame. The accuracy of the PDF method depends on the scalar mixing model and the scalar time scale model. Three widely used scalar mixing models are evaluated: the interaction by exchange with the mean (IEM) model, the modified Curl's coalescence/dispersion (CD) model and the Euclidean minimum spanning tree (EMST) model. The three scalar mixing models are combined with a simple model for the scalar time scale which assumes a constant value C_φ = 12. A comparison of the simulation results with available measurements shows that only the EMST model calculates accurately the mean and variance of the reaction progress variable. An evaluation of the structure of the PDFs of the reaction progress variable predicted by the three scalar mixing models confirms this conclusion: the IEM and CD models predict an unrealistic shape of the PDF. Simulations using various C_φ values ranging from 2 to 50 combined with the three scalar mixing models have been performed. The observed deficiencies of the IEM and CD models persisted for all C_φ values considered. The value C_φ = 12 combined with the EMST model was found to be an optimal choice. To avoid the ad hoc choice for C_φ, more sophisticated models for the scalar time scale have been used in simulations using the EMST model. A new model for the scalar time scale, based on a linear blending between a model for flamelet combustion and a model for distributed combustion, is developed. The new model has proven to be very promising as a scalar time scale model which can be applied from flamelet to distributed combustion. (author)
Drift Scale Modeling: Study of Unsaturated Flow into a Drift Using a Stochastic Continuum Model
International Nuclear Information System (INIS)
Birkholzer, J.T.; Tsang, C.F.; Tsang, Y.W.; Wang, J.S.
1996-01-01
Unsaturated flow in heterogeneous fractured porous rock was simulated using a stochastic continuum model (SCM). In this model, both the more conductive fractures and the less permeable matrix are generated within the framework of a single-continuum stochastic approach, based on non-parametric indicator statistics. High-permeability fracture zones are distinguished from low-permeability matrix zones in that they are assigned a long-range correlation structure in prescribed directions. The SCM was applied to study small-scale flow in the vicinity of an access tunnel, which is currently being drilled in the unsaturated fractured tuff formations at Yucca Mountain, Nevada. Extensive underground testing is underway in this tunnel to investigate the suitability of Yucca Mountain as an underground nuclear waste repository. Different flow scenarios were studied in the present paper, considering the flow conditions before and after the tunnel emplacement, and assuming steady-state net infiltration as well as episodic pulse infiltration. Although the capability of the stochastic continuum model has not yet been fully explored, it has been demonstrated that the SCM is a viable alternative model capable of describing heterogeneous flow processes in unsaturated fractured tuff at Yucca Mountain
Modelling particle - particle interaction at the micro scale
Swedlow, J. L.
1983-03-01
In high-strength alloys, microstructure can influence toughness in a manner not yet fully quantified. Computational mechanics offers a tool whereby the events leading to fracture may be simulated, but the success of such an enterprise depends heavily upon the quality of the model employed. This report outlines a sequence of events thought to precede ductile fracture and presents a finite element model designed to capture the main events. The model is considered to be an improvement over an earlier one, and data are presented to support this conclusion. Work of this type requires a fine degree of resolution which normally will entail very large, detailed finite element maps. Such map sizes could easily exceed the capacity of research computers, and a substructuring technique is essential to pursue research of this sort. Such a technique has been developed for use without modification to an existing code, i.e., it may be implemented on a standard finite element program directly.
Scale model study of the seismic response of a nuclear reactor core
International Nuclear Information System (INIS)
Dove, R.C.; Dunwoody, W.E.; Rhorer, R.L.
1983-01-01
The use of scale models to study the dynamics of a system of graphite core blocks used in certain nuclear reactor designs is described. Scaling laws, material selection, model instrumentation to measure collision forces, and the response of several models to simulated seismic excitation are covered. The effects of Coulomb friction between the blocks and of the clearance gaps between the blocks on the system response to seismic excitation are emphasized
Laser anemometry measurements of natural circulation flow in a scale model PWR system
International Nuclear Information System (INIS)
Kadambi, J.R.; Schneider, S.J.
1990-01-01
This paper reports on experimental studies conducted to investigate the natural circulation of a single-phase fluid in a scale model pressurized water reactor system during a postulated degraded core accident. A half-section of a 1/7 scale model with a plexiglass adiabatic window was used. Water and sulfur hexafluoride (SF6) were used as the working fluids. Laser-Doppler anemometry (LDA) was used in making the velocity measurements along the center plane of the model at five elevations
Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks
International Nuclear Information System (INIS)
Lin Min; Wang Gang; Chen Tianlun
2006-01-01
A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlation. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.
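A minimal sandpile-style sketch of the integrate-and-fire mechanism described above. For simplicity the network here is a ring rather than scale-free, and all parameter values are illustrative assumptions; because each firing removes at least the threshold (1.0) of potential and redistributes only 2 × 0.4 = 0.8 to its neighbors, every avalanche is guaranteed to terminate.

```python
import random
from collections import defaultdict

def avalanche(state, adj, start, threshold=1.0, eps=0.4):
    """Topple `start` and propagate: a firing node resets to zero and
    sends `eps` to each neighbor, possibly triggering further firings.
    Returns the avalanche size (total number of firings)."""
    size, queue = 0, [start]
    while queue:
        node = queue.pop()
        if state[node] < threshold:
            continue  # already relaxed by an earlier pop
        state[node] = 0.0
        size += 1
        for nb in adj[node]:
            state[nb] += eps
            if state[nb] >= threshold:
                queue.append(nb)
    return size

def drive(adj, n_steps=5000, seed=1):
    """Slow drive / fast relaxation: add small random inputs and record
    the avalanche size each time a node reaches threshold."""
    rng = random.Random(seed)
    state = defaultdict(float)
    nodes = list(adj)
    sizes = []
    for _ in range(n_steps):
        node = rng.choice(nodes)
        state[node] += 0.1
        if state[node] >= 1.0:
            sizes.append(avalanche(state, adj, node))
    return sizes

# A 20-node ring stands in for the scale-free network of the paper.
ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
sizes = drive(ring)
```

Replacing the ring with a scale-free graph (e.g. one grown by preferential attachment) and histogramming `sizes` is the natural next step to look for the power-law avalanche statistics reported in the abstract.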
Doubly stochastic Poisson process models for precipitation at fine time-scales
Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao
2012-09-01
This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
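A minimal sketch of the doubly stochastic idea: the rainfall intensity is itself a random process, here a two-state Markov chain switching between a low 'dry' rate and a high 'wet' rate, and, conditional on that hidden intensity path, arrivals are Poisson. Bernoulli thinning on a fine time grid approximates both processes (valid when rate × dt is small); all rate values are illustrative assumptions, not fitted parameters.

```python
import random

def cox_two_state(t_end=100.0, dt=0.01, lam=(0.2, 2.0), q=0.5, seed=7):
    """Doubly stochastic (Cox) process: a hidden two-state Markov chain
    ('dry'/'wet') sets the intensity; conditional on that path, events
    arrive as a Poisson process.  Bernoulli thinning approximates both
    processes on a grid of width dt (valid for q*dt, lam*dt << 1)."""
    rng = random.Random(seed)
    state, t, events = 0, 0.0, []
    while t < t_end:
        if rng.random() < q * dt:           # intensity state switches
            state = 1 - state
        if rng.random() < lam[state] * dt:  # thinned Poisson arrival
            events.append(t)
        t += dt
    return events

events = cox_two_state()
```

The clustering of events during wet intervals is what lets such models reproduce the burstiness of fine time-scale rainfall that a homogeneous Poisson process cannot.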
Directory of Open Access Journals (Sweden)
Chaogui Kang
We generalized the recently introduced "radiation model", as an analog to the generalization of the classic "gravity model", to consolidate its universality for modeling diverse mobility systems. By imposing an appropriate scaling exponent λ, normalization factor κ, and system constraints, including the searching direction and the trip OD constraint, the generalized radiation model accurately captures real human movements in various scenarios and at various spatial scales, including two different countries and four different cities. Our analytical results also indicate that the generalized radiation model outperforms alternative mobility models in various empirical analyses.
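For reference, the classic (un-generalized) radiation model that the paper builds on predicts the flux from origin i to destination j as T_ij = T_i · m_i n_j / ((m_i + s_ij)(m_i + n_j + s_ij)), where m_i and n_j are the origin and destination populations and s_ij is the total population inside the circle of radius r_ij centred on i, excluding both endpoints. The sketch below implements only this parameter-free baseline, not the paper's λ/κ generalization:

```python
import math

def radiation_flux(locs, i, j, outflow=1.0):
    """Classic radiation-model flux from location i to location j.
    locs: list of (x, y, population) tuples.  s_ij is the population
    within the circle of radius r_ij centred on i, excluding i and j."""
    xi, yi, m = locs[i]
    xj, yj, n = locs[j]
    r_ij = math.hypot(xj - xi, yj - yi)
    s = sum(pop for k, (x, y, pop) in enumerate(locs)
            if k not in (i, j) and math.hypot(x - xi, y - yi) <= r_ij)
    return outflow * m * n / ((m + s) * (m + n + s))

# Three towns on a line; the large middle town screens the far one.
locs = [(0.0, 0.0, 100.0),   # origin town
        (1.0, 0.0, 1000.0),  # large intervening town
        (2.0, 0.0, 100.0)]   # distant town, same size as the origin
```

In this example the large intervening town absorbs most trips, so the flux from town 0 to the adjacent town 1 far exceeds the flux to the equally sized but more distant town 2; this screening effect is what distinguishes the radiation model from gravity models.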
Simplified scaling model for the THETA-pinch
Ewing, K. J.; Thomson, D. B.
1982-02-01
A simple 1D scaling model for the fast theta pinch was developed and written as a code that would be flexible, inexpensive in computer time, and readily available for use with the Los Alamos explosive-driven high-magnetic-field program. The simplified model uses three successive separate stages: (1) a snowplow-like radial implosion, (2) an idealized resistive annihilation of the reverse bias field, and (3) an adiabatic compression stage of a β = 1 plasma for which ideal pressure balance is assumed to hold. The code uses one adjustable fitting constant whose value was first determined by comparison with results from the Los Alamos Scylla III, Scyllacita, and Scylla IA theta pinches.
Modelling large scale human activity in San Francisco
Gonzalez, Marta
2010-03-01
Diverse groups of people with a wide variety of schedules, activities, and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behavior in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys of travel journeys. The information obtained is based on questionnaires that are usually costly to implement, with intrinsic limitations in covering large numbers of individuals and some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally, we use a complementary data set from smart subway fare cards, giving the exact time each passenger enters or exits a subway station along with the station's coordinates. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns of each visited location in further detail. Integrating the two data sets, we provide a dynamical model of human travel that incorporates different aspects observed empirically.
A Model of Socioemotional Flexibility at Three Time Scales
Hollenstein, T.P.; Lichtwarck-Aschoff, A.; Potworowski, G.
2013-01-01
The construct of flexibility has been a focus for research and theory for over 100 years. However, flexibility has not been consistently or adequately defined, leading to obstacles in the interpretation of past research and progress toward enhanced theory. We present a model of socioemotional
Dynamic modelling of heavy metals - time scales and target loads
Posch, M.; Vries, de W.
2009-01-01
Over the past decade steady-state methods have been developed to assess critical loads of metals avoiding long-term risks in view of food quality and eco-toxicological effects on organisms in soils and surface waters. However, dynamic models are needed to estimate the times involved in attaining a
Symmetry-guided large-scale shell-model theory
Czech Academy of Sciences Publication Activity Database
Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.
2016-01-01
Roč. 89, JUL (2016), s. 101-136 ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016
Complex Automata: Multi-scale Modeling with Coupled Cellular Automata
Hoekstra, A.G.; Caiazzo, A.; Lorenz, E.; Falcone, J.-L.; Chopard, B.; Hoekstra, A.G.; Kroc, J.; Sloot, P.M.A.
2010-01-01
Cellular Automata (CA) are generally acknowledged to be a powerful way to describe and model natural phenomena [1-3]. There are even tempting claims that nature itself is one big (quantum) information processing system, e.g. [4], and that CA may actually be nature’s way to do this processing [5-7].
Development of realistic concrete models including scaling effects
International Nuclear Information System (INIS)
Carpinteri, A.
1989-09-01
Progressive cracking in structural elements of concrete is considered. Two simple models are applied, which, even though different, lead to similar predictions for the fracture behaviour. Both the Virtual Crack Propagation Model and Cohesive Limit Analysis (Section 2) show a trend towards brittle behaviour and catastrophic events for large structural sizes. A numerical Cohesive Crack Model is proposed (Section 3) to describe strain softening and strain localization in concrete. Such a model is able to predict the size effects of fracture mechanics accurately. Whereas for Mode I only untying of the finite element nodes is applied to simulate crack growth, for Mixed Mode a topological variation is required at each step (Section 4). In the case of the four point shear specimen, the load vs. deflection diagrams reveal snap-back instability for large sizes. By increasing the specimen sizes, such instability tends to reproduce the classical LEFM instability. Remarkable size effects are theoretically predicted and experimentally confirmed also for reinforced concrete (Section 5). The brittleness of the flexural members increases by increasing size and/or decreasing steel content. On the basis of these results, the empirical code rules regarding the minimum amount of reinforcement could be considerably revised.
Soil carbon management in large-scale Earth system modelling
DEFF Research Database (Denmark)
Olin, S.; Lindeskog, M.; Pugh, T. A. M.
2015-01-01
, carbon sequestration and nitrogen leaching from croplands are evaluated and discussed. Compared to the version of LPJ-GUESS that does not include land-use dynamics, estimates of soil carbon stocks and nitrogen leaching from terrestrial to aquatic ecosystems were improved. Our model experiments allow us...
Biological reduction of chlorinated solvents: Batch-scale geochemical modeling
Kouznetsova, Irina; Mao, Xiaomin; Robinson, Clare; Barry, D. A.; Gerhard, Jason I.; McCarty, Perry L.
2010-09-01
Simulation of biodegradation of chlorinated solvents in dense non-aqueous phase liquid (DNAPL) source zones requires a model that accounts for the complexity of processes involved and that is consistent with available laboratory studies. This paper describes such a comprehensive modeling framework that includes microbially mediated degradation processes, microbial population growth and decay, geochemical reactions, as well as interphase mass transfer processes such as DNAPL dissolution, gas formation and mineral precipitation/dissolution. All these processes can be in equilibrium or kinetically controlled. A batch modeling example was presented where the degradation of trichloroethene (TCE) and its byproducts and concomitant reactions (e.g., electron donor fermentation, sulfate reduction, pH buffering by calcite dissolution) were simulated. Local and global sensitivity analysis techniques were applied to delineate the dominant model parameters and processes. Sensitivity analysis indicated that accurate values for parameters related to dichloroethene (DCE) and vinyl chloride (VC) degradation (i.e., DCE and VC maximum utilization rates, yield due to DCE utilization, decay rate for DCE/VC dechlorinators) are important for prediction of the overall dechlorination time. These parameters influence the maximum growth rate of the DCE and VC dechlorinating microorganisms and, thus, the time required for a small initial population to reach a sufficient concentration to significantly affect the overall rate of dechlorination. Self-inhibition of chlorinated ethenes at high concentrations and natural buffering provided by the sediment were also shown to significantly influence the dechlorination time. Furthermore, the analysis indicated that the rates of the competing, nonchlorinated electron-accepting processes relative to the dechlorination kinetics also affect the overall dechlorination time. Results demonstrated that the model developed is a flexible research tool that is
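The sequential dechlorination chain at the core of such models (TCE → DCE → VC → ethene, with growth and decay of the DCE/VC-dechlorinating population) can be sketched with Monod-type kinetics. All parameter values below are invented for illustration and are not those of the paper's framework:

```python
def simulate(t_end=200.0, dt=0.01):
    """Toy sequential reductive dechlorination: TCE -> DCE -> VC -> ethene.
    Monod-type rates driven by a single dechlorinating biomass X that grows
    on the DCE and VC steps and decays at rate b. All constants are
    illustrative assumptions, not fitted values."""
    tce, dce, vc, eth, X = 1.0, 0.0, 0.0, 0.0, 0.01
    kmax, Ks, Y, b = 0.5, 0.1, 0.1, 0.01  # assumed kinetic constants
    t = 0.0
    while t < t_end:
        r1 = kmax * X * tce / (Ks + tce)   # TCE -> DCE
        r2 = kmax * X * dce / (Ks + dce)   # DCE -> VC
        r3 = kmax * X * vc / (Ks + vc)     # VC -> ethene
        tce -= r1 * dt
        dce += (r1 - r2) * dt
        vc  += (r2 - r3) * dt
        eth += r3 * dt
        X   += (Y * (r2 + r3) - b * X) * dt  # growth on DCE/VC steps only
        t += dt
    return tce, dce, vc, eth, X
```

Because the species convert one-for-one along the chain, the sum tce + dce + vc + eth is conserved, which gives a quick sanity check on the integration.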
State-of-the-Art Report on Multi-scale Modelling of Nuclear Fuels
International Nuclear Information System (INIS)
Bartel, T.J.; Dingreville, R.; Littlewood, D.; Tikare, V.; Bertolus, M.; Blanc, V.; Bouineau, V.; Carlot, G.; Desgranges, C.; Dorado, B.; Dumas, J.C.; Freyss, M.; Garcia, P.; Gatt, J.M.; Gueneau, C.; Julien, J.; Maillard, S.; Martin, G.; Masson, R.; Michel, B.; Piron, J.P.; Sabathier, C.; Skorek, R.; Toffolon, C.; Valot, C.; Van Brutzel, L.; Besmann, Theodore M.; Chernatynskiy, A.; Clarno, K.; Gorti, S.B.; Radhakrishnan, B.; Devanathan, R.; Dumont, M.; Maugis, P.; El-Azab, A.; Iglesias, F.C.; Lewis, B.J.; Krack, M.; Yun, Y.; Kurata, M.; Kurosaki, K.; Largenton, R.; Lebensohn, R.A.; Malerba, L.; Oh, J.Y.; Phillpot, S.R.; Tulenko, J. S.; Rachid, J.; Stan, M.; Sundman, B.; Tonks, M.R.; Williamson, R.; Van Uffelen, P.; Welland, M.J.; Valot, Carole; Stan, Marius; Massara, Simone; Tarsi, Reka
2015-10-01
The Nuclear Science Committee (NSC) of the Nuclear Energy Agency (NEA) has undertaken an ambitious programme to document state-of-the-art of modelling for nuclear fuels and structural materials. The project is being performed under the Working Party on Multi-Scale Modelling of Fuels and Structural Material for Nuclear Systems (WPMM), which has been established to assess the scientific and engineering aspects of fuels and structural materials, describing multi-scale models and simulations as validated predictive tools for the design of nuclear systems, fuel fabrication and performance. The WPMM's objective is to promote the exchange of information on models and simulations of nuclear materials, theoretical and computational methods, experimental validation and related topics. It also provides member countries with up-to-date information, shared data, models, and expertise. The goal is also to assess needs for improvement and address them by initiating joint efforts. The WPMM reviews and evaluates multi-scale modelling and simulation techniques currently employed in the selection of materials used in nuclear systems. It serves to provide advice to the nuclear community on the developments needed to meet the requirements of modelling for the design of different nuclear systems. The original WPMM mandate had three components (Figure 1), with the first component currently completed, delivering a report on the state-of-the-art of modelling of structural materials. The work on modelling was performed by three expert groups, one each on Multi-Scale Modelling Methods (M3), Multi-Scale Modelling of Fuels (M2F) and Structural Materials Modelling (SMM). WPMM is now composed of three expert groups and two task forces providing contributions on multi-scale methods, modelling of fuels and modelling of structural materials. This structure will be retained, with the addition of task forces as new topics are developed. The mandate of the Expert Group on Multi-Scale Modelling of
Directory of Open Access Journals (Sweden)
Ilse Storch
2002-06-01
Full Text Available This paper explores the effects of spatial resolution on the performance and applicability of habitat models in wildlife management and conservation. A Habitat Suitability Index (HSI) model for the Capercaillie (Tetrao urogallus) in the Bavarian Alps, Germany, is presented. The model was exclusively built on non-spatial, small-scale variables of forest structure and without any consideration of landscape patterns. The main goal was to assess whether a HSI model developed from small-scale habitat preferences can explain differences in population abundance at larger scales. To validate the model, habitat variables and indirect sign of Capercaillie use (such as feathers or feces) were mapped in six study areas based on a total of 2901 sample plots of 20 m radius (for habitat variables) and 5 m radius (for Capercaillie sign). First, the model's representation of Capercaillie habitat preferences was assessed. Habitat selection, as expressed by Ivlev's electivity index, was closely related to HSI scores, increased from poor to excellent habitat suitability, and was consistent across all study areas. Then, habitat use was related to HSI scores at different spatial scales. Capercaillie use was best predicted from HSI scores at the small scale. Lowering the spatial resolution of the model stepwise to 36-ha, 100-ha, 400-ha, and 2000-ha areas and relating Capercaillie use to aggregated HSI scores resulted in a deterioration of fit at larger scales. Most importantly, there were pronounced differences in Capercaillie abundance at the scale of study areas, which could not be explained by the HSI model. The results illustrate that even if a habitat model correctly reflects a species' smaller scale habitat preferences, its potential to predict population abundance at larger scales may remain limited.
Multi-scale modelling of uranyl chloride solutions
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Thanh-Nghi; Duvail, Magali, E-mail: magali.duvail@icsm.fr; Villard, Arnaud; Dufrêche, Jean-François, E-mail: jean-francois.dufreche@univ-montp2.fr [Institut de Chimie Séparative de Marcoule (ICSM), UMR 5257, CEA-CNRS-Université Montpellier 2-ENSCM, Site de Marcoule, Bâtiment 426, BP 17171, F-30207 Bagnols-sur-Cèze Cedex (France); Molina, John Jairo [Fukui Institute for Fundamental Chemistry, Kyoto University, Takano-Nishihiraki-cho 34-4, Sakyo-ku, Kyoto 606-8103 (Japan); Guilbaud, Philippe [CEA/DEN/DRCP/SMCS/LILA, Marcoule, F-30207 Bagnols-sur-Cèze Cedex (France)
2015-01-14
Classical molecular dynamics simulations with explicit polarization have been successfully used to determine the structural and thermodynamic properties of binary aqueous solutions of uranyl chloride (UO{sub 2}Cl{sub 2}). Concentrated aqueous solutions of uranyl chloride have been studied to determine the hydration properties and the ion-ion interactions. The bond distances and the coordination number of the hydrated uranyl are in good agreement with available experimental data. Two stable positions of chloride in the second hydration shell of uranyl have been identified. The UO{sub 2}{sup 2+}-Cl{sup −} association constants have also been calculated using a multi-scale approach. First, the ion-ion potential averaged over the solvent configurations at infinite dilution (McMillan-Mayer potential) was calculated to establish the dissociation/association processes of UO{sub 2}{sup 2+}-Cl{sup −} ion pairs in aqueous solution. Then, the association constant was calculated from this potential. The value we obtained for the association constant is in good agreement with the experimental result (K{sub UO{sub 2}Cl{sup +}} = 1.48 l mol{sup −1}), but the resulting activity coefficient appears to be too low at molar concentration.
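The final multi-scale step described here, obtaining an association constant from a solvent-averaged potential, is conventionally done by Boltzmann-weighting the ion-ion distance out to a cutoff separating associated from dissociated pairs. This is a generic statement of that step, not necessarily the authors' exact expression:

```latex
K \;=\; 4\pi \int_0^{r_c} e^{-\beta W(r)}\, r^{2}\, \mathrm{d}r ,
\qquad \beta = \frac{1}{k_B T},
```

where $W(r)$ is the McMillan-Mayer ion-ion potential at infinite dilution and $r_c$ is the cutoff distance defining the associated pair.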
Scaling and criticality in a stochastic multi-agent model of a financial market
Lux, Thomas; Marchesi, Michele
1999-02-01
Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.
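A minimal sketch of the mechanism, large excursions arising from agent interactions rather than from the news process, is the following toy herding model. It is a drastically simplified stand-in for the Lux-Marchesi model, and every parameter is an assumption:

```python
import random

def simulate_returns(n_agents=100, steps=2000, eps=0.01, herd=0.3, seed=1):
    """Kirman-style toy herding model: agents hold +1/-1 opinions, flip
    idiosyncratically with probability eps ("news"), and otherwise sometimes
    copy a randomly met peer; the 'return' is the opinion imbalance."""
    random.seed(seed)
    state = [random.choice((-1, 1)) for _ in range(n_agents)]
    returns = []
    for _ in range(steps):
        i, j = random.randrange(n_agents), random.randrange(n_agents)
        if random.random() < eps:
            state[i] = -state[i]          # idiosyncratic switch ("news")
        elif random.random() < herd:
            state[i] = state[j]           # herding: copy another agent
        returns.append(sum(state) / n_agents)  # imbalance as toy return
    return returns
```

Even though the "news" here is temporally uncorrelated, the copying step lets the imbalance wander through herding episodes, which is the qualitative point of the abstract.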
Neural assembly models derived through nano-scale measurements.
Energy Technology Data Exchange (ETDEWEB)
Fan, Hongyou; Branda, Catherine; Schiek, Richard Louis; Warrender, Christina E.; Forsythe, James Chris
2009-09-01
This report summarizes accomplishments of a three-year project focused on developing technical capabilities for measuring and modeling neuronal processes at the nanoscale. It was successfully demonstrated that nanoprobes could be engineered that were biocompatible, could be biofunctionalized, and responded within the range of voltages typically associated with a neuronal action potential. Furthermore, the Xyce parallel circuit simulator was employed and models incorporated for simulating the ion channel and cable properties of neuronal membranes. The ultimate objective of the project had been to employ nanoprobes in vivo, with the nematode C. elegans, and derive a simulation based on the resulting data. Techniques were developed allowing the nanoprobes to be injected into the nematode and the neuronal response recorded. To the authors' knowledge, this is the first occasion in which nanoparticles have been successfully employed as probes for recording neuronal response in an in vivo animal experimental protocol.
The Waterfall Model in Large-Scale Development
Petersen, Kai; Wohlin, Claes; Baca, Dejan
Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.
Modelling Protein Dynamics on the Microsecond Time Scale
DEFF Research Database (Denmark)
Siuda, Iwona Anna
Recent years have shown an increase in coarse-grained (CG) molecular dynamics simulations, providing structural and dynamic details of large proteins and enabling studies of self-assembly of biological materials. It is not easy to acquire such data experimentally, and access is also still limited...... in atomistic simulations. During her PhD studies, Iwona Siuda used MARTINI CG models to study the dynamics of different globular and membrane proteins. In several cases, the MARTINI model was sufficient to study conformational changes of small, purely alpha-helical proteins. However, in studies of larger......ELNEDIN was therefore proposed as part of the work. Iwona Siuda’s results from the CG simulations had biological implications that provide insights into possible mechanisms of the periplasmic leucine-binding protein, the sarco(endo)plasmic reticulum calcium pump, and several proteins from the saposin-like proteins...
The waterfall model in large-scale development
Petersen, Kai; Wohlin, Claes; Baca, Dejan
2009-01-01
Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.
Multistability in Large Scale Models of Brain Activity.
Directory of Open Access Journals (Sweden)
Mathieu Golos
2015-12-01
Full Text Available Noise driven exploration of a brain network's dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network's capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain's dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system's attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a larger number of attractors than reported so far. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. Those different modes share similarity with the fMRI independent components observed in the "resting state" condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors.
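The attractor-sampling procedure described, running a deterministic graded-response Hopfield network from many uniformly sampled initial conditions and collecting the distinct end states, can be sketched as follows. A random symmetric coupling matrix stands in for the human connectome, and all sizes and constants are illustrative:

```python
import math
import random

def find_attractors(n=20, samples=20, steps=300, gain=4.0, seed=0):
    """Graded-response Hopfield sketch: symmetric random couplings W,
    synchronous tanh dynamics, uniform sampling of initial conditions.
    Distinct end states are collected by rounding to 2 decimals."""
    rng = random.Random(seed)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            W[i][j] = W[j][i] = rng.gauss(0, 1 / math.sqrt(n))
    attractors = set()
    for _ in range(samples):
        x = [rng.uniform(-1, 1) for _ in range(n)]
        for _ in range(steps):
            x = [math.tanh(gain * sum(W[i][j] * x[j] for j in range(n)))
                 for i in range(n)]
        attractors.add(tuple(round(v, 2) for v in x))
    return attractors
```

The connectome-based study uses empirical coupling matrices and far denser sampling; this sketch only illustrates the "uniform sampling of the initial conditions" step.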
Electromagnetic Drop Scale Scattering Modelling for Dynamic Statistical Rain Fields
Hipp, Susanne
2015-01-01
This work simulates the scattering of electromagnetic waves by a rain field. The calculations are performed for the individual drops and accumulate to a time signal dependent on the dynamic properties of the rain field. The simulations are based on the analytical Mie scattering model for spherical rain drops and the simulation software considers the rain characteristics drop size (including their distribution in rain), motion, and frequency and temperature dependent permittivity. The performe...
Superstring-inspired SO(10) GUT model with intermediate scale
Sasaki, Ken
1987-12-01
A new mechanism is proposed for the mixing of Weinberg-Salam Higgs fields in superstring-inspired SO(10) models with no SO(10) singlet fields. The higher-dimensional terms in the superpotential can generate both Higgs field mixing and a small mass for the physical neutrino. I would like to thank Professor C. Iso for hospitality extended to me at the Tokyo Institute of Technology.
Scale Factor Study for 1:30 Local Scour Model
2016-08-01
establishes the worst-case scour depth for the current bridge configuration and the proposed pier nose extension. INTRODUCTION: Extensive research has been...used in the general physical model. A flat test section, approximately 32 ft long and 34-45 ft wide, was molded to a uniform elevation. Stilling...discharge calculation from the flow uniformity checks. The water surface elevation was controlled with the adjustable lift gate at the downstream
Globular cluster metallicity scale: evidence from stellar models
International Nuclear Information System (INIS)
Demarque, P.; King, C.R.; Diaz, A.
1982-01-01
Theoretical giant branches have been constructed to determine their relative positions for metallicities in the range -2.3 to 0. A calibration of (B-V){sub 0,g} based on these models is presented which yields good agreement over the observed range of metallicities for galactic globular clusters and old disk clusters. The metallicity of 47 Tuc and M71 given by this calibration is about -0.8 dex. Subject headings: clusters, globular: stars: abundances: stars: interiors
Process-scale modeling of elevated wintertime ozone in Wyoming.
Energy Technology Data Exchange (ETDEWEB)
Kotamarthi, V. R.; Holdridge, D. J.; Environmental Science Division
2007-12-31
Measurements of meteorological variables and trace gas concentrations, provided by the Wyoming Department of Environmental Quality for Daniel, Jonah, and Boulder Counties in the state of Wyoming, were analyzed for this project. The data indicate that highest ozone concentrations were observed at temperatures of -10 C to 0 C, at low wind speeds of about 5 mph. The median values for nitrogen oxides (NOx) during these episodes ranged between 10 ppbv and 20 ppbv (parts per billion by volume). Measurements of volatile organic compounds (VOCs) during these periods were insufficient for quantitative analysis. The few available VOCs measurements indicated unusually high levels of alkanes and aromatics and low levels of alkenes. In addition, the column ozone concentration during one of the high-ozone episodes was low, on the order of 250 DU (Dobson unit) as compared to a normal column ozone concentration of approximately 300-325 DU during spring for this region. Analysis of this observation was outside the scope of this project. The data analysis reported here was used to establish criteria for making a large number of sensitivity calculations through use of a box photochemical model. Two different VOCs lumping schemes, RACM and SAPRC-98, were used for the calculations. Calculations based on this data analysis indicated that the ozone mixing ratios are sensitive to (a) surface albedo, (b) column ozone, (c) NOx mixing ratios, and (d) available terminal olefins. The RACM model showed a large response to an increase in lumped species containing propane that was not reproduced by the SAPRC scheme, which models propane as a nearly independent species. The rest of the VOCs produced similar changes in ozone in both schemes. In general, if one assumes that measured VOCs are fairly representative of the conditions at these locations, sufficient precursors might be available to produce ozone in the range of 60-80 ppbv under the conditions modeled.
Large transverse momentum processes in a non-scaling parton model
International Nuclear Information System (INIS)
Stirling, W.J.
1977-01-01
The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)
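Scale breaking of this logarithmic type is conventionally expressed through moments of the structure function. Schematically, with $d_n$ an effective anomalous dimension and $\Lambda$ the model's scale parameter (a generic form of the behaviour described, not the paper's exact result):

```latex
M_n(Q^2) \;=\; \int_0^1 x^{\,n-2}\, F(x, Q^2)\, \mathrm{d}x
\;\sim\; \left( \ln \frac{Q^2}{\Lambda^2} \right)^{-d_n},
```

so that each moment falls off as a power of $\ln Q^2$ rather than remaining constant, as exact Bjorken scaling would require.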
Design, construction, and evaluation of a 1:8 scale model binaural manikin.
Robinson, Philip; Xiang, Ning
2013-03-01
Many experiments in architectural acoustics require presenting listeners with simulations of different rooms to compare. Acoustic scale modeling is a feasible means to create accurate simulations of many rooms at reasonable cost. A critical component in a scale model room simulation is a receiver that properly emulates a human receiver. For this purpose, a scale model artificial head has been constructed and tested. This paper presents the design and construction methods used, proper equalization procedures, and measurements of its response. A headphone listening experiment examining sound externalization with various reflection conditions is presented that demonstrates its use for psycho-acoustic testing.
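The basic arithmetic of acoustic scale modelling is worth noting: with the same medium (hence the same sound speed), a 1:8 model shrinks every path length by 8, so measurement frequencies must be multiplied by 8 and time signals compressed by the same factor. A trivial sketch:

```python
def model_frequency(full_scale_hz, scale=8):
    """Frequency used in a 1:N scale model to represent a full-scale band:
    wavelengths must shrink by N, so frequency is multiplied by N."""
    return full_scale_hz * scale

def model_time(full_scale_s, scale=8):
    """Times (e.g. reverberation times) are compressed by the same factor."""
    return full_scale_s / scale

# A 1 kHz octave band is measured at 8 kHz in a 1:8 model,
# and a 2.0 s full-scale reverberation time appears as 0.25 s.
```

This is why scale-model receivers such as the manikin above must remain accurate well into the ultrasonic range, and why excess air absorption at the scaled-up frequencies must be compensated in practice.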
Kinetic model for torrefaction of wood chips in a pilot-scale continuous reactor
DEFF Research Database (Denmark)
Shang, Lei; Ahrenfeldt, Jesper; Holm, Jens Kai
2014-01-01
accordance with the model data. In an additional step a continuous, pilot scale reactor was built to produce torrefied wood chips in large quantities. The "two-step reaction in series" model was applied to predict the mass yield of the torrefaction reaction. Parameters used for the calculation were...... at different torrefaction temperatures, it was possible to predict the HHV of torrefied wood chips from the pilot reactor. The results from this study and the presented modeling approach can be used to predict the product quality from pilot scale torrefaction reactors based on small scale experiments and could...
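The "two-step reaction in series" scheme named above is commonly written as a solid A degrading to an intermediate solid B plus volatiles, and B to char C plus volatiles (the Di Blasi-Lanzetta form). A minimal sketch, with invented rate constants standing in for the Arrhenius-fitted, temperature-dependent values:

```python
def solid_yield(t_end, k1, kv1, k2, kv2, dt=0.01):
    """Two-step-in-series torrefaction sketch: A -> B + volatiles (k1, kv1),
    B -> C + volatiles (k2, kv2). Forward-Euler integration; returns the
    remaining solid mass fraction A + B + C. Rate constants are assumed,
    not fitted values from the study."""
    A, B, C = 1.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        dA = -(k1 + kv1) * A
        dB = k1 * A - (k2 + kv2) * B
        dC = k2 * B
        A += dA * dt
        B += dB * dt
        C += dC * dt
        t += dt
    return A + B + C
```

The solid mass fraction decreases monotonically as volatiles leave, which is the quantity the pilot-reactor prediction above is built on.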
Improving large-scale groundwater models by considering fossil gradients
Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph
2017-05-01
Due to limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes compared to those which are usually applied in renewable systems. Fossil groundwater is a finite resource and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of them show notable hydraulic gradients and a flow towards their discharge areas, even without pumping. As a result, these systems have more discharge than recharge and hence are not in steady state, which makes their modelling, in particular the calibration, very challenging. In this study, we introduce a new calibration approach, composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluate the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.
Regional scale groundwater modelling study for Ganga River basin
Maheswaran, R.; Khosa, R.; Gosain, A. K.; Lahari, S.; Sinha, S. K.; Chahar, B. R.; Dhanya, C. T.
2016-10-01
Subsurface movement of water within the alluvial formations of Ganga Basin System of North and East India, extending over an area of 1 million km2, was simulated using a Visual MODFLOW based transient numerical model. The study incorporates historical groundwater developments as recorded by various concerned agencies and also accommodates the role of some of the major tributaries of River Ganga as geo-hydrological boundaries. Geo-stratigraphic structures, along with corresponding hydrological parameters, were obtained from the Central Groundwater Board, India, and used in the study, which was carried out over a time horizon of 4.5 years. The model parameters were fine tuned for calibration using Parameter Estimation (PEST) simulations. Analyses of the stream aquifer interaction using Zone Budget has allowed demarcation of the losing and gaining stretches along the main stem of River Ganga as well as some of its principal tributaries. From a management perspective, and entirely consistent with general understanding, it is seen that unabated long term groundwater extraction within the study basin has induced a sharp decrease in critical dry weather base flow contributions. In view of a surge in demand for dry season irrigation water for agriculture in the area, numerical models can be a useful tool to generate not only an understanding of the underlying groundwater system but also facilitate development of basin-wide detailed impact scenarios as inputs for management and policy action.
Cellular Automata for Modeling the field-scale erosion
International Nuclear Information System (INIS)
Diaz Suarez, Jorge; Bagarotti Marin, Angel; Ruiz Perez, Maria Elena
2008-01-01
Full text: The Cellular Automaton (CA) is a discrete dynamic system used for modeling many physical systems. Its fundamental properties are interaction at the local level, homogeneity and parallelism. It has been used as a surrogate for the simulation of large systems where the use of partial differential equations is complex and costly from the computational point of view. On the other hand, the high complexity of spatial interaction in the erosion-transport-deposition processes for sediments at field level considerably limits the use of physics-based models. The objective of this study is to model the main processes involved in water erosion of soils through the use of the CAMELot system, based on an extension of the original CA paradigm. The CAMELot system has been used in the simulation of systems of large spatial extent, where the laws of local interaction between automata have a deep physical sense. This system supports both the input of the necessary specifications and parallel simulation, as well as the visualization and general management of the system. Each of the submodels used is presented and the overall dynamics of the system is analyzed. (author)
Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi
2015-01-01
Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
The Ising model in the scaling limit as model for the description of elementary particles
International Nuclear Information System (INIS)
Weinzierl, W.
1981-01-01
In this thesis a possible approach is explored which starts from the derivation of a quantum field theory from the simplest statistical degrees of freedom, as for instance in a two-level system. The idea is illustrated on a model theory, the Ising model in (1+1) dimensions. In this model theory two particle-interpretable quantum fields arise which can be constructed from a basic field that parametrizes the local dynamics in the simplest way. This so-called proliferation is examined further. For the proliferation of the basic field a conserved quantity, a kind of parity, is necessary. The stability of both particle fields is a consequence of this conservation law. To identify the ''particle-interpretable'' fields, the propagators of the order and disorder parameter fields are calculated and discussed. An effective Hamiltonian in these particle fields is calculated. As a further aspect of this transition from the statistical system to quantum field theory, the dimensional transmutation and the closely related mass renormalization are examined. The relation between spin systems in the critical region and fermionic field theories is explained. It follows that certain fermionic degrees of freedom of the spin system vanish in the scaling limit. The ''macroscopically'' relevant degrees of freedom constitute a relativistic Majorana field. (orig./HSI) [de
International Nuclear Information System (INIS)
Bergamasco, A.; Carniel, S.; Sclavo, M.; Budgell, W.P.
2005-01-01
Conveyor belt circulation controls global climate through heat and water fluxes with the atmosphere, from tropical to polar regions and vice versa. This circulation, commonly referred to as thermohaline circulation (THC), seems to have a millennium time scale and nowadays (a non-glacial period) appears to be rather stable. However, concern is raised by the buildup of CO2 and other greenhouse gases in the atmosphere (IPCC, Third Assessment Report: Climate Change 2001. A contribution of Working Groups I, II and III to the Third Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge Univ. Press, UK) 2001, http://www.ipcc.ch) as these may affect the THC conveyor paths. Since it is widely recognized that dense water formation sites act as primary sources in strengthening quasi-stable THC paths (Stommel H., Tellus, 13 (1961) 224), a better understanding of these oceanic processes is needed in order to simulate properly the consequences of such scenarios. To model these processes successfully, integrated air-sea-ice modelling approaches are often required. Here we focus on two polar regions using the Regional Ocean Modeling System (ROMS). In the first region investigated, the North Atlantic-Arctic, where open-ocean deep convection and open-sea ice formation and dispersion under the intense air-sea interactions are the major engines, we use a new version of the coupled hydrodynamic-ice ROMS model. The second area belongs to the Antarctica region inside the Southern Ocean, where brine rejection during ice formation inside shelf seas produces dense water that, flowing along the continental slope, overflows, eventually becoming abyssal waters. Results show how integrated-modelling tasks have nowadays become more and more feasible and effective; numerical simulations dealing with large computational domains or challenging different climate scenarios can be run on multi-processor platforms and on systems like LINUX clusters, made of the same hardware as PCs, and
Multi-scale modeling of inter-granular fracture in UO2
Energy Technology Data Exchange (ETDEWEB)
Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tonks, Michael R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, S. Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-03-01
A hierarchical multi-scale approach is pursued in this work to investigate the influence of porosity, pore and grain size on intergranular brittle fracture in UO2. In this approach, molecular dynamics simulations are performed to obtain the fracture properties for different grain boundary types. A phase-field model is then utilized to perform intergranular fracture simulations of representative microstructures with different porosities, pore and grain sizes. In these simulations the grain boundary fracture properties obtained from molecular dynamics simulations are used. The responses from the phase-field fracture simulations are then fitted with a stress-based brittle fracture model usable at the engineering scale. This approach encapsulates three different length and time scales, and allows the development of a microstructurally informed engineering-scale model from properties evaluated at the atomistic scale.
Directory of Open Access Journals (Sweden)
Mark Baah-Acheamfour
Full Text Available There are a number of overarching questions and debates in the scientific community concerning the importance of biotic interactions in species distribution models at large spatial scales. In this paper, we present a framework for revising the potential distribution of tree species native to the Western Ecoregion of Nova Scotia, Canada, by integrating the long-term effects of interspecific competition into an existing abiotic-factor-based definition of potential species distribution (PSD). The PSD model is developed by combining spatially explicit data of individualistic species' responses to normalized incident photosynthetically active radiation, soil water content, and growing degree days. A revised PSD model adds biomass output simulated over a 100-year timeframe with a robust forest gap model and scaled up to the landscape using a forestland classification technique. To demonstrate the method, we applied the calculation to the natural range of 16 target tree species as found in 1,240 provincial forest-inventory plots. The revised PSD model, with the long-term effects of interspecific competition accounted for, predicted that eastern hemlock (Tsuga canadensis), American beech (Fagus grandifolia), white birch (Betula papyrifera), red oak (Quercus rubra), sugar maple (Acer saccharum), and trembling aspen (Populus tremuloides) would experience a significant decline in their original distribution compared with balsam fir (Abies balsamea), black spruce (Picea mariana), red spruce (Picea rubens), red maple (Acer rubrum L.), and yellow birch (Betula alleghaniensis). True model accuracy improved from 64.2% with the original PSD evaluations to 81.7% with the revised PSD. Kappa statistics increased slightly from 0.26 (fair) to 0.41 (moderate) for the original and revised PSDs, respectively.
Optimization of large-scale heterogeneous system-of-systems models.
Energy Technology Data Exchange (ETDEWEB)
Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)
2012-01-01
Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
Mitchell, Matthew G E; Johansen, Kasper; Maron, Martine; McAlpine, Clive A; Wu, Dan; Rhodes, Jonathan R
2018-05-01
Urban areas are sources of land use change and CO2 emissions that contribute to global climate change. Despite this, assessments of urban vegetation carbon stocks often fail to identify important landscape-scale drivers of variation in urban carbon, especially the potential effects of landscape structure variables at different spatial scales. We combined field measurements with Light Detection And Ranging (LiDAR) data to build high-resolution models of woody plant aboveground carbon across the urban portion of Brisbane, Australia, and then identified landscape-scale drivers of these carbon stocks. First, we used LiDAR data to quantify the extent and vertical structure of vegetation across the city at high resolution (5 × 5 m). Next, we paired this data with aboveground carbon measurements at 219 sites to create boosted regression tree models and map aboveground carbon across the city. We then used these maps to determine how spatial variation in land cover/land use and landscape structure affects these carbon stocks. Foliage densities above 5 m height, tree canopy height, and the presence of ground openings had the strongest relationships with aboveground carbon. Using these fine-scale relationships, we estimate that 2.2 ± 0.4 Tg C are stored aboveground in the urban portion of Brisbane, with mean densities of 32.6 ± 5.8 Mg C ha⁻¹ calculated across the entire urban land area, and 110.9 ± 19.7 Mg C ha⁻¹ calculated within treed areas. Predicted carbon densities within treed areas showed strong positive relationships with the proportion of surrounding tree cover and how clumped that tree cover was at both 1 km² and 1 ha resolutions. Our models predict that even dense urban areas with low tree cover can have high carbon densities at fine scales. We conclude that actions and policies aimed at increasing urban carbon should focus on those areas where urban tree cover is most fragmented. Copyright © 2017 Elsevier B.V. All rights reserved.
Truncated conformal space approach to scaling Lee-Yang model
International Nuclear Information System (INIS)
Yurov, V.P.; Zamolodchikov, Al.B.
1989-01-01
A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite-dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory M(2/5) is studied in detail. 9 refs.; 17 figs
Assessing and modelling ecohydrologic processes at the agricultural field scale
Basso, Bruno
2015-04-01
One of the primary goals of agricultural management is to increase the amount of crop produced per unit of fertilizer and water used. World record corn yields demonstrated that water use efficiency can increase fourfold with improved agronomic management and cultivars able to tolerate high densities. Planting crops with higher plant density can lead to significant yield increases, and increase plant transpiration vs. soil water evaporation. Precision agriculture technologies have been adopted for the last twenty years but seldom have the data collected been converted to information that led farmers to different agronomic management. These methods are intuitively appealing, but yield maps and other spatial layers of data need to be properly analyzed and interpreted to truly become valuable. Current agro-mechanic and geospatial technologies allow us to implement a spatially variable plan for agronomic inputs including seeding rate, cultivars, pesticides, herbicides, fertilizers, and water. Crop models are valuable tools to evaluate the impact of management strategies (e.g., cover crops, tile drains, and genetically-improved cultivars) on yield, soil carbon sequestration, leaching and greenhouse gas emissions. They can help farmers identify adaptation strategies to current and future climate conditions. In this paper I illustrate the key role that precision agriculture technologies (yield mapping technologies, within season soil and crop sensing), crop modeling and weather can play in dealing with the impact of climate variability on soil ecohydrologic processes. Case studies are presented to illustrate this concept.
International Nuclear Information System (INIS)
Duraisamy Jothiprakasam, Venkatesh
2014-01-01
The development of wind energy generation requires precise and well-established methods for wind resource assessment, which is the initial step in every wind farm project. During the last two decades linear flow models were widely used in the wind industry for wind resource assessment and micro-siting. But the inaccuracies of linear models in predicting wind speeds in very complex terrain are well known and led to the use of CFD, capable of modeling the complex flow in detail around specific geographic features. Mesoscale models (NWP) are able to predict the wind regime at resolutions of several kilometers, but are not well suited to resolve the wind speed and turbulence induced by topographic features on the scale of a few hundred meters. CFD has proven successful in capturing flow details at smaller scales, but needs an accurate specification of the inlet conditions. Thus coupling NWP and CFD models is a better modeling approach for wind energy applications. A one-year field measurement campaign carried out in complex terrain in southern France during 2007-2008 provides a well-documented data set for both input and validation. The proposed new methodology aims to address two problems: the high spatial variation of the topography on the domain lateral boundaries, and the prediction errors of the mesoscale model. It is applied in this work using the open source CFD code Code-Saturne, coupled with the mesoscale forecast model of Meteo-France (ALADIN). The improvement is obtained by combining the mesoscale data as inlet condition with field measurement data assimilation into the CFD model. The Newtonian relaxation (nudging) data assimilation technique is used to incorporate the measurement data into the CFD simulations. The methodology to reconstruct long-term averages uses a clustering process to group similar meteorological conditions and to reduce the number of CFD simulations needed to reproduce 1 year of atmospheric flow over the site. The assimilation
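The Newtonian relaxation (nudging) step mentioned above has a simple generic form: the model state is relaxed toward observations, where they exist, with a relaxation time scale. A minimal sketch (all names and values are illustrative, not taken from the Code-Saturne/ALADIN setup):

```python
import numpy as np

def nudge(u_model, u_obs, obs_mask, dt, tau):
    """One Newtonian-relaxation (nudging) step: relax the model state toward
    observations where they exist, with relaxation time scale tau.
    All names and values are illustrative, not from the cited CFD setup."""
    u = u_model.copy()
    u[obs_mask] += (dt / tau) * (u_obs[obs_mask] - u[obs_mask])
    return u

# toy vertical wind profile: observations available at two of four heights
u_model = np.array([5.0, 6.0, 7.0, 8.0])
u_obs   = np.array([4.0, 6.0, 6.0, 8.0])
mask    = np.array([True, False, True, False])
u_new = nudge(u_model, u_obs, mask, dt=1.0, tau=2.0)
# u_new -> [4.5, 6.0, 6.5, 8.0]: each observed point moves dt/tau = 0.5
# of the way toward its observation; unobserved points are untouched
```

In a real CFD code the same relaxation term is added to the momentum equations at observation locations rather than applied as a post-step correction.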
Preston, Kathleen Suzanne Johnson; Parral, Skye N.; Gottfried, Allen W.; Oliver, Pamella H.; Gottfried, Adele Eskeles; Ibrahim, Sirena M.; Delany, Danielle
2015-01-01
A psychometric analysis was conducted using the nominal response model under the item response theory framework to construct the Positive Family Relationships scale. Using data from the Fullerton Longitudinal Study, this scale was constructed within a long-term longitudinal framework spanning middle childhood through adolescence. Items tapping…
A psychometric revision of the Asian values scale using the Rasch model
Kim, Bryan S. K.; Hong, Sehee
2004-01-01
The 36-item Asian Values Scale (B. S. K. Kim, D. R. Atkinson, & P. H. Yang, 1999) was revised on the basis of the G. Rasch (1960) model and data from 618 Asian Americans. The results led to the establishment of a 25-item measure named the Asian Values Scale-Revised.
Hong, S; Kim, Bryan S.K.; Wolfe, M M
2005-01-01
The 18-item European American Values Scale for Asian Americans (M. M. Wolfe, P. H. Yang, E. C. Wong, & D. R. Atkinson, 2001) was revised on the basis of results from a psychometric analysis using the Rasch model (G. Rasch, 1960). The results led to the establishment of the 25-item European American Values Scale for Asian Americans-Revised.
The research of selection model based on LOD in multi-scale display of electronic map
Zhang, Jinming; You, Xiong; Liu, Yingzhen
2008-10-01
This paper proposes a selection model based on LOD (level of detail) to aid the display of electronic maps. The ratio of display scale to map scale is regarded as a LOD operator. The categorization rule, classification rule, elementary rule and spatial geometry character rule for setting the LOD operator are also presented.
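Reading the LOD operator as the ratio of display scale to map scale (with each scale written as 1:denominator), feature selection can be sketched as a threshold test. The attribute names and threshold values below are assumptions for illustration, not the paper's actual rules:

```python
def lod_operator(display_denom, map_denom):
    """Ratio of display scale to map scale, each scale written as
    1:denominator. This reading of the operator is an assumption."""
    return map_denom / display_denom

def select_features(features, display_denom, map_denom):
    # keep a feature class once the LOD operator reaches its threshold;
    # the 'lod_threshold' attribute is hypothetical, not from the paper
    r = lod_operator(display_denom, map_denom)
    return [f for f in features if r >= f["lod_threshold"]]

features = [
    {"name": "highway", "lod_threshold": 0.5},
    {"name": "footpath", "lod_threshold": 4.0},
]
# a 1:50,000 source map shown at 1:20,000 -> operator = 2.5:
# highways appear, footpaths wait for a larger zoom
shown = select_features(features, display_denom=20000, map_denom=50000)
```

Zooming in (smaller display denominator) raises the operator, so progressively finer feature classes pass their thresholds and are drawn.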
Item Response Theory Models for Wording Effects in Mixed-Format Scales
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu
2015-01-01
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…
Multi-scale modeling with cellular automata: The complex automata approach
Hoekstra, A.G.; Falcone, J.-L.; Caiazzo, A.; Chopard, B.
2008-01-01
Cellular Automata are commonly used to describe complex natural phenomena. In many cases it is required to capture the multi-scale nature of these phenomena. A single Cellular Automata model may not be able to efficiently simulate a wide range of spatial and temporal scales. It is our goal to
Scaling-up spatially-explicit ecological models using graphics processors
Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis
2011-01-01
How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to
A Structural Equation Modelling of the Academic Self-Concept Scale
Matovu, Musa
2014-01-01
The study aimed at validating the academic self-concept scale by Liu and Wang (2005) in measuring academic self-concept among university students. Structural equation modelling was used to validate the scale which was composed of two subscales; academic confidence and academic effort. The study was conducted on university students; males and…
Multi-scale inference of interaction rules in animal groups using Bayesian model selection.
Directory of Open Access Journals (Sweden)
Richard P Mann
2012-01-01
Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced not just by the current direction of nearby conspecifics, but also by those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.
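Bayesian comparison of candidate models of the kind described above is often approximated with an information criterion. The sketch below contrasts a constant model against a linear one on toy data using the BIC, which approximates −2 log(model evidence); this is a generic illustration of penalized model comparison, not the authors' actual inference procedure:

```python
import numpy as np

def gaussian_loglik(residuals):
    """Maximized Gaussian log-likelihood of a residual vector (MLE variance)."""
    n = len(residuals)
    s2 = np.mean(residuals ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

def bic(loglik, k, n):
    # BIC approximates -2*log(evidence); the lower value marks the
    # preferred model, trading fit quality against parameter count
    return k * np.log(n) - 2.0 * loglik

x = np.arange(10.0)
y = 2.0 * x + np.array([0.1, -0.2, 0.05, 0.0, 0.1, -0.1, 0.2, -0.05, 0.0, 0.1])

# candidate A: constant-mean model (1 parameter); candidate B: linear (2 params)
res_a = y - y.mean()
coef = np.polyfit(x, y, 1)
res_b = y - np.polyval(coef, x)
bic_a = bic(gaussian_loglik(res_a), k=1, n=len(y))
bic_b = bic(gaussian_loglik(res_b), k=2, n=len(y))
# the linear model wins (lower BIC) despite its extra parameter
```

Full Bayesian model selection, as in the paper, instead integrates the likelihood over the prior to obtain each model's evidence; the BIC is the standard large-sample shortcut.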
Scaling for deuteron structure functions in a relativistic light-front model
International Nuclear Information System (INIS)
Polyzou, W.N.; Gloeckle, W.
1996-01-01
Scaling limits of the structure functions [B.D. Keister, Phys. Rev. C 37, 1765 (1988)], W₁ and W₂, are studied in a relativistic model of the two-nucleon system. The relativistic model is defined by a unitary representation, U(Λ,a), of the Poincaré group which acts on the Hilbert space of two spinless nucleons. The representation is in Dirac's [P.A.M. Dirac, Rev. Mod. Phys. 21, 392 (1949)] light-front formulation of relativistic quantum mechanics and is designed to give the experimental deuteron mass and n-p scattering length. A model hadronic current operator that is conserved and covariant with respect to this representation is used to define the structure tensor. This work is the first step in a relativistic extension of the results of Hueber, Gloeckle, and Boemelburg. The nonrelativistic limit of the model is shown to be consistent with the nonrelativistic model of Hueber, Gloeckle, and Boemelburg [D. Hueber et al., Phys. Rev. C 42, 2342 (1990)]. The relativistic and nonrelativistic scaling limits, for both Bjorken and y scaling, are compared. The interpretation of y scaling in the relativistic model is studied critically. The standard interpretation of y scaling requires a soft wave function which is not realized in this model. The scaling limits in both the relativistic and nonrelativistic cases are related to probability distributions associated with the target deuteron. copyright 1996 The American Physical Society
Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.
Dao, Tien Tuan
2017-06-01
Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of complex mechanisms underlying biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of the present work was to develop a new fully integrated computational workflow for simulating bone metabolic processes at multiple scales. The organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. The tissue-level model uses the finite element method to estimate tissue deformation and mechanical loading under body loading conditions. Finally, the cell-level model includes the bone remodeling mechanism through an agent-based simulation under tissue loading. A case study on the bone remodeling process located in the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using literature-based data at each anatomical level. Simulation outcomes fall within the literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during the bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow, leading to a better understanding of musculoskeletal system function across multiple length scales as well as providing new informative data for clinical decision support and industrial applications.
Directory of Open Access Journals (Sweden)
Jensen Paul A
2011-09-01
Full Text Available Abstract Background Several methods have been developed for analyzing genome-scale models of metabolism and transcriptional regulation. Many of these methods, such as Flux Balance Analysis, use constrained optimization to predict relationships between metabolic flux and the genes that encode and regulate enzyme activity. Recently, mixed integer programming has been used to encode these gene-protein-reaction (GPR) relationships into a single optimization problem, but these techniques are often of limited generality and lack a tool for automating the conversion of rules to a coupled regulatory/metabolic model. Results We present TIGER, a Toolbox for Integrating Genome-scale Metabolism, Expression, and Regulation. TIGER converts a series of generalized, Boolean or multilevel rules into a set of mixed integer inequalities. The package also includes implementations of existing algorithms to integrate high-throughput expression data with genome-scale models of metabolism and transcriptional regulation. We demonstrate how TIGER automates the coupling of a genome-scale metabolic model with GPR logic and models of transcriptional regulation, thereby serving as a platform for algorithm development and large-scale metabolic analysis. Additionally, we demonstrate how TIGER's algorithms can be used to identify inconsistencies and improve existing models of transcriptional regulation with examples from the reconstructed transcriptional regulatory network of Saccharomyces cerevisiae. Conclusion The TIGER package provides a consistent platform for algorithm development and extending existing genome-scale metabolic models with regulatory networks and high-throughput data.
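The conversion of Boolean rules into mixed integer inequalities follows a standard pattern: a binary y = AND(x₁, …, xₙ) is enforced by y ≤ xᵢ for each i together with y ≥ Σxᵢ − (n − 1). The brute-force check below verifies that encoding; it is a generic sketch of the idea, not TIGER's actual interface:

```python
from itertools import product

def and_ineqs(n):
    """Feasibility test for the standard MILP encoding of y = AND(x_1..x_n):
    y <= x_i for all i, and y >= sum(x_i) - (n - 1), all variables binary.
    A generic sketch of the Boolean-to-inequality conversion, not TIGER's API."""
    def feasible(y, xs):
        return all(y <= x for x in xs) and y >= sum(xs) - (n - 1)
    return feasible

feasible = and_ineqs(2)
# for each 0/1 assignment of (x1, x2), the inequalities admit exactly one y,
# and that y equals x1 AND x2
results = {xs: [y for y in (0, 1) if feasible(y, xs)]
           for xs in product((0, 1), repeat=2)}
```

An OR rule is encoded analogously (y ≥ xᵢ for each i, y ≤ Σxᵢ), so arbitrary Boolean GPR expressions reduce to a stack of linear constraints over binaries that a MILP solver can handle.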
Multi-scale friction modeling for sheet metal forming: the boundary lubrication regime
Hol, J.D.; Meinders, Vincent T.; de Rooij, Matthias B.; van den Boogaard, Antonius H.
2015-01-01
A physical based friction model is presented to describe friction in full-scale forming simulations. The advanced friction model accounts for the change in surface topography and the evolution of friction in the boundary lubrication regime. The implementation of the friction model in FE software
Gasda, Sarah E.
2012-07-01
Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes in the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.
Regional-Scale Climate Change: Observations and Model Simulations
Energy Technology Data Exchange (ETDEWEB)
Bradley, Raymond S; Diaz, Henry F
2010-12-14
This collaborative proposal addressed key issues in understanding the Earth's climate system, as highlighted by the U.S. Climate Science Program. The research focused on documenting past climatic changes and on assessing future climatic changes based on suites of global and regional climate models. Geographically, our emphasis was on the mountainous regions of the world, with a particular focus on the Neotropics of Central America and the Hawaiian Islands. Mountain regions are zones where large variations in ecosystems occur due to the strong climate zonation forced by the topography. These areas are particularly susceptible to changes in critical ecological thresholds, and we conducted studies of changes in phenological indicators based on various climatic thresholds.
Weighted Distances in Scale-Free Configuration Models
Adriaans, Erwin; Komjáthy, Júlia
2018-01-01
In this paper we study first-passage percolation in the configuration model with empirical degree distribution that follows a power law with exponent τ ∈ (2,3). We assign independent and identically distributed (i.i.d.) weights to the edges of the graph. We investigate the weighted distance (the length of the shortest weighted path) between two uniformly chosen vertices, called the typical distance. When the underlying age-dependent branching process approximating the local neighborhoods of vertices is found to produce infinitely many individuals in finite time (called an explosive branching process), Baroni, Hofstad and the second author showed in Baroni et al. (J Appl Probab 54(1):146-164, 2017) that typical distances converge in distribution to a bounded random variable. The order of magnitude of typical distances remained open for the τ ∈ (2,3) case when the underlying branching process is not explosive. We close this gap by determining the first order of magnitude of typical distances in this regime for arbitrary, not necessarily continuous edge-weight distributions that produce a non-explosive age-dependent branching process with infinite-mean power-law offspring distributions. This sequence tends to infinity with the number of vertices and, by choosing an appropriate weight distribution, can be tuned to be any growing function that is O(log log n), where n is the number of vertices in the graph. We show that the result remains valid for the erased configuration model as well, where we delete loops and any second and further edges between two vertices.
Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge
Park, Heon-Joon; Lee, Changyeol
2017-04-01
Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted extensively. Among the controlling factors, gravitational acceleration (g) acting on the scale models was treated as a constant (Earth's gravity) in most analogue model studies; only a few studies imposed larger gravitational acceleration by using a centrifuge (an apparatus that generates a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down factors and accelerate deformation driven by density differences, such as salt diapirism, the possible model size is usually limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST) accommodates scale models with surface areas up to 70 by 70 cm under a maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
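The basic rule behind centrifuge modelling can be stated as a short sketch: a 1/N-scale model spun at N times Earth's gravity preserves prototype self-weight stresses (σ = ρ g h), and the machine capacity quoted above bounds model mass times g-level. The prototype length and model mass below are illustrative assumptions, not values from the study.

```python
def centrifuge_scaling(prototype_length_m, model_length_m):
    """g-level N needed for stress similarity in a 1/N-scale model:
    sigma = rho * g * h is preserved when g is increased by the factor N."""
    return prototype_length_m / model_length_m

def payload_ok(model_mass_tons, g_level, capacity_g_tons=240.0):
    """Capacity check: model mass x g-level must not exceed 240 g-tons."""
    return model_mass_tons * g_level <= capacity_g_tons

# e.g. a 70 m prototype section reproduced in a 70 cm model box
N = centrifuge_scaling(70.0, 0.70)
print(N, payload_ok(2.0, N))   # a hypothetical 2 t model at N g
```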
Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey
2016-01-01
Future habitat selection studies will benefit from taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...
Gorrick, S.; Rodriguez, J. F.
2011-12-01
A movable bed physical model was designed in a laboratory flume to simulate both bed and suspended load transport in a mildly sinuous sand-bed stream. Model simulations investigated the impact of different vegetation arrangements along the outer bank to evaluate rehabilitation options. Preserving similitude in the 1:16 laboratory model was very important. In this presentation the scaling approach, as well as the successes and challenges of the strategy, are outlined. Firstly, a near-bankfull flow event was chosen for laboratory simulation. In nature, bankfull events at the field site deposit new in-channel features but cause only small amounts of bank erosion, so the fixed banks in the model were not a drastic simplification. Next, and as in other studies, the flow velocity and turbulence measurements were collected in separate fixed bed experiments. The scaling of flow in these experiments was maintained simply by matching the Froude number and roughness levels. The subsequent movable bed experiments were then conducted under similar hydrodynamic conditions. In nature, the sand-bed stream is fairly typical; in high flows most sediment transport occurs in suspension and migrating dunes cover the bed. To achieve similar dynamics in the model, equivalent values of the dimensionless bed shear stress and the particle Reynolds number were important. Close values of the two dimensionless numbers were achieved with lightweight sediments (R=0.3), including coal and apricot pips, with a particle size distribution similar to that of the field site. Overall, the movable bed experiments were able to replicate the dominant sediment dynamics present in the stream during a bankfull flow and yielded relevant information for the analysis of the effects of riparian vegetation. There was a potential conflict in the strategy, in that grain roughness was exaggerated with respect to nature. The advantage of this strategy is that although grain roughness is exaggerated, the similarity of
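The similitude checks described above can be written down explicitly: match the Froude number between model and prototype, and bring the dimensionless bed shear stress (Shields parameter) and particle Reynolds number close to field values by using lightweight sediment (R = 0.3, as for coal). All numeric inputs below are illustrative assumptions, not data from the study.

```python
import math

G = 9.81        # gravitational acceleration, m/s^2
NU = 1.0e-6     # kinematic viscosity of water, m^2/s

def froude(U, h):
    """Fr = U / sqrt(g h); matched between model and prototype."""
    return U / math.sqrt(G * h)

def shields(u_star, R, d):
    """Dimensionless bed shear stress theta = u*^2 / (R g d),
    where R = (rho_s - rho) / rho is the submerged specific gravity."""
    return u_star ** 2 / (R * G * d)

def particle_reynolds(u_star, d):
    """Re* = u* d / nu."""
    return u_star * d / NU

# Froude similarity at 1:16: depths scale by 16, velocities by sqrt(16) = 4
fr_proto = froude(1.2, 2.0)      # hypothetical prototype: 1.2 m/s, 2 m deep
fr_model = froude(0.3, 0.125)    # model: 0.3 m/s, 12.5 cm deep
print(fr_proto, fr_model)

# lightweight model sediment (R = 0.3) with a near-field-scale grain size
print(shields(0.02, 0.3, 0.9e-3), particle_reynolds(0.02, 0.9e-3))
```

Because the Shields parameter divides by R, a sediment with R = 0.3 reaches field-like mobility at the much lower shear velocities available in the flume, which is the rationale for coal and apricot pips.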
Thogmartin, W.E.; Knutson, M.G.
2007-01-01
Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.
Directory of Open Access Journals (Sweden)
Mazda Biglari
2016-06-01
Two modeling approaches, the scaling-law and CFD (Computational Fluid Dynamics) approaches, are presented in this paper. To save on the experimental cost of the pilot plant, the scaling-law approach was adopted as a low-computational-cost method, and a small-scale column operating under ambient temperature and pressure was built. A series of laboratory tests and computer simulations were carried out to evaluate the hydrodynamic characteristics of a pilot fluidized-bed biomass gasifier. Solids were fluidized in the small-scale column, and the pressure and other hydrodynamic properties were monitored to validate the scaling-law application. In addition to the scaling-law modeling method, the CFD approach was used to simulate the gas-particle system in the small column. 2D CFD models were developed to simulate the hydrodynamic regime. The simulation results were valid
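The scaling-law idea above can be sketched with one of its matched dimensionless groups. The abstract does not name the specific scaling set used; a common choice for fluidized beds is Glicksman's simplified set (Froude number u0²/(gD), density ratio, velocity ratio, geometric ratios), of which only the Froude group is shown here. The pilot-plant numbers are illustrative assumptions.

```python
import math

def froude(u0, d, g=9.81):
    """Bed Froude number u0^2 / (g D), one of the matched groups."""
    return u0 ** 2 / (g * d)

def scaled_column(u0_pilot, d_pilot, length_scale):
    """Shrink the bed diameter by `length_scale` and rescale the
    superficial velocity so the Froude number is preserved (u0 ~ sqrt(D))."""
    return u0_pilot * math.sqrt(length_scale), d_pilot * length_scale

# hypothetical pilot gasifier (u0 = 0.8 m/s, D = 1 m) scaled 4:1 down
u0_lab, d_lab = scaled_column(u0_pilot=0.8, d_pilot=1.0, length_scale=0.25)
print(froude(0.8, 1.0), froude(u0_lab, d_lab))   # equal by construction
```

Running the small ambient-condition column at the rescaled velocity is what lets its pressure measurements stand in for the pilot plant's hydrodynamics.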