WorldWideScience

Sample records for model large-scale printhead

  1. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed on the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are reported for tests of the resistance of base materials and welded joints to non-ductile fracture. The rated specimen thickness was 150 mm, with defects 15 to 100 mm deep. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During the cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  2. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  3. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application to large scale models and data bases.

  4. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  5. Modelling large-scale hydrogen infrastructure development

    International Nuclear Information System (INIS)

    De Groot, A.; Smit, R.; Weeda, M.

    2005-08-01

    In modelling a possible H2 infrastructure development, the following questions are answered in this presentation: How could the future demand for H2 develop in the Netherlands? In which year, and where, would it be economically viable to construct a H2 infrastructure in the Netherlands? The conclusions are that: a model for describing a possible future H2 infrastructure was successfully developed; the model is strongly regional and time dependent; a decrease in fuel cell cost appears to be a sensitive parameter for the development of H2 demand; the cost margin between large-scale and small-scale H2 production is a main driver for the development of a H2 infrastructure; and a H2 infrastructure seems economically viable in the Netherlands starting from the year 2022.

  6. Effects of dwell time of excitation waveform on meniscus movements for a tubular piezoelectric print-head: experiments and model

    Science.gov (United States)

    Chang, Jiaqing; Liu, Yaxin; Huang, Bo

    2017-07-01

    In inkjet applications, it is normal to search for an optimal drive waveform when dispensing a fresh fluid or adjusting a newly fabricated print-head. To test trial waveforms with different dwell times, a camera and a strobe light were used to image the protruding or retracting liquid tongues without ejecting any droplets. An edge detection method was used to calculate the lengths of the liquid tongues to draw the meniscus movement curves. The meniscus movement is determined by the time-domain response of the acoustic pressure at the nozzle of the print-head. Starting from the inverse piezoelectric effect, a mathematical model which considers the liquid viscosity in acoustic propagation is constructed to study the acoustic pressure response at the nozzle of the print-head. The liquid viscosity retards the propagation speed and dampens the harmonic amplitude. The pressure response, which is the combined effect of the acoustic pressures generated during the rising time and the falling time and after their propagations and reflections, explains the meniscus movements well. Finally, the optimal dwell time for droplet ejections is discussed.
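
    A classic design rule that emerges from this kind of analysis is that the pulse launched by the falling edge reinforces the one launched by the rising edge when the dwell time is close to the acoustic travel time of the channel. The toy script below illustrates only that superposition idea with two damped, opposite-polarity pulses observed at the nozzle; the channel length, sound speed and damping are invented values, and this is not the viscous acoustic model developed in the paper.

    ```python
    # Toy superposition of the pressure pulses launched by the rising and the
    # falling edge of a trapezoidal drive waveform, sampled at the nozzle.
    # All physical values are assumptions for illustration only.
    import numpy as np

    L = 20e-3          # acoustic path length of the tubular actuator [m] (assumed)
    c = 1200.0         # effective speed of sound in the ink [m/s] (assumed)
    alpha = 2e4        # ad-hoc damping rate [1/s]
    T_travel = L / c   # one-way acoustic travel time

    def pulse(t, t0, sign):
        """Damped pressure disturbance launched at time t0 with the given polarity."""
        tau = t - t0
        return np.where(tau > 0, sign * np.exp(-alpha * tau) * np.sin(np.pi * c * tau / L), 0.0)

    t = np.linspace(0.0, 200e-6, 2001)
    for dwell in (0.5 * T_travel, 1.0 * T_travel, 2.0 * T_travel):
        # rising edge launches an expansion (negative) pulse, falling edge a compression pulse
        p_nozzle = pulse(t, 0.0, -1.0) + pulse(t, dwell, +1.0)
        print(f"dwell = {dwell * 1e6:5.1f} us -> peak |p| = {np.abs(p_nozzle).max():.2f} (a.u.)")
    ```

    With these toy numbers the peak pressure is largest when the dwell time equals T_travel, mirroring the kind of optimum the paper discusses.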

  7. Effects of dwell time of excitation waveform on meniscus movements for a tubular piezoelectric print-head: experiments and model

    International Nuclear Information System (INIS)

    Chang, Jiaqing; Liu, Yaxin; Huang, Bo

    2017-01-01

    In inkjet applications, it is normal to search for an optimal drive waveform when dispensing a fresh fluid or adjusting a newly fabricated print-head. To test trial waveforms with different dwell times, a camera and a strobe light were used to image the protruding or retracting liquid tongues without ejecting any droplets. An edge detection method was used to calculate the lengths of the liquid tongues to draw the meniscus movement curves. The meniscus movement is determined by the time-domain response of the acoustic pressure at the nozzle of the print-head. Starting from the inverse piezoelectric effect, a mathematical model which considers the liquid viscosity in acoustic propagation is constructed to study the acoustic pressure response at the nozzle of the print-head. The liquid viscosity retards the propagation speed and dampens the harmonic amplitude. The pressure response, which is the combined effect of the acoustic pressures generated during the rising time and the falling time and after their propagations and reflections, explains the meniscus movements well. Finally, the optimal dwell time for droplet ejections is discussed. (paper)

  8. Holonic Modelling of Large Scale Geographic Environments

    Science.gov (United States)

    Mekni, Mehdi; Moulin, Bernard

    In this paper, we propose a novel approach to model Virtual Geographic Environments (VGE) which uses the holonic approach as a computational geographic methodology and holarchy as the organizational principle. Our approach allows VGE to be built automatically using data provided by Geographic Information Systems (GIS) and enables an explicit representation of the geographic environment for Situated Multi-Agent Systems (SMAS), in which agents are situated and with which they interact. In order to take into account the geometric, topologic, and semantic characteristics of the geographic environment, we propose the use of the holonic approach to build the environment holarchy. We illustrate our holonic model using two different environments: an urban environment and a natural environment.

  9. Advances in Modelling of Large Scale Coastal Evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.

    1995-01-01

    Attention to climate change impacts on the world's coastlines has established large scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large scale coastal evolution, are discussed.

  10. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of their tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension of the parameter vector. A new design matrix free algorithm is proposed for computing the penalized maximum likelihood estimate for GLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implemented and available via the R package glamlasso. It combines several ideas ...
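
    The phrase "design matrix free" refers to exploiting the Kronecker structure of the GLAM design so that the full tensor-product matrix is never formed. The snippet below demonstrates only that identity, (X2 ⊗ X1) vec(B) = vec(X1 B X2ᵀ), on toy data; it is not the glamlasso algorithm, and the sizes are arbitrary.

    ```python
    # Minimal illustration of the array arithmetic behind "design matrix free"
    # GLAM fitting: the tensor-product design matrix is never materialized.
    import numpy as np

    rng = np.random.default_rng(0)
    X1 = rng.standard_normal((40, 6))    # marginal design matrix, dimension 1
    X2 = rng.standard_normal((30, 5))    # marginal design matrix, dimension 2
    B  = rng.standard_normal((6, 5))     # coefficient array

    # explicit (memory-hungry) linear predictor
    eta_full = np.kron(X2, X1) @ B.ravel(order="F")

    # matrix-free equivalent using the array structure
    eta_glam = (X1 @ B @ X2.T).ravel(order="F")

    print(np.allclose(eta_full, eta_glam))   # True
    ```

    The same identity generalizes to more than two dimensions, which is what keeps memory use proportional to the data array rather than to its Kronecker blow-up.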

  11. Model for large scale circulation of nuclides in nature, 1

    Energy Technology Data Exchange (ETDEWEB)

    Ohnishi, Teruaki

    1988-12-01

    A model for the large scale circulation of nuclides was developed, and a computer code named COCAIN was written to simulate this circulation system dynamically. The natural environment considered in the present paper consists of 2 atmospheres, 8 geospheres and 2 lithospheres. The biosphere is composed of 4 types of edible plants, 5 types of livestock and their products, 4 water biota and 16 human organs. The biosphere is assumed to receive nuclides from the natural environment described above. With the use of COCAIN, two numerical case studies were carried out: one on pollution of the natural environment by radionuclides originating from past nuclear bomb tests, and the other on the response of the environment and biota to a pulse injection of nuclides into one compartment. The former case study verified that the model explains the observations well and properly simulates the large scale circulation of nuclides in nature.
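
    The abstract describes a linear compartment model with a pulse injection into one compartment, which can be sketched as a small ODE system. The compartments, transfer rates and decay constant below are invented for illustration only and are not taken from COCAIN.

    ```python
    # Sketch of a linear compartment ("system dynamics") model of nuclide
    # circulation with a unit pulse injected into one compartment.
    import numpy as np
    from scipy.integrate import solve_ivp

    labels = ["atmosphere", "soil", "surface water", "biota"]
    # K[i, j] = first-order transfer rate from compartment j to compartment i [1/yr] (assumed)
    K = np.array([
        [0.00, 0.02, 0.01, 0.00],
        [0.30, 0.00, 0.05, 0.10],
        [0.10, 0.04, 0.00, 0.05],
        [0.00, 0.01, 0.02, 0.00],
    ])
    lam = 0.024  # radioactive decay constant [1/yr], roughly Cs-137 (assumed)

    def rhs(t, y):
        outflow = K.sum(axis=0) * y          # what each compartment loses
        inflow = K @ y                       # what each compartment gains
        return inflow - outflow - lam * y    # plus radioactive decay everywhere

    y0 = np.array([1.0, 0.0, 0.0, 0.0])      # unit pulse injected into the atmosphere
    sol = solve_ivp(rhs, (0.0, 100.0), y0, dense_output=True)
    for name, amount in zip(labels, sol.y[:, -1]):
        print(f"{name:14s} inventory after 100 yr: {amount:.3e}")
    ```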

  12. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    The paper analyses how investment risk can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice ...

  13. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  14. A Modeling & Simulation Implementation Framework for Large-Scale Simulation

    Directory of Open Access Journals (Sweden)

    Song Xiao

    2012-10-01

    Classical High Level Architecture (HLA) systems are facing development problems for lack of support for fine-grained component integration and interoperation in large-scale complex simulation applications. To address this issue, an extensible, reusable and composable simulation framework is proposed. To promote reusability from coarse-grained federates to fine-grained components, this paper proposes a modelling & simulation framework which consists of a component-based architecture, modelling methods, and simulation services to support and simplify the construction of complex simulation applications. Moreover, a standard process and simulation tools are developed to ensure the rapid and effective development of simulation applications.

  15. Ecohydrological modeling for large-scale environmental impact assessment.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent modelling.
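
    A minimal version of the water-balance screening described above can be expressed in a few lines: compute each basin's long-term runoff coefficient and apparent losses and flag physically implausible combinations. Column names and numbers below are hypothetical; the actual study works with gridded global datasets rather than a three-row table.

    ```python
    # Pre-modelling screening sketch: flag basins with runoff coefficients above 1
    # (more discharge than precipitation) or losses exceeding potential evaporation.
    import pandas as pd

    basins = pd.DataFrame({
        "basin":  ["A", "B", "C"],
        "P_mm":   [650.0, 400.0, 1200.0],   # mean annual precipitation
        "Q_mm":   [210.0, 470.0,  300.0],   # mean annual discharge (as depth)
        "PET_mm": [500.0, 900.0,  700.0],   # mean annual potential evaporation
    })

    basins["runoff_coeff"] = basins["Q_mm"] / basins["P_mm"]
    basins["losses_mm"] = basins["P_mm"] - basins["Q_mm"]

    basins["flag_runoff_gt_1"] = basins["runoff_coeff"] > 1.0              # e.g. snow undercatch in P
    basins["flag_losses_gt_pet"] = basins["losses_mm"] > basins["PET_mm"]  # water balance cannot close

    print(basins[["basin", "runoff_coeff", "flag_runoff_gt_1", "flag_losses_gt_pet"]])
    ```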

  17. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Thus combustion modelling in large-scale geometries is one of the remaining severe accident safety issues. At present no combustion model exists which can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore the major attention in model development has to be paid to the adoption of existing approaches or the creation of new ones capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM) where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of numerical simulation are presented together with comparisons, critical discussions and conclusions. (authors)

  18. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Quite a few studies have been conducted on implementing oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort towards carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing in several respects, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the non-gray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which the gray calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher degree of incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same ...
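
    The gray versus non-gray distinction discussed above rests on the weighted-sum-of-gray-gases (WSGG) representation of gas emissivity. As a reference, the standard textbook form (symbols assumed here, not taken from the paper) is

    \[ \varepsilon(T, p s) \;=\; \sum_{i=0}^{I} a_{\varepsilon,i}(T)\,\bigl[1 - e^{-\kappa_i \, p \, s}\bigr], \qquad \sum_{i=0}^{I} a_{\varepsilon,i}(T) = 1 , \]

    where \kappa_i are the absorption coefficients of the gray gases (\kappa_0 = 0 for the transparent windows), p the partial pressure of the participating species, s the path length, and a_{\varepsilon,i}(T) temperature-dependent weights. A gray calculation collapses this sum into a single effective absorption coefficient for the whole spectrum, whereas a non-gray calculation solves the radiative transfer equation once per gray gas and sums the contributions, which is the difference the abstract reports as decisive.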

  19. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  20. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements from the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated against the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expected variation in the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with the simulation tool, design studies are carried out, ranging from parameter analysis, through energy planning for a new settlement, to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need to develop computer models for more advanced solar collector designs and especially for the control and operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to assess the relevance of the application.

  1. Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling

    Science.gov (United States)

    Huber, I.; Archontoulis, S.

    2017-12-01

    In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has been concentrated on lab to field studies to advance scientific knowledge. Regional scale assessments are highly needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that would gain the most environmentally from biochar application, as well as areas in which our model predicts a notable yield increase due to the addition of biochar. We present the modifications in both the APSIM biochar and pSIMS components that were necessary to facilitate these large scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for creating its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional scale simulation analysis is in progress. Preliminary results showed that the model predicts that high quality soils (particularly those common to Iowa cropping systems) do not receive much, if any, production benefit from biochar. However, soils with low soil organic matter (< 0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific, increasing in some areas and decreasing in others due to biochar application. In contrast, we found increases in soil organic carbon and plant available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%) and also dependent on the biochar application rate.

  2. Symmetry in stochasticity: Random walk models of large-scale ...

    Indian Academy of Sciences (India)

    This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is summarized in R K ...
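
    For a constant barrier of height \delta_c crossed by uncorrelated (sharp-k) random walks, the first-crossing distribution has a closed form; this is the standard excursion-set result often quoted in this context, with notation assumed here rather than taken from the paper:

    \[ f(S)\,\mathrm{d}S \;=\; \frac{\delta_c}{\sqrt{2\pi}\, S^{3/2}} \exp\!\left(-\frac{\delta_c^{2}}{2S}\right) \mathrm{d}S , \]

    where S = \sigma^2(m) is the variance of the linear density field smoothed on mass scale m and plays the role of pseudo-time for the walk; it is this distribution that maps onto the Press-Schechter mass function.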

  3. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought.

  4. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings when making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
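
    The point about synchronized random-number strings corresponds to the common-random-numbers variance-reduction technique. The sketch below, a toy single-server queue rather than a CECOM model, compares the variability of an estimated difference between two configurations when the streams are synchronized versus independent; all parameters are invented.

    ```python
    # Common random numbers: drive two system variants with the same stream and
    # the variance of their estimated difference drops sharply.
    import numpy as np

    def mean_wait(service_rate, seed, n=3000):
        rng = np.random.default_rng(seed)
        interarrival = rng.exponential(1.0, size=n)
        service = rng.exponential(1.0 / service_rate, size=n)
        t_arrival, t_free, total_wait = 0.0, 0.0, 0.0
        for a, s in zip(interarrival, service):
            t_arrival += a
            start = max(t_arrival, t_free)   # FIFO single-server queue
            total_wait += start - t_arrival
            t_free = start + s
        return total_wait / n

    diffs_crn, diffs_ind = [], []
    for rep in range(40):
        # synchronized: both configurations see identical random draws
        diffs_crn.append(mean_wait(1.3, seed=rep) - mean_wait(1.5, seed=rep))
        # independent streams
        diffs_ind.append(mean_wait(1.3, seed=1000 + rep) - mean_wait(1.5, seed=2000 + rep))

    print("std of estimated difference, synchronized:", np.std(diffs_crn))
    print("std of estimated difference, independent :", np.std(diffs_ind))
    ```

    With synchronized streams the two configurations experience the same arrival pattern, so their difference isolates the effect of the service-rate change.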

  5. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    Second generation biorefineries transform agricultural wastes into biochemicals with higher added value, e.g. bioethanol, which is thought to become a primary component in liquid fuels [1]. Extensive endeavors have been conducted to make the production process feasible on a large scale, and recent ... real-time monitoring. The Inbicon biorefinery converts wheat straw into bioethanol utilizing steam, enzymes, and genetically modified yeast. The biomass is first pretreated in a steam pressurized and continuous thermal reactor where lignin is relocated, and hemicellulose partially hydrolyzed such that cellulose becomes more accessible to enzymes. The biorefinery is integrated with a nearby power plant following the Integrated Biomass Utilization System (IBUS) principle for reducing steam costs [4]. During the pretreatment, by-products are also created such as organic acids, furfural, and pseudo-lignin, which act ...

  6. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in the network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is one of the challenging research problems. A real world social network follows the small world phenomenon, which indicates that any two social entities can be reachable in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature. In addition, when large-scale social networks are being considered, these algorithms are observed to take considerably longer time. In this work, with the objective of improving the efficiency of algorithms, a parallel programming framework like Map-Reduce has been considered for uncovering the hidden communities in social networks. The proposed approach has been compared with some standard existing community detection algorithms for both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.

  7. Modeling the spreading of large-scale wildland fires

    Science.gov (United States)

    Mohamed Drissi

    2015-01-01

    The objective of the present study is twofold. First, the latest developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...

  8. Long-Run Properties of Large-Scale Macroeconometric Models

    OpenAIRE

    Kenneth F. Wallis; John D. Whitley

    1987-01-01

    We consider alternative approaches to the evaluation of the long-run properties of dynamic nonlinear macroeconometric models, namely dynamic simulation over an extended database, or the construction and direct solution of the steady-state version of the model. An application to a small model of the UK economy is presented. The model is found to be unstable, but a stable form can be produced by simple alterations to the structure.

  9. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  10. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for the study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high quality results.

  11. Misspecified Poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    ... working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...
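
    One practical way to support conclusions drawn from a possibly misspecified Poisson working model is to compare model-based standard errors with resampling-based ones. The sketch below uses simulated, deliberately overdispersed data and a plain nonparametric bootstrap; it is not the semi-parametric bootstrap proposed in the paper, and all names and values are illustrative.

    ```python
    # Compare model-based and bootstrap standard errors for a Poisson working model
    # fitted to overdispersed count data (so the Poisson variance assumption fails).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 2000
    x = rng.integers(0, 2, size=n).astype(float)        # binary exposure
    frailty = rng.gamma(shape=0.5, scale=2.0, size=n)    # unobserved heterogeneity -> overdispersion
    y = rng.poisson(np.exp(0.2 + 0.5 * x) * frailty)     # counts; the Poisson model is misspecified

    X = sm.add_constant(x)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    se_model = fit.bse[1]                                 # naive, model-based standard error

    # nonparametric bootstrap over subjects
    boot = []
    for _ in range(200):
        idx = rng.integers(0, n, size=n)
        boot.append(sm.GLM(y[idx], X[idx], family=sm.families.Poisson()).fit().params[1])
    se_boot = np.std(boot)

    print(f"exposure estimate: {fit.params[1]:.3f}")
    print(f"model-based SE: {se_model:.3f}   bootstrap SE: {se_boot:.3f}")
    ```

    With overdispersed data the bootstrap standard error is noticeably larger than the model-based one, which is the kind of discrepancy robust standard errors are meant to expose.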

  12. Modelling expected train passenger delays on large scale railway networks

    DEFF Research Database (Denmark)

    Landex, Alex; Nielsen, Otto Anker

    2006-01-01

    Forecasts of regularity for railway systems have traditionally – if at all – been computed for trains, not for passengers. Relatively recently it has become possible to model and evaluate the actual passenger delays by a passenger regularity model for the operation already carried out. First...

  13. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from ...

  14. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    ... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real ... perceptions. It is the commonly adopted assumption that the distributed elements follow unbounded distributions which induces the need to enumerate all paths in the SUE, no matter how unattractive they might be. The Deterministic User Equilibrium (DUE), on the other hand, has a built-in criterion distinguishing definitely unused ... non-universal choice sets and (ii) flow distribution according to random utility maximisation theory. One model allows distinction between used and unused routes based on the distribution of the random error terms, while the other model allows this distinction by posing restrictions on the costs ...

  15. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator driven systems for nuclear waste incineration. The efficiency and safety of such future facilities is dependent on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods, which is the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, the neglect of this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows to explicitly estimate the magnitude of model deficiency. Both features are missing in available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the
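
    The treatment of model deficiency sketched above can be summarized in one line: the evaluated observable is written as the nuclear-model prediction plus a random defect with a Gaussian-process prior (notation assumed here, not taken from the thesis),

    \[ \sigma(E) \;=\; m(E;\boldsymbol{\theta}) + \delta(E), \qquad \delta \sim \mathcal{GP}\bigl(0,\, k(E,E')\bigr), \]

    so that the Bayesian update constrains the model parameters \boldsymbol{\theta} and the magnitude of \delta(E) simultaneously, allowing the evaluation to stay consistent with experimental data even where the model itself is deficient.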

  16. Geometric algorithms for electromagnetic modeling of large scale structures

    Science.gov (United States)

    Pingenot, James

    With the rapid increase in the speed and complexity of integrated circuit designs, 3D full wave and time domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multi-pole Methods (MLFMM), and low-rank matrix compression techniques. These advances have been augmented with an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. First, circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and ...

  17. Simulation of large-scale rule-based models

    Energy Technology Data Exchange (ETDEWEB)

    Hlavacek, William S [Los Alamos National Laboratory; Monnie, Michael I [Los Alamos National Laboratory; Colvin, Joshua [NON LANL; Faseder, James [NON LANL

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogenous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
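
    The null-event idea can be illustrated without any of DYNSTOC's machinery: draw a random set of molecules each time step, consult the rule table, and advance time whether or not a rule fires. The two-rule system below is invented, and this toy is neither the BioNetGen language nor the DYNSTOC algorithm.

    ```python
    # Toy null-event (rejection) stochastic step: a candidate pair is drawn at random;
    # if no rule applies, the step is a "null event" and only time advances.
    import random
    from collections import Counter

    rules = {
        # (species of molecule 1, species of molecule 2) -> products
        ("R", "L"): ("RL",),          # free receptor binds ligand (two molecules fuse)
        ("RL", "K"): ("RL", "K*"),    # bound receptor activates a kinase
    }

    molecules = ["R"] * 50 + ["L"] * 50 + ["K"] * 30
    t, dt = 0.0, 1e-3
    random.seed(0)

    for _ in range(20000):
        i, j = random.sample(range(len(molecules)), 2)
        pair = (molecules[i], molecules[j])
        if pair not in rules:                       # try the opposite orientation
            i, j = j, i
            pair = (molecules[i], molecules[j])
        if pair in rules:
            products = rules[pair]
            molecules[i] = products[0]
            if len(products) == 2:
                molecules[j] = products[1]
            else:                                   # binding rule: two reactants become one complex
                molecules.pop(j)
        # if no rule matched, this was a null event and nothing changes
        t += dt                                     # time advances either way

    print(Counter(molecules), "simulated time:", round(t, 3))
    ```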

  18. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    The human brain constitutes an impressive network formed by the structural and functional connectivity patterns between billions of neurons. Modern functional and diffusion magnetic resonance imaging (fMRI and dMRI) provides unprecedented opportunities for exploring the functional and structural organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups ..., which allows us to couple and explore different models and sampling procedures in runtime, still being applied to full-sized data. Using the implemented tools, we demonstrate that the models successfully can be applied for clustering whole-brain connectivity networks. Without being informed of spatial ...

  19. Modeling and simulation of large scale stirred tank

    Science.gov (United States)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process through the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top region and bottom region of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham Plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion to internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the ...
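
    The "Bingham Plastic" flow behaviour mentioned above is commonly described by the standard constitutive relation (a reference form with assumed symbols, not values taken from the dissertation):

    \[ \dot{\gamma} = 0 \;\; \text{for } \tau \le \tau_0, \qquad \tau = \tau_0 + \mu_p \, \dot{\gamma} \;\; \text{for } \tau > \tau_0 , \]

    where \tau_0 is the yield stress and \mu_p the plastic viscosity; CFD codes typically implement this through an apparent viscosity \mu_\mathrm{app} = \mu_p + \tau_0 / \dot{\gamma}, capped at a large value as the shear rate approaches zero.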

  20. Modelling large scale human activity in San Francisco

    Science.gov (United States)

    Gonzalez, Marta

    2010-03-01

    Diverse groups of people with a wide variety of schedules, activities and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behavior in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys of travel journeys. The information obtained is based on questionnaires that are usually costly to implement, have intrinsic limitations in covering large numbers of individuals, and suffer from reliability problems. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally we use a complementary data set from smart subway fare cards, offering information about the exact time each passenger enters or exits a subway station and the coordinates of that station. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns in each visited location in further detail. Integrating the two data sets, we provide a dynamical model of human travel that incorporates the different aspects observed empirically.

  1. Energy-aware semantic modeling in large scale infrastructures

    NARCIS (Netherlands)

    Zhu, H.; van der Veldt, K.; Grosso, P.; Zhao, Z.; Liao, X.; de Laat, C.

    2012-01-01

    Including the energy profile of the computing infrastructure in the decision process for scheduling computing tasks and allocating resources is essential to improve the system energy efficiency. However, the lack of an effective model of the infrastructure energy information makes it difficult for

  2. Symmetry-guided large-scale shell-model theory

    Czech Academy of Sciences Publication Activity Database

    Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.

    2016-01-01

    Vol. 89, Jul (2016), pp. 101-136, ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016

  3. Parameterization of Fire Injection Height in Large Scale Transport Model

    Science.gov (United States)

    Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.

    2012-12-01

    The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height to remote sensing products like the Fire Radiative Power (FRP), which measures active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed fraction of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extent over which the CHF is homogeneously distributed. In our approach the Freitas model is modified; in particular we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. The FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulated annealing method) of this new version of the PRM is then proposed, based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire regions (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e. avoid large fire clusters where individual plumes might interact), and (ii) keep fires which show a decrease of FRP and AF area after the MISR overpass (i.e. to minimize the effect of the time period needed for the plume to ...

  4. GIS for large-scale watershed observational data model

    Science.gov (United States)

    Patino-Gomez, Carlos

    Because integrated management of a river basin requires the development of models that are used for many purposes, e.g., to assess risks and possible mitigation of droughts and floods, manage water rights, assess water quality, and simply to understand the hydrology of the basin, the development of a relational database from which models can access the various data needed to describe the systems being modeled is fundamental. In order for this concept to be useful and widely applicable, however, it must have a standard design. The recently developed ArcHydro data model facilitates the organization of data according to the "basin" principle and allows access to hydrologic information by models. The development of a basin-scale relational database for the Rio Grande/Bravo basin implemented in a Geographic Information System is one of the contributions of this research. This geodatabase represents the first major attempt to establish a more complete understanding of the basin as a whole, including spatial and temporal information obtained from the United States of America and Mexico. Difficulties in processing raster datasets over large regions are studied in this research. One of the most important contributions is the application of a Raster-Network Regionalization technique, which utilizes raster-based analysis at the subregional scale in an efficient manner and combines the resulting subregional vector datasets into a regional database. Another important contribution of this research is focused on implementing a robust structure for handling huge temporal data sets related to monitoring points such as hydrometric and climatic stations, reservoir inlets and outlets, water rights, etc. For the Rio Grande study area, the ArcHydro format is applied to the historical information collected in order to include and relate these time series to the monitoring points in the geodatabase. Its standard time series format is changed to include a relationship to the agency from

  5. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  6. Soil carbon management in large-scale Earth system modelling

    DEFF Research Database (Denmark)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.

    2015-01-01

    Croplands are vital ecosystems for human well-being and provide important ecosystem services such as crop yields, retention of nitrogen and carbon storage. On large (regional to global) scales, assessment of how these different services will vary in space and time, especially in response ..., carbon sequestration and nitrogen leaching from croplands are evaluated and discussed. Compared to the version of LPJ-GUESS that does not include land-use dynamics, estimates of soil carbon stocks and nitrogen leaching from terrestrial to aquatic ecosystems were improved. Our model experiments allow us ... modelling C–N interactions in agricultural ecosystems under future environmental change and the effects these have on terrestrial biogeochemical cycles ...

  7. Multistability in Large Scale Models of Brain Activity.

    Directory of Open Access Journals (Sweden)

    Mathieu Golos

    2015-12-01

    Noise-driven exploration of a brain network's dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network's capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain's dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system's attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a larger number of attractors than previously reported. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially segregated components, with a centro-medial core and several well-delineated regional patches. These different modes share similarity with the fMRI independent components observed in the "resting state" condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors.
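
    The attractor-scanning procedure described above can be illustrated with a minimal sketch: deterministic graded-response Hopfield dynamics are run to a fixed point from uniformly sampled initial conditions, and distinct attractors are collected. The coupling matrix, gain and tolerances below are illustrative stand-ins, not the connectome-based values used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 66                                   # number of brain regions (illustrative)
W = rng.standard_normal((n, n))          # stand-in for a connectome-derived coupling matrix
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

def run_to_fixed_point(x0, beta=5.0, dt=0.1, steps=2000, tol=1e-9):
    """Deterministic graded-response Hopfield dynamics dx/dt = -x + tanh(beta * W @ x)."""
    x = x0.copy()
    for _ in range(steps):
        dx = -x + np.tanh(beta * (W @ x))
        x = x + dt * dx
        if np.max(np.abs(dx)) < tol:
            break
    return x

# Uniformly sample initial conditions and collect distinct fixed-point attractors.
attractors = []
for _ in range(200):
    x_star = run_to_fixed_point(rng.uniform(-1, 1, n))
    if not any(np.allclose(x_star, a, atol=1e-3) for a in attractors):
        attractors.append(x_star)
print("distinct attractors found:", len(attractors))
```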

  8. Improving large-scale groundwater models by considering fossil gradients

    Science.gov (United States)

    Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph

    2017-05-01

    Due to limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes compared to those which are usually applied in renewable systems. Fossil groundwater is a finite resource and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of them show notable hydraulic gradients and a flow towards their discharge areas, even without pumping. As a result, these systems have more discharge than recharge and hence are not in steady state, which makes their modelling, in particular the calibration, very challenging. In this study, we introduce a new calibration approach, composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluated the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.

  9. Nonlinear Synapses for Large-Scale Models: An Efficient Representation Enables Complex Synapse Dynamics Modeling in Large-Scale Simulations

    Directory of Open Access Journals (Sweden)

    Eric eHu

    2015-09-01

    Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
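
    To make the Volterra-series idea concrete, the sketch below evaluates a discrete second-order Volterra model of an input-output mapping. The kernels here are random placeholders rather than kernels identified from a mechanistic synapse model, and the third-order term used in the study is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 20                              # kernel memory length (samples)
k0 = 0.0
k1 = rng.standard_normal(M) * 0.1   # first-order kernel (illustrative values)
k2 = rng.standard_normal((M, M)) * 0.01
k2 = (k2 + k2.T) / 2                # second-order kernel, symmetric by convention

def volterra_output(x, k0, k1, k2):
    """Discrete second-order Volterra series: y[n] = k0 + sum_i k1[i] x[n-i] + sum_ij k2[i,j] x[n-i] x[n-j]."""
    y = np.full(len(x), k0, dtype=float)
    for n in range(len(x)):
        past = x[max(0, n - M + 1):n + 1][::-1]        # x[n], x[n-1], ...
        past = np.pad(past, (0, M - len(past)))        # zero-pad history at the start of the record
        y[n] += k1 @ past + past @ k2 @ past
    return y

spikes = (rng.random(500) < 0.05).astype(float)        # sparse presynaptic spike train
print(volterra_output(spikes, k0, k1, k2)[:10])
```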

  10. Highly efficient model updating for structural condition assessment of large-scale bridges.

    Science.gov (United States)

    2015-02-01

    For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
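
    A minimal sketch of an RBF response surface is given below: the expensive structural model is sampled, a Gaussian-RBF interpolant is fitted to the samples, and the cheap surrogate then stands in for the full model inside the updating loop. The stand-in model, kernel width and sample sizes are illustrative assumptions, not the settings of the cited study.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(theta):
    """Stand-in for a costly structural analysis returning, e.g., a natural frequency."""
    return np.sin(theta[0]) + 0.5 * theta[1]**2

# Sample the parameter space and evaluate the full model once per sample.
X = rng.uniform(-2, 2, size=(40, 2))
y = np.array([expensive_model(x) for x in X])

# Fit a Gaussian RBF response surface y(theta) ~ sum_j w_j exp(-||theta - x_j||^2 / (2 s^2)).
s = 1.0
def rbf_matrix(A, B):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

w = np.linalg.solve(rbf_matrix(X, X) + 1e-8 * np.eye(len(X)), y)

# The surrogate is then cheap to evaluate inside the updating/optimization loop.
theta_new = np.array([0.3, -1.0])
y_surrogate = rbf_matrix(theta_new[None, :], X) @ w
print(float(y_surrogate), expensive_model(theta_new))
```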

  11. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  12. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an

  13. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements...... improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet....

  14. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    time, especially with respect to large-scale transport models. The study described in this paper contributes to fill the gap by investigating the effects of uncertainty in socio-economic variables growth rate projections on large-scale transport model forecasts, using the Danish National Transport......A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems transport models have an inherent uncertainty which increases over time. As a consequence, the longer...... the period forecasted the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature only few studies analyze uncertainty propagation patterns over...
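
    A toy illustration of propagating socio-economic input uncertainty through a demand relationship by Monte Carlo is sketched below; the growth-rate distribution, elasticity and base demand are invented for illustration and bear no relation to the Danish National Transport Model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_draws, horizon = 10000, 20

# Hypothetical annual GDP growth-rate projection with uncertainty (mean 1.5%, sd 1%).
growth = rng.normal(0.015, 0.01, size=(n_draws, horizon))
gdp_index = np.prod(1.0 + growth, axis=1)          # compounded growth over the forecast horizon

# Toy demand model: transport demand responds to GDP with a constant elasticity.
elasticity = 0.8
base_demand = 100.0                                 # arbitrary units in the base year
demand = base_demand * gdp_index**elasticity

print("mean forecast:", demand.mean())
print("5th-95th percentile spread:", np.percentile(demand, [5, 95]))
```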

  15. Analytical model of the statistical properties of contrast of large-scale ionospheric inhomogeneities.

    Science.gov (United States)

    Vsekhsvyatskaya, I. S.; Evstratova, E. A.; Kalinin, Yu. K.; Romanchuk, A. A.

    1989-08-01

    A new analytical model is proposed for the distribution of variations of the relative electron-density contrast of large-scale ionospheric inhomogeneities. The model is characterized by other-than-zero skewness and kurtosis. It is shown that the model is applicable in the interval of horizontal dimensions of inhomogeneities from hundreds to thousands of kilometers.

  16. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  17. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    Science.gov (United States)

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  18. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation to approach European hydrology with respect to observed patterns on large scales and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also make it possible to detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi model ensemble of nine large-scale

  19. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales
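
    For orientation, the hierarchical (skewness) amplitude referred to above can be estimated directly from a density field. The sketch below smooths a toy non-Gaussian field at several scales and computes S_3 = ⟨δ³⟩/⟨δ²⟩²; the field, box size and smoothing scales are synthetic stand-ins, not survey data or the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
N, box = 64, 100.0                          # grid cells per side, box size (arbitrary units)

# Toy non-Gaussian density contrast field delta(x); a real analysis would use a galaxy catalogue.
g = rng.standard_normal((N, N, N))
delta = g + 0.3 * (g**2 - 1.0)              # weakly non-Gaussian by construction

def smooth(field, R):
    """Gaussian smoothing at scale R via FFT."""
    k = 2 * np.pi * np.fft.fftfreq(N, d=box / N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    window = np.exp(-0.5 * (kx**2 + ky**2 + kz**2) * R**2)
    return np.real(np.fft.ifftn(np.fft.fftn(field) * window))

for R in (5.0, 10.0, 20.0):
    d = smooth(delta, R)
    S3 = np.mean(d**3) / np.mean(d**2)**2   # skewness (hierarchical) amplitude S_3
    print(f"R = {R:5.1f}: S_3 = {S3:.2f}")
```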

  20. Analysis and Modelling of Pedestrian Movement Dynamics at Large-scale Events

    NARCIS (Netherlands)

    Duives, D.C.

    2016-01-01

    To what extent can we model the movements of pedestrians who walk across a large-scale event terrain? This dissertation answers this question by analysing the operational movement dynamics of pedestrians in crowds at several large music and sport events in the Netherlands and extracting the key

  1. Formation and disruption of tonotopy in a large-scale model of the auditory cortex

    Czech Academy of Sciences Publication Activity Database

    Tomková, M.; Tomek, J.; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, C.

    2015-01-01

    Roč. 39, č. 2 (2015), s. 131-153 ISSN 0929-5313 R&D Projects: GA ČR(CZ) GAP303/12/1347 Institutional support: RVO:68378041 Keywords : auditory cortex * large-scale model * spiking neuron * oscillation * STDP * tonotopy Subject RIV: FH - Neurology Impact factor: 1.871, year: 2015

  2. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  3. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling......Grate furnaces are currently a main workhorse in large-scale firing of biomass for heat and power production. A biomass grate fired furnace can be interpreted as a cross-flow reactor, where biomass is fed in a thick layer perpendicular to the primary air flow. The bottom of the biomass bed...... is exposed to preheated inlet air while the top of the bed resides within the furnace. Mathematical modeling is an efficient way to understand and improve the operation and design of combustion systems. Compared to modeling of pulverized fuel furnaces, CFD modeling of biomass-fired grate furnaces...

  4. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  5. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)

  6. A testing facility for large scale models at 100 bar and 300 °C to 1000 °C

    International Nuclear Information System (INIS)

    Zemann, H.

    1978-07-01

    A testing facility for large scale model tests is under construction with the support of Austrian industry. It will contain a Prestressed Concrete Pressure Vessel (PCPV) with hot liner (300 °C at 100 bar), an electrical heating system (1.2 MW, 1000 °C), a gas supply system, and a cooling system for the testing space. The components themselves are models for advanced high temperature applications. The first main component which was tested successfully was the PCPV. Basic investigations of the building materials, improvements of concrete gauges, and large scale model tests with measurements within the structural concrete and on the liner have been made from the beginning of construction, through the periods of prestressing and stabilization, to the final pressurizing tests. On the basis of these investigations a computer controlled safety surveillance system for long term high pressure, high temperature tests has been developed. (author)

  7. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions
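
    The filtering step described above, applying a scale-dependent linear bias to a density field in Fourier space to obtain a spatially varying reionization-redshift field, is sketched below. The bias form b(k) = b0/(1 + k/k0)^α, its parameter values, the input field and the mean redshift are illustrative assumptions, not the fitted values of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
N, L = 128, 2000.0                          # grid size and box length (Mpc/h), illustrative

# Large-scale density contrast field (here just Gaussian noise as a stand-in).
delta = rng.standard_normal((N, N, N))

# Illustrative scale-dependent bias b(k) = b0 / (1 + k/k0)**alpha with assumed parameter values.
b0, k0, alpha = 0.6, 0.2, 2.0

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)
bias = b0 / (1.0 + kmag / k0)**alpha

# Filter the density field to obtain the fluctuation in reionization redshift,
# then shift by a chosen mean redshift of reionization.
delta_z = np.real(np.fft.ifftn(bias * np.fft.fftn(delta)))
z_mean = 8.0
z_re = z_mean + delta_z * (1.0 + z_mean)
print(z_re.mean(), z_re.std())
```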

  8. A Regression Algorithm for Model Reduction of Large-Scale Multi-Dimensional Problems

    Science.gov (United States)

    Rasekh, Ehsan

    2011-11-01

    Model reduction is an approach for fast and cost-efficient modelling of large-scale systems governed by Ordinary Differential Equations (ODEs). Multi-dimensional model reduction has been suggested for reduction of the linear systems simultaneously with respect to frequency and any other parameter of interest. Multi-dimensional model reduction is also used to reduce the weakly nonlinear systems based on Volterra theory. Multiple dimensions degrade the efficiency of reduction by increasing the size of the projection matrix. In this paper a new methodology is proposed to efficiently build the reduced model based on regression analysis. A numerical example confirms the validity of the proposed regression algorithm for model reduction.

  9. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    Energy Technology Data Exchange (ETDEWEB)

    RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; BOLLEN, JOHAN [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. They present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  10. Dynamic model of frequency control in Danish power system with large scale integration of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2013-01-01

    This work evaluates the impact of large scale integration of wind power in future power systems when 50% of load demand can be met from wind power. The focus is on active power balance control, where the main source of power imbalance is an inaccurate wind speed forecast. In this study, a Danish...... power system model with large scale of wind power is developed and a case study for an inaccurate wind power forecast is investigated. The goal of this work is to develop an adequate power system model that depicts relevant dynamic features of the power plants and compensates for load generation...... imbalances, caused by inaccurate wind speed forecast, by an appropriate control of the active power production from power plants....

  11. Model Predictive Control for Flexible Power Consumption of Large-Scale Refrigeration Systems

    DEFF Research Database (Denmark)

    Shafiei, Seyed Ehsan; Stoustrup, Jakob; Rasmussen, Henrik

    2014-01-01

    A model predictive control (MPC) scheme is introduced to directly control the electrical power consumption of large-scale refrigeration systems. Deviation from the consumption baseline corresponds to the storing and delivering of thermal energy. By virtue of such correspondence......, the control method can be employed for regulating power services in the smart grid. The proposed scheme contains the control of cooling capacity as well as optimizing the efficiency factor of the system, which is in general a nonconvex optimization problem. By introducing a fictitious manipulated variable......, and novel incorporation of the evaporation temperature set-point into the optimization problem, the convex optimization problem is formulated within the MPC scheme. The method is applied to a simulation benchmark of large-scale refrigeration systems including several medium and low temperature cold reservoirs....
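
    A heavily simplified sketch of direct power-consumption MPC is given below: a single-state cold-room model, a requested consumption profile, and a convex tracking objective solved over the horizon with cvxpy. The model coefficients, limits and objective are illustrative assumptions and not the convexification or benchmark described in the paper.

```python
import numpy as np
import cvxpy as cp

# Toy one-state cold-room model: T[k+1] = a*T[k] - b*P[k] + c, with power P as the input.
a, b, c = 0.98, 0.05, 0.5
H = 24                                   # prediction horizon (hours)
T0 = 2.0                                 # initial room temperature (deg C)
P_ref = 8.0 + 4.0 * np.sin(np.linspace(0, 2 * np.pi, H))   # requested power profile from the grid

P = cp.Variable(H)
T = cp.Variable(H + 1)
constraints = [T[0] == T0, P >= 0, P <= 20, T >= 0, T <= 5]
for k in range(H):
    constraints.append(T[k + 1] == a * T[k] - b * P[k] + c)

# Track the requested consumption while penalizing deviation from a nominal temperature.
objective = cp.Minimize(cp.sum_squares(P - P_ref) + 0.1 * cp.sum_squares(T - 2.0))
cp.Problem(objective, constraints).solve()
print(np.round(P.value, 2))
```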

  12. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

    BEER (Building energy efficiency retrofit) projects are initiated in many nations and regions over the world. Existing studies of BEER focus on modeling and planning based on one building and one year period of retrofitting, which cannot be applied to certain large BEER projects with multiple buildings and multi-year retrofit. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits essential requirements of real-world projects. The large-scale BEER is newly studied in the control approach rather than the optimization approach commonly used before. Optimal control is proposed to design optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy is dynamically changing on dimensions of time, building and technology. The TBT framework and the optimal control approach are verified in a large BEER project, and results indicate that promising performance of energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.

  13. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
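
    As a small worked example of the averaging step, the sketch below enumerates candidate linear regression submodels, approximates each model's posterior probability from its BIC under a uniform model prior, and forms a model-averaged prediction. The data, submodel space and BIC-based approximation are illustrative choices, not the procedures applied to the assessments discussed in the article.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n, p = 200, 4
X = rng.standard_normal((n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 1, n)   # only x1 and x3 matter

def fit_ols(Xs, y):
    """Ordinary least squares fit plus the model's BIC."""
    A = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta)**2)
    bic = len(y) * np.log(rss / len(y)) + A.shape[1] * np.log(len(y))
    return beta, bic

# Enumerate candidate submodels (all non-empty predictor subsets).
models = [c for r in range(1, p + 1) for c in combinations(range(p), r)]
fits = [(cols, *fit_ols(X[:, cols], y)) for cols in models]

# Approximate posterior model probabilities from BIC (uniform model prior).
bics = np.array([f[2] for f in fits])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()

# Model-averaged prediction for a new observation.
x_new = np.array([0.5, -1.0, 0.2, 1.5])
preds = np.array([beta[0] + x_new[list(cols)] @ beta[1:] for cols, beta, _ in fits])
print("BMA prediction:", float(weights @ preds))
```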

  14. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    Science.gov (United States)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions by running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that the forecast spatial wind speed distribution is verified against analyses by calculating bias adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, verification of large scale features, we suggest, can be done in a manner that may be more physically meaningful than verifications via spectral decomposition, which are a standard RCM verification method. The results we have at this point are somewhat
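
    For reference, the threat-score-style verification described above reduces to a contingency table over grid points exceeding a wind-speed threshold. The sketch below computes a plain (not bias-adjusted) equitable threat score and a bias score on synthetic forecast and analysis fields; the fields and thresholds are made up for illustration.

```python
import numpy as np

def ets_and_bias(forecast, analysis, threshold):
    """Equitable threat score and bias score for exceedance of a wind-speed threshold."""
    f = forecast >= threshold
    o = analysis >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    total = f.size
    # Hits expected by chance for a random forecast with the same marginal frequencies.
    hits_random = (hits + false_alarms) * (hits + misses) / total
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return ets, bias

rng = np.random.default_rng(8)
analysis = rng.gamma(shape=2.0, scale=10.0, size=(181, 360))      # synthetic wind-speed analysis
forecast = analysis + rng.normal(0, 5, analysis.shape)            # synthetic forecast with error
for thr in (20.0, 30.0, 40.0):
    ets, bias = ets_and_bias(forecast, analysis, thr)
    print(f"threshold {thr:4.0f} m/s: ETS = {ets:.3f}, bias = {bias:.2f}")
```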

  15. Large-scale multi-configuration electromagnetic induction: a promising tool to improve hydrological models

    Science.gov (United States)

    von Hebel, Christian; Rudolph, Sebastian; Mester, Achim; Huisman, Johan A.; Montzka, Carsten; Weihermüller, Lutz; Vereecken, Harry; van der Kruk, Jan

    2015-04-01

    Large-scale multi-configuration electromagnetic induction (EMI) measurements use different coil configurations, i.e., coil offsets and coil orientations, to sense coil-specific depth volumes. The obtained apparent electrical conductivity (ECa) maps can be related to some soil properties such as clay content, soil water content, and pore water conductivity, which are important characteristics that influence hydrological processes. Here, we use large-scale EMI measurements to investigate changes in soil texture that drive the available water supply, causing crop development patterns that were observed in leaf area index (LAI) maps obtained from RapidEye satellite images taken after a drought period. The 20 ha test site is situated within the Ellebach catchment (Germany) and consists of a sand-and-gravel dominated upper terrace (UT) and a loamy lower terrace (LT). The large-scale multi-configuration EMI measurements were calibrated using electrical resistivity tomography (ERT) measurements at selected transects and soil samples were taken at representative locations where changes in the electrical conductivity were observed and therefore changing soil properties were expected. By analyzing all the data, the observed LAI patterns could be attributed to buried paleo-river channel systems that contained a higher silt and clay content and provided a higher water holding capacity than the surrounding coarser material. Moreover, the measured EMI data showed the highest correlation with LAI for the deepest sensing coil offset (up to 1.9 m), which indicates that the deeper subsoil is responsible for root water uptake especially under drought conditions. To obtain a layered subsurface electrical conductivity model that shows the subsurface structures more clearly, a novel EMI inversion scheme was applied to the field data. The obtained electrical conductivity distributions were validated with soil probes and ERT transects that confirmed the inverted lateral and vertical large-scale electrical

  16. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimizing conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven. Thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened from 22% to 35%, indicating that the bilevel planning model is more effective than the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for the traffic-induced evacuation decision making of large-scale activities.

  17. A cooperative strategy for parameter estimation in large scale systems biology models.

    Science.gov (United States)

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and

  18. A cooperative strategy for parameter estimation in large scale systems biology models

    Directory of Open Access Journals (Sweden)

    Villaverde Alejandro F

    2012-06-01

    Background: Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results: A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions: The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here
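
    A drastically simplified sketch of the cooperation idea is given below, under stated assumptions; it is not eSS itself. Several independent search populations improve locally and periodically exchange the best solution found so far. The cost function, move rule and exchange period are placeholders.

```python
import numpy as np

rng = np.random.default_rng(11)

def cost(theta):
    """Stand-in for a model-calibration cost (e.g. a sum of squared residuals)."""
    return np.sum((theta - np.arange(len(theta)))**2)

dim, n_threads, pop_size = 10, 4, 20
low, high = -10.0, 10.0

# Each "thread" keeps its own population; here they run sequentially for simplicity.
pops = [rng.uniform(low, high, (pop_size, dim)) for _ in range(n_threads)]

for cycle in range(50):
    for pop in pops:
        # Crude local improvement step standing in for the enhanced scatter search moves.
        best = pop[np.argmin([cost(x) for x in pop])]
        pop += 0.5 * (best - pop) + rng.normal(0, 0.1, pop.shape)
    if cycle % 10 == 9:
        # Cooperation: share the overall best solution with every population.
        global_best = min((x for pop in pops for x in pop), key=cost)
        for pop in pops:
            pop[rng.integers(pop_size)] = global_best

print("best cost found:", min(cost(x) for pop in pops for x in pop))
```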

  19. A Novel Iterative and Dynamic Trust Computing Model for Large Scaled P2P Networks

    Directory of Open Access Journals (Sweden)

    Zhenhua Tan

    2016-01-01

    Trust management has been emerging as an essential complement to the security mechanisms of P2P systems, and trustworthiness is one of the most important concepts driving decision making and establishing reliable relationships. Collusion attacks are a main challenge to distributed P2P trust models. Large-scale P2P systems have typical features, such as large amounts of data arriving at high speed, and this paper presents an iterative and dynamic trust computation model named IDTrust (Iterative and Dynamic Trust model) designed around these properties. First, a three-layered distributed trust communication architecture is presented in IDTrust so as to separate evidence collection and trust decisions from the P2P service. Then an iterative and dynamic trust computation method is presented to improve efficiency, where only the latest evidence is enrolled during one iterative computation. On this basis, a direct trust model, an indirect trust model, and a global trust model are presented with both explicit and implicit evidence. Multiple factors are considered in the IDTrust model according to different malicious behaviors, such as similarity, successful transaction rate, and time decay factors. Simulations and analysis demonstrate the correctness and efficiency of IDTrust against attacks, with quick response and high sensitivity during trust decisions.

  20. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Background: In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods: We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results: The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age

  1. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models.

    Science.gov (United States)

    Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro

    2010-06-29

    In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows that similar attack rates are
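
    To give a flavour of the metapopulation approach compared here, the sketch below runs a toy stochastic SIR model over a handful of coupled subpopulations with an invented mobility matrix. All parameters and the coupling form are illustrative assumptions and are unrelated to GLEaM's actual formulation or data.

```python
import numpy as np

rng = np.random.default_rng(10)
n_pop = 5                                   # number of subpopulations (e.g. cities)
N = rng.integers(50_000, 500_000, n_pop)    # subpopulation sizes

# Illustrative mobility matrix: fraction of each subpopulation mixing into another per day.
M = rng.uniform(0, 0.02, (n_pop, n_pop))
np.fill_diagonal(M, 0.0)

beta, gamma = 0.3, 0.1                      # transmission and recovery rates (per day)
S, I, R = N.astype(float).copy(), np.zeros(n_pop), np.zeros(n_pop)
I[0] = 10                                   # seed the outbreak in subpopulation 0
S[0] -= 10

for day in range(200):
    # Effective force of infection includes travellers mixing between subpopulations.
    prevalence = I / N
    coupled = prevalence + M @ prevalence
    new_inf = rng.binomial(S.astype(int), 1 - np.exp(-beta * coupled))
    new_rec = rng.binomial(I.astype(int), 1 - np.exp(-gamma))
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print("final attack rates:", np.round(R / N, 3))
```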

  2. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    Directory of Open Access Journals (Sweden)

    Tiago P. Peixoto

    2015-03-01

    The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.

  3. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  4. Scheduling of power generation a large-scale mixed-variable model

    CERN Document Server

    Prékopa, András; Strazicky, Beáta; Deák, István; Hoffer, János; Németh, Ágoston; Potecz, Béla

    2014-01-01

    The book contains a description of a real-life application of modern mathematical optimization tools to an important problem for power networks. The objective is the modelling and calculation of the optimal daily scheduling of power generation by thermal power plants to satisfy all demands at minimum cost, in such a way that the generation and transmission capacities as well as the demands at the nodes of the system appear in an integrated form. The physical parameters of the network are also taken into account. The resulting large-scale mixed-variable problem is relaxed in a smart, practical way to allow for fast numerical solution of the problem.

  5. Efficient matrix-vector products for large-scale nuclear Shell-Model calculations

    OpenAIRE

    Toivanen, J.

    2006-01-01

    A method to accelerate the matrix-vector products of j-scheme nuclear Shell-Model Configuration Interaction (SMCI) calculations is presented. The method takes advantage of the matrix product form of the j-scheme proton-neutron Hamiltonian matrix. It is shown that the method can speed up unrestricted large-scale pf-shell calculations by up to two orders of magnitude compared to the previously existing related j-scheme method. The new method allows unrestricted SMCI calculations up to j-scheme dime...

  6. Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-10-01

    In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed based on statistical methods according to their historical operating data. Then the characteristic indexes and the filtering principle of the NEPG historical output scenarios are introduced with the confidence level, and the calculation model of NEPG’s credible capacity is proposed. Based on this, taking the minimum production costs or the best energy-saving and emission-reduction effect as the optimization objective, the power system operation model with large-scale integration of new energy power generation (NEPG) is established considering the power balance, the electricity balance and the peak balance. Besides, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reservation, the emergency reservation, the water abandonment and the transmitting capacity between different areas are also considered. With the proposed power system operation model, the operation simulations are carried out based on the actual Northwest power grid of China, which resolve the accommodation of new energy power under different system operating conditions. The simulation results verify the validity of the proposed power system operation model in the accommodation analysis for a power system penetrated with large-scale NEPG.

  7. Multi-Period Dynamic Optimization for Large-Scale Differential-Algebraic Process Models under Uncertainty

    Directory of Open Access Journals (Sweden)

    Ian D. Washington

    2015-07-01

    A technique for optimizing large-scale differential-algebraic process models under uncertainty using a parallel embedded model approach is developed in this article. A combined multi-period multiple-shooting discretization scheme is proposed, which creates a significant number of independent numerical integration tasks for each shooting interval over all scenario/period realizations. Each independent integration task is able to be solved in parallel as part of the function evaluations within a gradient-based non-linear programming solver. The focus of this paper is on demonstrating potential computation performance improvement when the embedded differential-algebraic equation model solution of the multi-period discretization is implemented in parallel. We assess our parallel dynamic optimization approach on two case studies; the first is a benchmark literature problem, while the second is a large-scale air separation problem that considers a robust set-point transition under parametric uncertainty. Results indicate that focusing on the speed-up of the embedded model evaluation can significantly decrease the overall computation time; however, as the multi-period formulation grows with increased realizations, the computational burden quickly shifts to the internal computation performed within the non-linear programming algorithm. This highlights the need for further decomposition, structure exploitation and parallelization within the non-linear programming algorithm and is the subject for further investigation.

  8. Sensitivity analysis of key components in large-scale hydroeconomic models

    Science.gov (United States)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods in key components of hydro-economic models, such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were the perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.

  9. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, building on an analysis of the characteristics and shortcomings of the genetic algorithm and the support vector machine. In the cloud computing environment, the SVM parameters are first optimized by a parallel genetic algorithm, and the optimized parallel SVM model is then used to predict traffic flow. Using traffic flow data from Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
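
    For orientation, the serial core of a GA-SVM predictor (a small genetic algorithm searching the SVR hyperparameters C and gamma) can be sketched as below; the traffic data are synthetic and the GA settings invented, and the paper's contribution lies in parallelizing this search in a cloud environment rather than in this serial loop.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Hypothetical traffic-flow series: predict the next value from the previous 4.
        flow = 200 + 50 * np.sin(np.arange(300) / 10) + rng.normal(0, 5, 300)
        X = np.array([flow[i:i + 4] for i in range(len(flow) - 4)])
        y = flow[4:]

        def fitness(ind):
            C, gamma = 10 ** ind[0], 10 ** ind[1]       # log-scale search space
            return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3, scoring="r2").mean()

        # Minimal generational GA over log10(C) in [-1, 3] and log10(gamma) in [-4, 1].
        pop = rng.uniform([-1, -4], [3, 1], size=(10, 2))
        for _ in range(10):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-5:]]                 # selection of the best half
            children = parents[rng.integers(0, 5, (5, 2)), [0, 1]] # per-gene uniform crossover
            children += rng.normal(0, 0.2, children.shape)         # Gaussian mutation
            pop = np.vstack([parents, children])
        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("best C=%.3g, gamma=%.3g" % (10 ** best[0], 10 ** best[1]))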

  10. Functional models for large-scale gene regulation networks: realism and fiction.

    Science.gov (United States)

    Lagomarsino, Marco Cosentino; Bassetti, Bruno; Castellani, Gastone; Remondini, Daniel

    2009-04-01

    High-throughput experiments are shedding light on the topology of large regulatory networks and at the same time their functional states, namely the states of activation of the nodes (for example transcript or protein levels) in different conditions, times, environments. We now possess a certain amount of information about these two levels of description, stored in libraries, databases and ontologies. A current challenge is to bridge the gap between topology and function, i.e. developing quantitative models aimed at characterizing the expression patterns of large sets of genes. However, approaches that work well for small networks become impossible to master at large scales, mainly because parameters proliferate. In this review we discuss the state of the art of large-scale functional network models, addressing the issue of what can be considered as "realistic" and what the main limitations may be. We also show some directions for future work, trying to set the goals that future models should try to achieve. Finally, we will emphasize the possible benefits in the understanding of biological mechanisms underlying complex multifactorial diseases, and in the development of novel strategies for the description and the treatment of such pathologies.

  11. Management and services for large-scale virtual 3D urban model data based on network

    Science.gov (United States)

    He, Zhengwei; Chen, Jing; Wu, Huayi

    2008-10-01

    The buildings in a modern city are complex and diverse, and their quantity is huge. This poses a major challenge for constructing 3D GIS in a network environment and, ultimately, for realizing the Digital Earth. After analyzing the characteristics of network services for massive 3D urban building model data, this paper focuses on the organization and management of the spatial data and on the network service strategy, and proposes a progressive network transmission scheme based on spatial resolution and on the component elements of the 3D building model data. The paper then puts forward a multistage-link three-dimensional spatial data organization model and a spatial index encoding method based on a full-level quadtree structure. A virtual earth platform, called GeoGlobe, was developed using this theory. Experimental results show that the proposed 3D spatial data management model and service approach can effectively provide network services for large-scale 3D urban model data, with good application results and user experience.

  12. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    is exposed to preheated inlet air while the top of the bed resides within the furnace. Mathematical modeling is an efficient way to understand and improve the operation and design of combustion systems. Compared to modeling of pulverized fuel furnaces, CFD modeling of biomass-fired grate furnaces...... is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling...... quite much with the conditions in the real furnace. Combustion instabilities in the fuel bed make it challenging to prescribe reliable grate inlet BCs for the CFD modeling; the deposits formed on furnace walls and air nozzles make it difficult to define precisely the wall BCs and air jet BCs...

  13. Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design

    KAUST Repository

    Afify, Laila H.

    2016-12-01

    The ever-growing demand for wireless technologies necessitates the evolution of next-generation wireless networks that fulfill the diverse requirements of wireless users. However, upscaling existing wireless networks implies upscaling an intrinsic component of the wireless domain: the aggregate network interference. Since this is the main performance-limiting factor, it becomes crucial to develop a rigorous analytical framework to accurately characterize the out-of-cell interference in order to reap the benefits of emerging networks. Due to the different network setups and key performance indicators, it is essential to conduct a comprehensive study that unifies the various network configurations together with the different tangible performance metrics. In that regard, the focus of this thesis is to present a unified mathematical paradigm, based on Stochastic Geometry, for large-scale networks with different antenna/network configurations. By exploiting such a unified study, we propose an efficient automated network design strategy to satisfy the desired network objectives. First, this thesis studies the exact aggregate network interference characterization, by accounting for each of the interferers' signals in the large-scale network. Second, we show that the information about the interferers' symbols can be approximated via the Gaussian signaling approach. The developed mathematical model provides a twofold unification of the uplink and downlink cellular network analyses in the literature. It aligns the tangible decoding error probability analysis with the abstract outage probability and ergodic rate analysis. Furthermore, it unifies the analysis for different antenna configurations, i.e., various multiple-input multiple-output (MIMO) systems. Accordingly, we propose a novel reliable network design strategy that is capable of appropriately adjusting the network parameters to meet desired design criteria. In addition, we discuss the diversity-multiplexing tradeoffs imposed by differently favored

  14. LASSIE: simulating large-scale models of biochemical systems on GPUs.

    Science.gov (United States)

    Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo

    2017-05-10

    Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods for achieving a thorough understanding of the emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models (consisting of hundreds or thousands of reactions and molecular species) can rapidly overtake the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the Backward Differentiation Formulae of first order in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute on the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species. Notably, thanks to its smaller memory footprint, LASSIE
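
    As an illustration of the model class LASSIE accepts (not its GPU implementation), a reaction-based model with mass-action kinetics can be converted into ODEs and integrated with a stiffness-switching method such as LSODA, the reference integrator in the comparison above; the toy reactions below are invented.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy reaction-based model (invented): A + B -> C, C -> A + B, C -> D.
        species = ["A", "B", "C", "D"]
        # (reactants, products, rate constant), stoichiometries as dicts.
        reactions = [({"A": 1, "B": 1}, {"C": 1}, 1.0),
                     ({"C": 1}, {"A": 1, "B": 1}, 0.5),
                     ({"C": 1}, {"D": 1}, 0.1)]
        idx = {s: i for i, s in enumerate(species)}

        def odes(t, x):
            """Mass-action ODEs assembled automatically from the reaction list."""
            dx = np.zeros_like(x)
            for reactants, products, k in reactions:
                rate = k * np.prod([x[idx[s]] ** n for s, n in reactants.items()])
                for s, n in reactants.items():
                    dx[idx[s]] -= n * rate
                for s, n in products.items():
                    dx[idx[s]] += n * rate
            return dx

        x0 = [1.0, 0.8, 0.0, 0.0]
        sol = solve_ivp(odes, (0.0, 50.0), x0, method="LSODA", rtol=1e-8)
        print(dict(zip(species, sol.y[:, -1].round(4))))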

  15. Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten

    2013-01-01

    The stochastic block-model and its non-parametric extension, the Infinite Relational Model (IRM), have become key tools for discovering group-structure in complex networks. Identifying these groups is a combinatorial inference problem which is usually solved by Gibbs sampling. However, whether...... Gibbs sampling suffices and can be scaled to the modeling of large scale real world complex networks has not been examined sufficiently. In this paper we evaluate the performance and mixing ability of Gibbs sampling in the Infinite Relational Model (IRM) by implementing a high performance Gibbs sampler....... We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different sizes of networks we find that the mixing ability decreases drastically with the network size, clearly indicating a need...

  16. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab

  17. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system.

    Science.gov (United States)

    Jensen, Tue V; Pinson, Pierre

    2017-11-28

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset, open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  18. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    Science.gov (United States)

    Jensen, Tue V.; Pinson, Pierre

    2017-11-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset, open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  19. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    DEFF Research Database (Denmark)

    Jensen, Tue Vissing; Pinson, Pierre

    2017-01-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door...

  20. Procedural Generation of Large-Scale Forests Using a Graph-Based Neutral Landscape Model

    Directory of Open Access Journals (Sweden)

    Jiaqi Li

    2018-03-01

    Full Text Available Specifying the positions and attributes of plants (e.g., species, size, and height) during the procedural generation of large-scale forests in virtual geographic environments is challenging, especially when reflecting the characteristics of vegetation distributions. To address this issue, a novel graph-based neutral landscape model (NLM) is proposed to generate forest landscapes with varying compositions and configurations. Our model integrates a set of class-level landscape metrics and generates more realistic and variable landscapes than existing NLMs controlled by a limited set of global-level landscape metrics. To produce patches with particular sizes and shapes, a region adjacency graph is built from a cluster map generated using percolation theory; subsequently, optimal neighboring nodes in the graph are merged under restricted growth conditions from a source node. Seed locations are then randomly placed and their species are assigned according to the generated forest landscapes to obtain the final tree distributions. The results demonstrate that our method can generate realistic vegetation distributions representing different spatial patterns of species, with a time efficiency that satisfies the requirements for constructing large-scale virtual forests.

  1. Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks.

    Science.gov (United States)

    Vértes, Petra E; Alexander-Bloch, Aaron; Bullmore, Edward T

    2014-10-05

    Rich clubs arise when nodes that are 'rich' in connections also form an elite, densely connected 'club'. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  3. From Principles to Details: Integrated Framework for Architecture Modelling of Large Scale Software Systems

    Directory of Open Access Journals (Sweden)

    Andrzej Zalewski

    2013-06-01

    Full Text Available There exist numerous models of software architecture (box models, ADLs, UML, architectural decisions), architecture modelling frameworks (views), enterprise architecture frameworks and even standards recommending practice for architectural description. We show in this paper that there is still a gap between these rather abstract frameworks/standards and existing architecture models. Frameworks and standards define what should be modelled rather than which models should be used and how these models are related to each other. We argue that a less abstract modelling framework is needed for the effective modelling of large-scale software-intensive systems; it should provide more precise guidance on the kinds of models to be employed and on how they should relate to each other. The paper defines principles that can serve as the basis for an integrated model, and then proposes the structure of such a model. It comprises three layers: the upper one, the architectural policy, reflects corporate policy and strategies in architectural terms; the middle one, the system organisation pattern, represents the core structural concepts and their rationale at a given level of scope; the lower one contains detailed architecture models. Architectural decisions play an important role here: they model the core architectural concepts explaining the detailed models, and they organise the entire integrated model and the relations between its submodels.

  4. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, for insurance and reinsurance markets, and for individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  5. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades landslide hazard and risk analysis have benefited from the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large scale of investigation (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. Such an approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions in the simulation related to a rainfall event with a 75-year return time, corresponding to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
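
    The slope stability component mentioned above is the classical infinite slope model; a minimal sketch of the factor of safety it evaluates, with invented soil parameters rather than the Vezza basin data, is:

        import numpy as np

        def infinite_slope_fs(c_eff, phi_eff_deg, gamma, z, slope_deg, pore_pressure):
            """Factor of safety of the classical infinite slope model.

            c_eff: effective cohesion [kPa], phi_eff_deg: effective friction angle [deg],
            gamma: soil unit weight [kN/m3], z: failure surface depth [m],
            slope_deg: slope angle [deg], pore_pressure: u on the slip surface [kPa].
            """
            beta = np.radians(slope_deg)
            phi = np.radians(phi_eff_deg)
            sigma_n = gamma * z * np.cos(beta) ** 2          # total normal stress
            tau = gamma * z * np.sin(beta) * np.cos(beta)    # driving shear stress
            return (c_eff + (sigma_n - pore_pressure) * np.tan(phi)) / tau

        # Hypothetical values: dry versus nearly saturated conditions after a storm.
        print(infinite_slope_fs(5.0, 32.0, 18.0, 1.5, 35.0, pore_pressure=0.0))
        print(infinite_slope_fs(5.0, 32.0, 18.0, 1.5, 35.0, pore_pressure=10.0))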

  6. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from mesoscale meteorological data input and taking into account the characteristics of the different plant technologies and their spatial distribution. An evaluation of the hourly forecasted energy production on a regional scale would be very valuable for the transmission system operators when making the long-term planning of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and publicly available data on installed capacity.

  7. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel.

    Science.gov (United States)

    Yuan, Liming; Smith, Alex C

    2015-05-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect.

  8. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment is constructed to simulate the state transition and propagation of individual motions. By modeling the virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  9. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that once took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  10. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
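
    For readers unfamiliar with the model class, the Poisson binomial N-mixture likelihood screened above marginalizes the latent site abundance out of repeated counts; a minimal sketch on simulated data (not the bird data sets of the study) is:

        import numpy as np
        from scipy.stats import poisson, binom
        from scipy.optimize import minimize

        def nmix_negloglik(params, counts, n_max=200):
            """Negative log-likelihood of a Poisson binomial N-mixture model.

            counts: sites x visits array of repeated counts.
            params: log(lambda), logit(p); latent abundance N is summed out up to n_max.
            """
            lam, p = np.exp(params[0]), 1.0 / (1.0 + np.exp(-params[1]))
            N = np.arange(n_max + 1)
            prior = poisson.pmf(N, lam)                       # P(N) at each site
            ll = 0.0
            for y in counts:                                  # one site at a time
                lik_given_N = np.prod(binom.pmf(y[:, None], N[None, :], p), axis=0)
                ll += np.log(np.sum(prior * lik_given_N))
            return -ll

        rng = np.random.default_rng(1)
        N_true = rng.poisson(4.0, size=100)                          # hypothetical 100 sites
        counts = rng.binomial(N_true[:, None], 0.4, size=(100, 3))   # 3 repeated visits
        fit = minimize(nmix_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
        print("lambda_hat=%.2f  p_hat=%.2f"
              % (np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))))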

  11. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Full Text Available Halo models of the large-scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure, or to define their size in terms of small-scale baryonic physics.

  12. Large scale hydrogeological modelling of a low-lying complex coastal aquifer system

    DEFF Research Database (Denmark)

    Meyer, Rena

    2018-01-01

    intrusion. In this thesis a new methodological approach was developed to combine 3D numerical groundwater modelling with a detailed geological description and hydrological, geochemical and geophysical data. It was applied to a regional scale saltwater intrusion in order to analyse and quantify...... the groundwater flow dynamics, identify the driving mechanisms that formed the saltwater intrusion to its present extent and to predict its progression in the future. The study area is located in the transboundary region between Southern Denmark and Northern Germany, adjacent to the Wadden Sea. Here, a large-scale...... parametrization schemes that accommodate hydrogeological heterogeneities. Subsequently, density-dependent flow and transport modelling of multiple salt sources was successfully applied to simulate the formation of the saltwater intrusion during the last 4200 years, accounting for historic changes in the hydraulic...

  13. Obtaining high-resolution stage forecasts by coupling large-scale hydrologic models with sensor data

    Science.gov (United States)

    Fries, K. J.; Kerkez, B.

    2017-12-01

    We investigate how "big" quantities of distributed sensor data can be coupled with a large-scale hydrologic model, in particular the National Water Model (NWM), to obtain hyper-resolution forecasts. The recent launch of the NWM provides a great example of how growing computational capacity is enabling a new generation of massive hydrologic models. While the NWM spans an unprecedented spatial extent, there remain many questions about how to improve forecasts at the street level, the resolution at which many stakeholders make critical decisions. Further, the NWM runs on supercomputers, so water managers who may have access to their own high-resolution measurements may not readily be able to assimilate them into the model. To that end, we ask the question: how can the advances of the large-scale NWM be coupled with new local observations to enable hyper-resolution hydrologic forecasts? A methodology is proposed whereby the flow forecasts of the NWM are directly mapped to high-resolution stream levels using Dynamical System Identification. We apply the methodology across a sensor network of 182 gages in Iowa. Of these sites, approximately one third have been shown to perform well in high-resolution flood forecasting when coupled with the outputs of the NWM. The quality of these forecasts is characterized using Principal Component Analysis and Random Forests to identify where the NWM may benefit from new sources of local observations. We also discuss how this approach can help municipalities identify where they should place low-cost sensors to most benefit from flood forecasts of the NWM.

  14. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    Science.gov (United States)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides an opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For the MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agents' pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agents' beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agents' attitude towards fluctuations in crop profits. Note that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and often not even measurable. We therefore estimate the influence of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address the two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE)-based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days required to run them sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to

  15. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...
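
    As a toy illustration of the economic MPC idea described above (balancing production and consumption at minimum cost over a prediction horizon), a single-bus dispatch-plus-storage linear program can be written with an off-the-shelf modelling library such as cvxpy; every parameter below is invented and none of this reflects the thesis' algorithms.

        import numpy as np
        import cvxpy as cp

        T = 24                                              # hypothetical 24-hour horizon
        t = np.arange(T)
        demand = 80 + 30 * np.sin(2 * np.pi * t / T)        # invented load profile [MW]
        wind = np.clip(40 + 25 * np.random.default_rng(0).standard_normal(T), 0, None)

        gen   = cp.Variable(T, nonneg=True)                 # dispatchable generation [MW]
        chg   = cp.Variable(T, nonneg=True)                 # storage charging [MW]
        dis   = cp.Variable(T, nonneg=True)                 # storage discharging [MW]
        spill = cp.Variable(T, nonneg=True)                 # curtailed wind [MW]
        soc   = cp.Variable(T + 1, nonneg=True)             # state of charge [MWh]

        cost = 50 * cp.sum(gen)                             # invented marginal cost [$/MWh]
        constraints = [gen <= 120, chg <= 20, dis <= 20, soc <= 60, soc[0] == 30,
                       soc[1:] == soc[:-1] + 0.9 * chg - dis,        # storage dynamics
                       gen + wind - spill + dis == demand + chg]     # power balance
        cp.Problem(cp.Minimize(cost), constraints).solve()
        print("total cost: %.0f, peak dispatchable generation: %.1f MW"
              % (cost.value, gen.value.max()))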

  16. Computational framework for modeling the dynamic evolution of large-scale multi-agent organizations

    Science.gov (United States)

    Lazar, Alina; Reynolds, Robert G.

    2002-07-01

    A multi-agent system model of the origins of an archaic state is developed. Agent interaction is mediated by a collection of rules. The rules are mined from a related large-scale data base using two different techniques. One technique uses decision trees while the other uses rough sets. The latter was used since the data collection techniques were associated with a certain degree of uncertainty. The generation of the rough set rules was guided by Genetic Algorithms. Since the rules mediate agent interaction, the rule set with fewer rules and conditionals to check will make scaling up the simulation easier to do. The results suggest that explicitly dealing with uncertainty in rule formation can produce simpler rules than ignoring that uncertainty in situations where uncertainty is a factor in the measurement process.

  17. LARGE SCALE DISTRIBUTED PARAMETER MODEL OF MAIN MAGNET SYSTEM AND FREQUENCY DECOMPOSITION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    ZHANG, W.; MARNERIS, I.; SANDBERG, J.

    2007-06-25

    A large accelerator main magnet system consists of hundreds, even thousands, of dipole magnets. They are linked together under selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and the coupling inductance of upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very high order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive or resistive elements is used to model an actual system. The circuit is a large-scale network whose equivalent polynomial form has a degree of several hundred. Analysis of this high-order circuit and simulation of the response of any or all components is often computationally infeasible. We present methods that use a frequency decomposition approach to effectively simulate and analyze magnet configurations and power supply topologies.
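
    A toy numerical counterpart of the modelling idea (far smaller than the magnet system described, and not the BNL model itself): a ladder of series R-L branches with shunt capacitances can be analysed by assembling the nodal admittance matrix at each frequency and sweeping the frequency axis.

        import numpy as np

        # Toy lumped ladder: N identical sections of series R-L with shunt C to ground.
        N, R, L, C = 50, 0.01, 1e-3, 1e-6        # invented element values (ohm, H, F)

        def transfer(freq):
            """Voltage magnitude at the last node for a unit source behind the first branch."""
            w = 2 * np.pi * freq
            z_series = R + 1j * w * L            # impedance of each series branch
            y = 1.0 / z_series
            Y = np.zeros((N, N), dtype=complex)  # nodal admittance matrix
            for k in range(N):
                Y[k, k] = 1j * w * C + y + (y if k < N - 1 else 0)
                if k < N - 1:
                    Y[k, k + 1] = Y[k + 1, k] = -y
            I = np.zeros(N, dtype=complex)
            I[0] = y                             # Norton equivalent of the unit source
            V = np.linalg.solve(Y, I)
            return abs(V[-1])

        freqs = np.logspace(1, 5, 200)
        mag = [transfer(f) for f in freqs]
        print("peak response %.2f at %.0f Hz" % (max(mag), freqs[int(np.argmax(mag))]))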

  18. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    Science.gov (United States)

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
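
    For context, the point-to-point ICP core that the paper enhances (k-d tree correspondences plus an SVD-based rigid alignment) can be sketched on invented data as follows; the octree hierarchy, early-warning mechanism and escape scheme of the paper are not reproduced here, and this basic variant can still get stuck in local minima.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (SVD)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(src, dst, iters=30):
            """Basic point-to-point ICP iteration."""
            tree = cKDTree(dst)
            cur = src.copy()
            for _ in range(iters):
                _, idx = tree.query(cur)             # nearest-neighbour correspondences
                R, t = best_rigid_transform(cur, dst[idx])
                cur = cur @ R.T + t
            return cur

        rng = np.random.default_rng(0)
        model = rng.uniform(-1, 1, (500, 3))                         # invented point cloud
        angle = np.radians(10)
        Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0], [0, 0, 1]])
        scene = model @ Rz.T + np.array([0.2, -0.1, 0.05])
        aligned = icp(model, scene)
        print("RMS after ICP: %.4f" % np.sqrt(((aligned - scene) ** 2).sum(1).mean()))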

  19. A Statistical Model for Hourly Large-Scale Wind and Photovoltaic Generation in New Locations

    DEFF Research Database (Denmark)

    Ekstrom, Jussi; Koivisto, Matti Juhani; Mellin, Ilkka

    2017-01-01

    The analysis of large-scale wind and photovoltaic (PV) energy generation is of vital importance in power systems where their penetration is high. This paper presents a modular methodology to assess the power generation and volatility of a system consisting of both PV plants (PVPs) and wind power...... through distances between the locations, which allows the methodology to be used to assess scenarios with PVPs and WPPs in multiple locations without actual measurement data. The methodology can be applied by the transmission and distribution system operators when analysing the effects and feasibility...... of new PVPs and WPPs in system planning. The model is verified against hourly measured wind speed and solar irradiance data from Finland. A case study assessing the impact of the geographical distribution of the PVPs and WPPs on aggregate power generation and its variability is presented....

  20. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  1. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large-scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural protection measures, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that faces the problem of overcoming the drawbacks of existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale

  2. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response plans, and the emergency evacuation of large commercial shopping areas, as typical service systems, is one of the hot research topics. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian walking is based on the Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of the movement routes of pedestrians, the model takes into account the purchase intention of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, the behavioral characteristics of customers and clerks can be reflected in both normal and emergency evacuation situations. The distribution of individual evacuation time as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
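
    A minimal sketch of the cellular automaton core described above: each pedestrian moves to a neighbouring free cell with a probability weighted by a static floor field (here simply the distance to an exit); the grid, crowd size and coupling constant are invented, and the event-driven customer/clerk layers are omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        H, W = 20, 30                               # invented grid size [cells]
        exit_cell = (10, 29)

        # Static floor field: the closer to the exit, the higher the value.
        yy, xx = np.mgrid[0:H, 0:W]
        field = -np.hypot(yy - exit_cell[0], xx - exit_cell[1])

        occupied = np.zeros((H, W), dtype=bool)
        cells = rng.choice(H * W, size=40, replace=False)        # 40 pedestrians
        peds = [(int(c) // W, int(c) % W) for c in cells]
        for r, c in peds:
            occupied[r, c] = True

        def step(peds, k_s=3.0):
            """One CA update (sequential for simplicity; k_s couples to the floor field)."""
            new = []
            for r, c in peds:
                cand = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if 0 <= r + dr < H and 0 <= c + dc < W
                        and (not occupied[r + dr, c + dc] or (dr, dc) == (0, 0))]
                f = np.array([field[p] for p in cand])
                w = np.exp(k_s * (f - f.max()))                   # floor-field weights
                r2, c2 = cand[rng.choice(len(cand), p=w / w.sum())]
                occupied[r, c], occupied[r2, c2] = False, True
                new.append((r2, c2))
            return new

        for _ in range(50):
            peds = step(peds)
        print("pedestrians within 3 cells of the exit:",
              sum(np.hypot(r - exit_cell[0], c - exit_cell[1]) <= 3 for r, c in peds))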

  3. Test of large-scale specimens and models as applied to NPP equipment materials

    International Nuclear Information System (INIS)

    Timofeev, B.T.; Karzov, G.P.

    1993-01-01

    The paper presents test results on low-cycle fatigue, crack growth rate and fracture toughness of large-scale specimens and structures manufactured from steels widely applied in the power engineering industry and used for the production of NPP equipment with VVER-440 and VVER-1000 reactors. The results are compared with available test results for standard specimens and with the calculation relations accepted in the "Calculation Norms on Strength". At the fatigue crack initiation stage, experiments were performed on large-scale specimens of various geometries and configurations, which permitted definition of the fracture initiation resistance of 15X2MFA steel under elastic-plastic deformation of a large material volume in homogeneous and inhomogeneous stress states. Besides the above-mentioned specimen tests under low-cycle loading, tests of models with nozzles were performed, and a good correlation of the results for the fatigue crack initiation criterion was obtained both with calculated data and with standard low-cycle fatigue tests. It was noted that, on the Paris part of the fatigue fracture diagram, an increase in specimen thickness does not influence fatigue crack growth resistance in air tests at either 20 or 350 degrees C. The comparability of the results obtained on specimens and models was also assessed for this stage of fracture. At the stage of unstable crack growth under static loading, experiments were conducted on specimens of various thicknesses of 15X2MFA and 15X2NMFA steels and their welded joints, produced by submerged arc welding, in the as-produced state (the beginning of service) and after an embrittling heat treatment simulating neutron fluence attack (the end of service). The results give evidence of the possibility of reliably predicting brittle fracture of structural elements using fracture toughness test results from relatively small standard specimens. 35 refs., 23 figs
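
    The "Paris part of the fatigue fracture diagram" referred to above is the power-law regime da/dN = C(ΔK)^m; a small sketch integrating it for a surface defect growing from 15 mm to 100 mm under a constant stress range is given below, with invented material constants rather than the 15X2MFA data.

        import numpy as np

        def cycles_to_grow(a0, af, delta_sigma, C, m, Y=1.12, steps=20000):
            """Integrate the Paris law da/dN = C * (dK)^m from crack depth a0 to af.

            dK = Y * delta_sigma * sqrt(pi * a); SI units (m, MPa, MPa*sqrt(m)).
            """
            a_grid = np.linspace(a0, af, steps)
            dK = Y * delta_sigma * np.sqrt(np.pi * a_grid)
            n_per_m = 1.0 / (C * dK ** m)            # cycles per metre of crack growth
            # Trapezoidal integration of dN/da over the crack-depth grid.
            return float(np.sum(0.5 * (n_per_m[1:] + n_per_m[:-1]) * np.diff(a_grid)))

        # Invented Paris constants, roughly typical of ferritic steels in air.
        C, m = 1e-11, 3.0
        print("N = %.2e cycles" % cycles_to_grow(a0=0.015, af=0.10,
                                                 delta_sigma=120.0, C=C, m=m))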

  4. Models of large-scale magnetic fields in stellar interiors. Application to solar and ap stars

    International Nuclear Information System (INIS)

    Duez, Vincent

    2009-01-01

    Stellar astrophysics today needs new models of large-scale magnetic fields, which are observed through spectropolarimetry at the surface of Ap/Bp stars and are thought to explain the uniform rotation of the solar radiation zone deduced from helioseismic inversions. During my PhD, I focused on describing the possible magnetic equilibria in stellar interiors. The configurations found are mixed poloidal-toroidal and minimize the energy for a given helicity, in analogy with Taylor states encountered in spheromaks. Taking self-gravity into account leads to the 'non force-free' family of equilibria, which will thus influence the stellar structure. I derived all the physical quantities associated with the magnetic field and then evaluated the perturbations they induce on gravity and on thermodynamic and energetic quantities, for a solar model and an Ap star. 3D MHD simulations allowed me to show that these equilibria form a first family of stable states, the generalization of such states remaining an open question. It has been shown that a large-scale magnetic field confined in the solar radiation zone can induce an oblateness comparable to that produced by a high core rotation rate. I also studied the secular interaction between the magnetic field, the differential rotation and the meridional circulation with the aim of implementing their effects in a next-generation stellar evolution code. The influence of magnetism on convection has also been studied. Finally, hydrodynamic processes responsible for mixing have been compared with diffusion and with a change of convective efficiency in the case of a CoRoT target star. (author) [fr]

  5. Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement

    Science.gov (United States)

    Zheng, Xiaohui

    2009-01-01

    The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…

  6. Coordinated reset stimulation in a large-scale model of the STN-GPe circuit

    Directory of Open Access Journals (Sweden)

    Martin eEbert

    2014-11-01

    Full Text Available Synchronization of populations of neurons is a hallmark of several brain diseases. Coordinated reset (CR) stimulation is a model-based stimulation technique which specifically counteracts abnormal synchrony by desynchronization. Electrical CR stimulation, e.g. for the treatment of Parkinson's disease (PD), is administered via depth electrodes. In order to gain a deeper understanding of this technique, we extended the top-down approach of previous studies and constructed a large-scale computational model of the respective brain areas. Furthermore, we took into account the spatial anatomical properties of the simulated brain structures and incorporated a detailed numerical representation of 2·10⁴ simulated neurons. We simulated the subthalamic nucleus (STN) and the globus pallidus externus (GPe). Connections within the STN were governed by spike-timing dependent plasticity (STDP). In this way, we modeled the physiological and pathological activity of the considered brain structures. In particular, we investigated how plasticity could be exploited and how the model could be shifted from strongly synchronized (pathological) activity to strongly desynchronized (healthy) activity of the neuronal populations via CR stimulation of the STN neurons. Furthermore, we investigated the impact of specific stimulation parameters, especially the electrode position, on the stimulation outcome. Our model provides a step towards a biophysically realistic model of the brain areas relevant to the emergence of pathological neuronal activity in PD. Furthermore, our model constitutes a test bench for the optimization of both stimulation parameters and novel electrode geometries for efficient CR stimulation.

  7. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more precise apportioning of the inputs' influence and a more reliable interpretation of the mathematical model results.
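
    As a rough illustration of the Monte Carlo estimation of Sobol indices described above (the Unified Danish Eulerian Model itself is not reproduced here), the following sketch applies the standard Saltelli/Jansen estimators with a scrambled Sobol sequence to the Ishigami test function; the test function and sample size are assumptions made only for the example.

      import numpy as np
      from scipy.stats import qmc

      def ishigami(x, a=7.0, b=0.1):
          # Classic sensitivity-analysis test function on [-pi, pi]^3
          return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

      d, n = 3, 2**13
      sobol = qmc.Sobol(d=2 * d, scramble=True, seed=0)
      u = sobol.random(n)                      # quasi-random points in [0, 1)^(2d)
      A = -np.pi + 2 * np.pi * u[:, :d]        # two independent input matrices
      B = -np.pi + 2 * np.pi * u[:, d:]

      fA, fB = ishigami(A), ishigami(B)
      var = np.var(np.concatenate([fA, fB]))
      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                        # replace i-th column of A by that of B
          fABi = ishigami(ABi)
          S_i = np.mean(fB * (fABi - fA)) / var      # first-order Sobol index (Saltelli)
          ST_i = 0.5 * np.mean((fA - fABi)**2) / var # total-effect index (Jansen)
          print(f"x{i+1}: S = {S_i:.3f}, ST = {ST_i:.3f}")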

  8. Using a Core Scientific Metadata Model in Large-Scale Facilities

    Directory of Open Access Journals (Sweden)

    Brian Matthews

    2010-07-01

    Full Text Available In this paper, we present the Core Scientific Metadata Model (CSMD, a model for the representation of scientific study metadata developed within the Science & Technology Facilities Council (STFC to represent the data generated from scientific facilities. The model has been developed to allow management of and access to the data resources of the facilities in a uniform way, although we believe that the model has wider application, especially in areas of “structural science” such as chemistry, materials science and earth sciences. We give some motivations behind the development of the model, and an overview of its major structural elements, centred on the notion of a scientific study formed by a collection of specific investigations. We give some details of the model, with the description of each investigation associated with a particular experiment on a sample generating data, and the associated data holdings are then mapped to the investigation with the appropriate parameters. We then go on to discuss the instantiation of the metadata model within a production quality data management infrastructure, the Information CATalogue (ICAT, which has been developed within STFC for use in large-scale photon and neutron sources. Finally, we give an overview of the relationship between CSMD, and other initiatives, and give some directions for future developments.    

  9. Imaging the Chicxulub central crater zone from large scale seismic acoustic wave propagation and gravity modeling

    Science.gov (United States)

    Fucugauchi, J. U.; Ortiz-Aleman, C.; Martin, R.

    2017-12-01

    Large complex craters are characterized by central uplifts that represent large-scale differential movement of deep basement from the transient cavity. Here we investigate the central sector of the large multiring Chicxulub crater, which has been surveyed by an array of marine, aerial and land-borne geophysical methods. Despite high contrasts in physical properties, contrasting results for the central uplift have been obtained, with seismic reflection surveys showing lack of resolution in the central zone. We develop an integrated seismic and gravity model for the main structural elements, imaging the central basement uplift and melt and breccia units. The 3-D velocity model built from interpolation of seismic data is validated using perfectly matched layer seismic acoustic wave propagation modeling, optimized at grazing incidence using shift in the frequency domain. Modeling shows significant lack of illumination in the central sector, masking presence of the central uplift. Seismic energy remains trapped in an upper low velocity zone corresponding to the sedimentary infill, melt/breccias and surrounding faulted blocks. After conversion of seismic velocities into a volume of density values, we use massive parallel forward gravity modeling to constrain the size and shape of the central uplift that lies at 4.5 km depth, providing a high-resolution image of crater structure. The Bouguer anomaly and gravity response of modeled units show asymmetries, corresponding to the crater structure and distribution of post-impact carbonates, breccias, melt and target sediments.

  10. Structure-preserving model reduction of large-scale logistics networks. Applications for supply chains

    Science.gov (United States)

    Scholz-Reiter, B.; Wirth, F.; Dashkovskiy, S.; Makuschewitz, T.; Schönlein, M.; Kosmykov, M.

    2011-12-01

    We investigate the problem of model reduction with a view to large-scale logistics networks, specifically supply chains. Such networks are modeled by means of graphs, which describe the structure of material flow. An aim of the proposed model reduction procedure is to preserve important features within the network. As a new methodology, we introduce the LogRank as a measure of the importance of locations, which is based on the structure of the flows within the network. We argue that these properties reflect the relative importance of locations. Based on the LogRank we identify subgraphs of the network that can be neglected or aggregated. The effect of this is discussed for a few motifs. Using this approach we present a meta algorithm for structure-preserving model reduction that can be adapted to different mathematical modeling frameworks. The capabilities of the approach are demonstrated with a test case, where a logistics network is modeled as a Jackson network, i.e., a particular type of queueing network.
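
    The abstract does not give the LogRank formula, so the sketch below substitutes an assumed analogue: a PageRank-style power iteration on the flow-weighted adjacency matrix of a toy supply chain, ranking locations so that low-ranked ones could be candidates for aggregation.

      import numpy as np

      def flow_rank(F, damping=0.85, tol=1e-10, max_iter=1000):
          """Rank locations of a logistics network by a PageRank-style power iteration
          on the row-normalised material-flow matrix F (F[i, j] = flow from i to j).
          This is only an assumed analogue of the LogRank measure described above."""
          n = F.shape[0]
          out = F.sum(axis=1, keepdims=True)
          P = np.where(out > 0, F / np.where(out == 0, 1, out), 1.0 / n)  # row-stochastic
          r = np.full(n, 1.0 / n)
          for _ in range(max_iter):
              r_new = (1 - damping) / n + damping * P.T @ r
              if np.abs(r_new - r).sum() < tol:
                  break
              r = r_new
          return r

      # Toy supply chain: supplier -> plant -> two warehouses -> retailer
      F = np.array([[0, 10, 0, 0, 0],
                    [0,  0, 6, 4, 0],
                    [0,  0, 0, 0, 6],
                    [0,  0, 0, 0, 4],
                    [0,  0, 0, 0, 0]], dtype=float)
      print(flow_rank(F))   # low-ranked nodes are candidates for aggregation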

  11. Zone modelling of the thermal performances of a large-scale bloom reheating furnace

    International Nuclear Information System (INIS)

    Tan, Chee-Keong; Jenkins, Joana; Ward, John; Broughton, Jonathan; Heeley, Andy

    2013-01-01

    This paper describes the development and comparison of two-dimensional (2D) and three-dimensional (3D) mathematical models, based on the zone method of radiation analysis, to simulate the thermal performance of a large bloom reheating furnace. The modelling approach adopted in the current paper differs from previous work since it takes into account the net radiation interchanges between the top and bottom firing sections of the furnace and also allows for enthalpy exchange due to the flows of combustion products between these sections. The models were initially validated at two different furnace throughput rates using experimental and plant model data supplied by Tata Steel. The results to date demonstrate that the model predictions are in good agreement with measured heating profiles of the blooms encountered in the actual furnace. No significant differences were found between the predictions of the 2D and 3D models. Following the validation, the 2D model was then used to assess the furnace response to changes in throughput rate. It was found that the furnace response to a change in throughput rate influences the settling time of the furnace to the next steady-state operation. Overall the current work demonstrates the feasibility and practicality of zone modelling and its potential for incorporation into a model-based furnace control system. - Highlights: ► 2D and 3D zone models of a large-scale bloom reheating furnace. ► The models were validated with experimental and plant model data. ► Examination of the transient furnace response to changes in the furnace throughput rate. ► No significant differences found between the predictions of the 2D and 3D models.

  12. Development of a self-consistent lightning NOx simulation in large-scale 3-D models

    Science.gov (United States)

    Luo, Chao; Wang, Yuhang; Koshak, William J.

    2017-03-01

    We seek to develop a self-consistent representation of lightning NOx (LNOx) simulation in a large-scale 3-D model. Lightning flash rates are parameterized functions of meteorological variables related to convection. We examine a suite of such variables and find that convective available potential energy and cloud top height give the best estimates compared to July 2010 observations from ground-based lightning observation networks. Previous models often use lightning NOx vertical profiles derived from cloud-resolving model simulations. An implicit assumption of such an approach is that the postconvection lightning NOx vertical distribution is the same for all deep convection, regardless of geographic location, time of year, or meteorological environment. Detailed observations of the lightning channel segment altitude distribution derived from the NASA Lightning Nitrogen Oxides Model can be used to obtain the LNOx emission profile. Coupling such a profile with model convective transport leads to a more self-consistent lightning distribution compared to using prescribed postconvection profiles. We find that convective redistribution appears to be a more important factor than preconvection LNOx profile selection, providing another reason for linking the strength of convective transport to LNOx distribution.
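
    The paper's CAPE- and cloud-top-height-based scheme is not spelled out in the abstract; as a generic example of this family of flash-rate parameterizations, the sketch below uses the widely cited Price and Rind (1992) cloud-top-height power laws, which should be read as an illustrative stand-in rather than the scheme actually developed here.

      def flash_rate(cloud_top_height_km, land=True):
          """Lightning flash rate [flashes per minute] from cloud-top height,
          using the Price and Rind (1992) power laws. A generic example of the
          kind of parameterization discussed above, not the paper's own scheme."""
          H = cloud_top_height_km
          if land:
              return 3.44e-5 * H**4.9   # continental convection
          return 6.2e-4 * H**1.73       # marine convection

      # A 14 km deep continental thunderstorm
      print(f"{flash_rate(14.0):.1f} flashes/min")   # roughly 14 flashes per minute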

  13. Improvement of large scale solar installation model for ground current analysis

    International Nuclear Information System (INIS)

    Garcia-Gracia, M.; El Halabi, N.; Khodr, H.M.; Sanz, Jose Fco

    2010-01-01

    Application of a simplified PV model to large-scale PV installations neglects the current distortion, potential rise and losses in the system that arise as a consequence of the capacitive coupling inside the dc electric circuit. These capacitive couplings represent a leakage impedance loop for the capacitive currents imposed by the high-frequency switching of the power converters. This paper proposes a suitable method to reproduce these harmonic currents injected not only into the grid, but also into the dc circuit of the PV installation. The proposed capacitive coupling of the PV modules to ground is modeled as a parallel resistance-capacitor arrangement, which leads to an accurate approximation of the real operating response of the PV installation. Results obtained are compared with those of simplified models of PV installations used in the literature. An experimental validation of the proposed model was performed with field measurements obtained from an existing 1 MW PV installation. Simulation results are presented together with solutions based on the proposed model to minimize the capacitive ground current in this PV installation, in order to meet typical power quality regulations concerning harmonic distortion and safety and to optimize the efficiency of the installation.
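
    A minimal sketch of the parallel resistance-capacitor ground-coupling branch described above: the leakage current drawn through the module-to-ground capacitance at the inverter switching frequency. The insulation resistance, parasitic capacitance, switching frequency and common-mode ripple amplitude below are assumed, illustrative values only.

      import numpy as np

      # Parallel R-C coupling of the PV array frame to ground (assumed values):
      R_iso = 1.0e6        # insulation resistance to ground [ohm]
      C_pv  = 100e-9       # parasitic module-to-ground capacitance [F]
      f_sw  = 16e3         # inverter switching frequency [Hz]
      V_cm  = 10.0         # common-mode voltage ripple amplitude at f_sw [V]

      w = 2 * np.pi * f_sw
      Z_c = 1.0 / (1j * w * C_pv)                 # capacitive branch impedance
      Z_ground = (R_iso * Z_c) / (R_iso + Z_c)    # parallel combination
      I_leak = V_cm / Z_ground                    # phasor of the ground leakage current

      print(f"|Z_ground| = {abs(Z_ground):.1f} ohm")
      print(f"|I_leak|   = {abs(I_leak) * 1e3:.1f} mA at {f_sw / 1e3:.0f} kHz")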

  14. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist.

    Science.gov (United States)

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-11-01

    managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where black-backed woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists.

  15. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    Energy Technology Data Exchange (ETDEWEB)

    Reyes, Luz M., E-mail: luzmarinareyes@gmail.com [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Moreno, Claudia, E-mail: claudia.moreno@cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Madriz Aguilar, Jose Edgar, E-mail: edgar.madriz@red.cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Bellini, Mauricio, E-mail: mbellini@mdp.edu.ar [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata (UNMdP), Funes 3350, C.P. 7600, Mar del Plata (Argentina); Instituto de Investigaciones Fisicas de Mar del Plata (IFIMAR) - Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)

    2012-10-22

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during the early inflationary stage, on the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model, that indicates when gravity becomes repulsive in nature.

  16. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    Science.gov (United States)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and keeping transitions smooth in mobile communication networks is the soft handover technique. In the Soft Handover (SHO) technique, the addition and removal of Base Stations from the active set are determined by initiation triggers, one of which is based on received signal strength. In this paper we observed the influence of the parameters of large-scale radio propagation models on the performance of mobile communications. The parameters observed for characterizing the performance of the specified mobile system are the Drop Call rate, the Radio Link Degradation Rate and the Average Size of the Active Set (AS). The simulation results show that increasing the heights of the Base Station (BS) and Mobile Station (MS) antennas improves the received signal power level, which improves Radio Link quality, increases the average size of the Active Set and reduces the average Drop Call rate. It was also found that Hata's propagation model contributed significantly more to improvements in system performance parameters than Okumura's propagation model and Lee's propagation model.
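
    For reference, a sketch of the Okumura-Hata median path-loss formula (urban, small/medium-city variant) shows how the base-station antenna height enters the model; the frequency, antenna heights and distance used below are illustrative assumptions, not values from the paper.

      import math

      def hata_urban(f_mhz, h_bs_m, h_ms_m, d_km):
          """Median path loss [dB] from the Okumura-Hata model (urban, small/medium
          city; valid roughly for 150-1500 MHz, h_bs 30-200 m, h_ms 1-10 m, d 1-20 km)."""
          a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_ms_m - (1.56 * math.log10(f_mhz) - 0.8)
          return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_bs_m)
                  - a_hm + (44.9 - 6.55 * math.log10(h_bs_m)) * math.log10(d_km))

      # Raising the BS antenna from 30 m to 60 m lowers the loss, improving the
      # received level that drives the soft-handover trigger discussed above.
      for h_bs in (30, 60):
          print(h_bs, "m:", round(hata_urban(900, h_bs, 1.5, 5.0), 1), "dB")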

  17. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    International Nuclear Information System (INIS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-01-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during the early inflationary stage, on the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model, that indicates when gravity becomes repulsive in nature.

  18. Revisiting the EC/CMB model for extragalactic large scale jets

    Science.gov (United States)

    Lucchini, M.; Tavecchio, F.; Ghisellini, G.

    2017-04-01

    One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of flat-spectrum radio quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the cosmic microwave background (EC/CMB) as the mechanism responsible for the high-energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work, we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of γ-rays by suppressing the high-energy end of the emitting particle population. Furthermore, we show that cooling in the EC/CMB model predicts a new class of extended jets that are bright in X-rays but silent in the radio and optical bands. These jets are more likely to lie at intermediate redshifts and would have been missed in all previous X-ray surveys due to selection effects.

  19. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates

    Science.gov (United States)

    Barberis, Lucas; Peruani, Fernando

    2016-12-01

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
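
    A rough sketch, under assumed parameter values, of one update step of a position-based model in this spirit: memoryless particles turn toward the centroid of the neighbours seen inside their vision cone, with short-ranged attraction and no velocity alignment.

      import numpy as np

      rng = np.random.default_rng(0)
      N, R, half_angle, v0, dt, eta = 200, 1.0, np.pi / 4, 0.05, 1.0, 0.1
      pos = rng.uniform(0, 10, (N, 2))
      theta = rng.uniform(-np.pi, np.pi, N)

      def step(pos, theta):
          """One update of a minimal position-based model: each particle turns toward
          the centroid of neighbours inside its vision cone (no velocity alignment)."""
          heading = np.column_stack([np.cos(theta), np.sin(theta)])
          new_theta = theta.copy()
          for i in range(N):
              rel = pos - pos[i]
              dist = np.linalg.norm(rel, axis=1)
              ang = np.arccos(np.clip(rel @ heading[i] / np.maximum(dist, 1e-12), -1, 1))
              seen = (dist > 0) & (dist < R) & (ang < half_angle)   # vision cone
              if seen.any():
                  target = rel[seen].mean(axis=0)
                  new_theta[i] = np.arctan2(target[1], target[0])
              new_theta[i] += eta * rng.normal()                    # angular noise
          new_pos = pos + v0 * dt * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
          return new_pos, new_theta

      for _ in range(100):
          pos, theta = step(pos, theta)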

  20. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case.
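
    A minimal sketch of a derivative-based (first-order, delta-method) uncertainty propagation on the standard borehole flow test function mentioned above. Central finite differences stand in for the automated derivatives that a GRESS/ADGEN-style tool would generate, and the input standard deviations are assumed values used only for illustration.

      import numpy as np

      def borehole(x):
          """Water flow rate [m^3/yr] through a borehole -- the standard test function
          referred to in the sample problem above."""
          rw, r, Tu, Hu, Tl, Hl, L, Kw = x
          log_rrw = np.log(r / rw)
          return 2 * np.pi * Tu * (Hu - Hl) / (
              log_rrw * (1 + 2 * L * Tu / (log_rrw * rw**2 * Kw) + Tu / Tl))

      # Nominal values and (assumed) standard deviations of the eight inputs
      x0    = np.array([0.10, 25050.0, 89335.0, 1050.0, 89.55, 760.0, 1400.0, 10950.0])
      sigma = np.array([0.02,  8000.0, 15000.0,   35.0, 15.00,  35.0,  160.0,   600.0])

      # First-order (delta-method) propagation with central finite differences
      grad = np.empty_like(x0)
      for i in range(len(x0)):
          h = 1e-4 * max(abs(x0[i]), 1.0)
          xp, xm = x0.copy(), x0.copy()
          xp[i] += h
          xm[i] -= h
          grad[i] = (borehole(xp) - borehole(xm)) / (2 * h)

      var_Q = np.sum((grad * sigma)**2)   # assumes independent, roughly Gaussian inputs
      print(f"Q = {borehole(x0):.1f} m^3/yr, sigma_Q ~ {np.sqrt(var_Q):.1f} m^3/yr")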

  1. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    Directory of Open Access Journals (Sweden)

    Manuel Perez Malumbres

    2013-02-01

    Full Text Available In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the deviation of the received signal strength from the nominal value predicted by a deterministic propagation model. To facilitate a large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation to the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law, and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.).
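
    A small sketch of the model-fitting step, assuming a synthetic ensemble of transmission losses in place of the ray-tracing output: the mean loss is fit to a log-distance law and the spread to a constant standard deviation, i.e. a log-normal model in linear scale. The path-loss exponent, reference loss and spread used to generate the ensemble are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      # Stand-in for the ray-tracing ensemble: transmission loss [dB] versus distance,
      # with an (assumed) path-loss exponent of 1.7 and 3 dB of environmental spread.
      d0, TL0 = 100.0, 40.0                      # reference distance [m] and loss [dB]
      d = rng.uniform(200.0, 5000.0, size=2000)  # inter-node distances [m]
      TL = TL0 + 10 * 1.7 * np.log10(d / d0) + rng.normal(0.0, 3.0, size=d.size)

      # Fit the statistical model: mean obeys a log-distance law, variance constant.
      X = np.column_stack([np.ones_like(d), 10 * np.log10(d / d0)])
      (beta0, k), *_ = np.linalg.lstsq(X, TL, rcond=None)
      sigma = np.std(TL - X @ np.array([beta0, k]))

      print(f"fitted TL0 = {beta0:.1f} dB, exponent k = {k:.2f}, sigma = {sigma:.2f} dB")
      # TL in dB ~ Normal => the linear-scale loss is log-normal, as in the paper.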

  2. Model design for Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Chen, P.C.

    1991-01-01

    The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff soil site in Hualien, Taiwan, which has historically had slightly more destructive earthquakes than Lotung. The LSST is a joint effort among many interested parties. Electric Power Research Institute (EPRI) and Taipower are the organizers of the program and have the lead in planning and managing it. Other organizations participating in the LSST program are the US Nuclear Regulatory Commission (NRC), the Central Research Institute of Electric Power Industry (CRIEPI), the Tokyo Electric Power Company (TEPCO), the Commissariat A L'Energie Atomique (CEA), Electricite de France (EdF) and Framatome. The LSST was initiated in January 1990 and is envisioned to be five years in duration. Based on the assumption of stiff soil, confirmed by soil boring and geophysical results, the test model was designed to provide the data needed for SSI studies covering: free-field input, nonlinear soil response, non-rigid body SSI, torsional response, kinematic interaction, spatial incoherency and other effects. Taipower had the lead in design of the test model and received significant input from other LSST members. Questions raised by LSST members concerned embedment effects, model stiffness, base shear, and openings for equipment. This paper describes progress in site preparation, design and construction of the model, and development of an instrumentation plan.

  3. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    Science.gov (United States)

    Her, Y. G.

    2017-12-01

    Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to become common in the near future as global-scale remotely sensed data emerge and computing resources advance rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM) to describe two-dimensional overland processes, which require either excessive computing resources to manipulate large matrices (implicit schemes) or small simulation time intervals to maintain the stability of the solution (explicit schemes). Others make unrealistic assumptions such as constant overland flow velocity to reduce the computational loads of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds of different landscapes and sizes, from 3.5 km2 to 2,800 km2, at a spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges were discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understandings, by using remotely sensed hydrological

  4. A large-scale stochastic spatiotemporal model for Aedes albopictus-borne chikungunya epidemiology

    Science.gov (United States)

    Chandra, Nastassya L.; Proestos, Yiannis; Lelieveld, Jos; Christophides, George K.; Parham, Paul E.

    2017-01-01

    Chikungunya is a viral disease transmitted to humans primarily via the bites of infected Aedes mosquitoes. The virus caused a major epidemic in the Indian Ocean in 2004, affecting millions of inhabitants, while cases have also been observed in Europe since 2007. We developed a stochastic spatiotemporal model of Aedes albopictus-borne chikungunya transmission based on our recently developed environmentally-driven vector population dynamics model. We designed an integrated modelling framework incorporating large-scale gridded climate datasets to investigate disease outbreaks on Reunion Island and in Italy. We performed Bayesian parameter inference on the surveillance data, and investigated the validity and applicability of the underlying biological assumptions. The model successfully represents the outbreak and measures of containment in Italy, suggesting wider applicability in Europe. In its current configuration, the model implies two different viral strains, thus two different outbreaks, for the two-stage Reunion Island epidemic. Characterisation of the posterior distributions indicates a possible relationship between the second larger outbreak on Reunion Island and the Italian outbreak. The model suggests that vector control measures, with different modes of operation, are most effective when applied in combination: adult vector intervention has a high impact but is short-lived, larval intervention has a low impact but is long-lasting, and quarantining infected territories, if applied strictly, is effective in preventing large epidemics. We present a novel approach in analysing chikungunya outbreaks globally using a single environmentally-driven mathematical model. Our study represents a significant step towards developing a globally applicable Ae. albopictus-borne chikungunya transmission model, and introduces a guideline for extending such models to other vector-borne diseases. PMID:28362820

  5. Large-scale model of flow in heterogeneous and hierarchical porous media

    Science.gov (United States)

    Chabanon, Morgan; Valdés-Parada, Francisco J.; Ochoa-Tapia, J. Alberto; Goyeau, Benoît

    2017-11-01

    Heterogeneous porous structures are very often encountered in natural environments and in bioremediation processes, among many other settings. Reliable models for momentum transport are crucial whenever mass transport or convective heat transfer occurs in these systems. In this work, we derive a large-scale average model for incompressible single-phase flow in heterogeneous and hierarchical soil porous media composed of two distinct porous regions embedding a solid impermeable structure. The model, based on the local mechanical equilibrium assumption between the porous regions, results in a single momentum transport equation in which the global effective permeability naturally depends on the permeabilities at the intermediate mesoscopic scales and therefore includes the complex hierarchical structure of the soil. The associated closure problem is numerically solved for various configurations and properties of the heterogeneous medium. The results clearly show that the effective permeability increases with the volume fraction of the most permeable porous region. It is also shown that the effective permeability is sensitive to the dimensionality and spatial arrangement of the porous regions and, in particular, depends on the contact between the impermeable solid and the two porous regions.
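
    As a hedged illustration (not the closure problem solved in the paper), the classical Wiener bounds below bracket the large-scale effective permeability of a two-region medium and reproduce the qualitative finding that the effective permeability grows with the volume fraction of the more permeable region; the permeability values are assumptions.

      import numpy as np

      def permeability_bounds(k_eta, k_omega, phi_omega):
          """Classical Wiener bounds on the large-scale effective permeability of a
          two-region medium: flow parallel to the layering (arithmetic mean) and
          perpendicular to it (harmonic mean). These bracket, but do not replace,
          the closure-problem result discussed above."""
          k_arith = phi_omega * k_omega + (1 - phi_omega) * k_eta
          k_harm = 1.0 / (phi_omega / k_omega + (1 - phi_omega) / k_eta)
          return k_harm, k_arith

      for phi in (0.2, 0.5, 0.8):
          lo, hi = permeability_bounds(1e-14, 1e-11, phi)   # [m^2], assumed values
          print(f"phi = {phi}: {lo:.2e} <= k_eff <= {hi:.2e} m^2")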

  6. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.

  7. Generation of large-scale digital elevation models via synthetic aperture radar interferometry

    Energy Technology Data Exchange (ETDEWEB)

    Fornaro, G.; Lanari, R.; Sansosti, E. [Consiglio Nazionale delle Ricerche, Istituto di Ricerca per l' Elettromagnetismo ed i componenti elettronici, Naples (Italy); Tesauro, M.; Franceschetti, G. [Naples Univ. Federico 2., Naples (Italy). Dipt. di Ingegneria Elettronica e delle Telecomunicazioni

    2001-02-01

    The possibility of generating a large-scale Digital Elevation Model by applying the Synthetic Aperture Radar interferometry technique to tandem data acquired by the ERS-1/ERS-2 sensors is investigated. The study focuses mainly on the phase unwrapping step, which represents the most critical point of the overall processing chain. In particular, it concentrates on the unwrapping problems related to the use of a large ERS tandem data set that must be partitioned in order to be unwrapped. The paper discusses the inclusion of external information (even rough) on the scene topography, the application of a region-growing unwrapping technique, and the insertion of possible constraints on the phase to be retrieved in order to minimize the global unwrapping errors. The main goal is the generation of a digital elevation model relative to an area of 300 km by 100 km located in the southern part of Italy. Comparisons between the achieved result and a precise digital terrain model, relative to a smaller area, are also included.
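
    A one-dimensional sketch of the unwrapping step identified above as the most critical one, assuming a typical ERS-tandem height of ambiguity; the actual study uses a two-dimensional region-growing unwrapper on partitioned data, which is not reproduced here.

      import numpy as np

      # A topographic profile produces an interferometric phase proportional to height;
      # the sensor only measures that phase wrapped into (-pi, pi].
      h_amb = 45.0                                   # height of ambiguity [m per 2*pi] (assumed)
      height_true = 300.0 * np.sin(np.linspace(0, 3 * np.pi, 500)) + 400.0
      phase_true = 2 * np.pi * height_true / h_amb
      phase_wrapped = np.angle(np.exp(1j * phase_true))   # wrap into (-pi, pi]

      phase_unwrapped = np.unwrap(phase_wrapped)          # integrate phase gradients
      height_est = h_amb * phase_unwrapped / (2 * np.pi)
      height_est += height_true[0] - height_est[0]        # tie to a known control point

      print("max reconstruction error:", np.max(np.abs(height_est - height_true)), "m")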

  8. Modeling and simulation of microalgae derived hydrogen production in compact large scale photobioreactors

    Science.gov (United States)

    Vargas, Jose; Dias, Fernando; Mariano, Andre; Balmant, Wellington; Rosa, Marcos; Savi, Daiani; Kava, Vanessa; Glienke, Chirlei; Ordonez, Juan

    This study predicts microalgae-derived hydrogen production in compact large-scale photobioreactors (PBRs). A transient mathematical model for the cultivation medium is developed. The tool determines the whole-system temperature and mass-fraction distributions. A mathematical correlation is proposed to calculate the resulting effect on the H2 production rate after genetically modifying the microalgae species. An indigenous microalgae strain was selected for robustness under different weather conditions. This strain was identified through rDNA sequence analysis, including the ITS1, 5.8S and ITS2 (Internal Transcribed Spacer) regions. The ITS analysis showed no genetic divergence between the utilized strain and Acutodesmus obliquus. A coarse mesh (6048 volume elements) was used to obtain results for a large compact PBR (2 m x 5 m x 8 m). The largest computational time required for obtaining results was 560 s. The numerical results for wild-species microalgal growth are validated by direct comparison to experiments. Tests were conducted in the laboratory to assess the H2 production model's numerical results, which are in good qualitative agreement with measurements. Therefore, the model could be used as an efficient tool for the design and control of H2-producing PBR systems. Funding: CNPq.

  9. Large-scale functional models of visual cortex for remote sensing

    Energy Technology Data Exchange (ETDEWEB)

    Brumby, Steven P [Los Alamos National Laboratory; Kenyon, Garrett [Los Alamos National Laboratory; Rasmussen, Craig E [Los Alamos National Laboratory; Swaminarayan, Sriram [Los Alamos National Laboratory; Bettencourt, Luis [Los Alamos National Laboratory; Landecker, Will [PORTLAND STATE UNIV.

    2009-01-01

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massive opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  10. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model.

    Science.gov (United States)

    Gosui, Masato; Yamazaki, Tadashi

    2016-01-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 s completes within 1 s of real-world time, with a temporal resolution of 1 ms. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out a computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements over 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  11. Prospective large-scale field study generates predictive model identifying major contributors to colony losses.

    Directory of Open Access Journals (Sweden)

    Merav Gleit Kielmanowicz

    2015-04-01

    Full Text Available Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health.

  12. Prospective Large-Scale Field Study Generates Predictive Model Identifying Major Contributors to Colony Losses

    Science.gov (United States)

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J. R.; Ballam, Joan M.

    2015-01-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health. PMID:25875764

  13. Assessing Human Modifications to Floodplains using Large-Scale Hydrogeomorphic Floodplain Modeling

    Science.gov (United States)

    Morrison, R. R.; Scheel, K.; Nardi, F.; Annis, A.

    2017-12-01

    Human modifications to floodplains for water resource and flood management purposes have significantly transformed river-floodplain connectivity dynamics in many watersheds. Bridges, levees, reservoirs, shifts in land use, and other hydraulic engineering works have altered flow patterns and caused changes in the timing and extent of floodplain inundation processes. These hydrogeomorphic changes have likely resulted in negative impacts to aquatic habitat and ecological processes. The availability of large-scale topographic datasets at high resolution provide an opportunity for detecting anthropogenic impacts by means of geomorphic mapping. We have developed and are implementing a methodology for comparing a hydrogeomorphic floodplain mapping technique to hydraulically-modeled floodplain boundaries to estimate floodplain loss due to human activities. Our hydrogeomorphic mapping methodology assumes that river valley morphology intrinsically includes information on flood-driven erosion and depositional phenomena. We use a digital elevation model-based algorithm to identify the floodplain as the area of the fluvial corridor laying below water reference levels, which are estimated using a simplified hydrologic model. Results from our hydrogeomorphic method are compared to hydraulically-derived flood zone maps and spatial datasets of levee protected-areas to explore where water management features, such as levees, have changed floodplain dynamics and landscape features. Parameters associated with commonly used F-index functions are quantified and analyzed to better understand how floodplain areas have been reduced within a basin. Preliminary results indicate that the hydrogeomorphic floodplain model is useful for quickly delineating floodplains at large watershed scales, but further analyses are needed to understand the caveats for using the model in determining floodplain loss due to levees. We plan to continue this work by exploring the spatial dependencies of the F

  14. Odor Experience Facilitates Sparse Representations of New Odors in a Large-Scale Olfactory Bulb Model

    Science.gov (United States)

    Zhou, Shanglin; Migliore, Michele; Yu, Yuguo

    2016-01-01

    Prior odor experience has a profound effect on the coding of new odor inputs by animals. The olfactory bulb, the first relay of the olfactory pathway, can substantially shape the representations of odor inputs. How prior odor experience affects the representation of new odor inputs in the olfactory bulb, and the underlying network mechanism, are still unclear. Here we carried out a series of simulations based on a large-scale realistic mitral-granule network model and found that prior odor experience not only accelerated formation of the network but also significantly strengthened sparse responses in the mitral cell network while decreasing sparse responses in the granule cell network. This modulation of sparse representations may be due to the increase in inhibitory synaptic weights. Correlations among mitral cells within the network and correlations between mitral network responses to different odors decreased gradually when the number of prior training odors was increased, resulting in a greater decorrelation of the bulb representations of input odors. Based on these findings, we conclude that prior odor experience facilitates sparse representations of new odors in the mitral cell network through an experience-enhanced inhibition mechanism. PMID:26903819

  15. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Full Text Available Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across these heterogeneous clusters of CPUs and GPUs.

  16. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    Science.gov (United States)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation will give an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for (i) Climate change impact assessments on water resources and dynamics; (ii) The European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) Design variables for infrastructure constructions; (iv) Spatial water-resource mapping; (v) Operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) Input to oceanographic models for operational forecasts and marine status assessments; (vii) Research. The following regional domains have been modelled so far with different resolutions (number of subbasins within brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The Hype web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover the user can explore and download historical time series of discharge for each basin and explore the performance of the model

  17. Linking genes to ecosystem trace gas fluxes in a large-scale model system

    Science.gov (United States)

    Meredith, L. K.; Cueva, A.; Volkmann, T. H. M.; Sengupta, A.; Troch, P. A.

    2017-12-01

    Soil microorganisms mediate biogeochemical cycles through biosphere-atmosphere gas exchange with significant impact on atmospheric trace gas composition. Improving process-based understanding of these microbial populations and linking their genomic potential to the ecosystem-scale is a challenge, particularly in soil systems, which are heterogeneous in biodiversity, chemistry, and structure. In oligotrophic systems, such as the Landscape Evolution Observatory (LEO) at Biosphere 2, atmospheric trace gas scavenging may supply critical metabolic needs to microbial communities, thereby promoting tight linkages between microbial genomics and trace gas utilization. This large-scale model system of three initially homogenous and highly instrumented hillslopes facilitates high temporal resolution characterization of subsurface trace gas fluxes at hundreds of sampling points, making LEO an ideal location to study microbe-mediated trace gas fluxes from the gene to ecosystem scales. Specifically, we focus on the metabolism of ubiquitous atmospheric reduced trace gases hydrogen (H2), carbon monoxide (CO), and methane (CH4), which may have wide-reaching impacts on microbial community establishment, survival, and function. Additionally, microbial activity on LEO may facilitate weathering of the basalt matrix, which can be studied with trace gas measurements of carbonyl sulfide (COS/OCS) and carbon dioxide (O-isotopes in CO2), and presents an additional opportunity for gene to ecosystem study. This work will present initial measurements of this suite of trace gases to characterize soil microbial metabolic activity, as well as links between spatial and temporal variability of microbe-mediated trace gas fluxes in LEO and their relation to genomic-based characterization of microbial community structure (phylogenetic amplicons) and genetic potential (metagenomics). Results from the LEO model system will help build understanding of the importance of atmospheric inputs to

  18. A model comparison study of large-scale mantle lithosphere dynamics driven by subduction

    Science.gov (United States)

    OzBench, Mark; Regenauer-Lieb, Klaus; Stegman, Dave R.; Morra, Gabriele; Farrington, Rebecca; Hale, Alina; May, Dave A.; Freeman, Justin; Bourgouin, Laurent; Mühlhaus, Hans; Moresi, Louis

    2008-12-01

    Modelling subduction involves solving the dynamic interaction between a rigid (solid yet deformable) plate and the fluid (easily deformable) mantle. Previous approaches neglected the solid-like behavior of the lithosphere by only considering a purely fluid description. However, over the past 5 years, a more self-consistent description of a mechanically differentiated subducting plate has emerged. The key feature in this mechanical description is the incorporation of a strong core which provides small resistance to plate bending at subduction zones while simultaneously providing adequate stretching resistance such that slab pull drives forward plate motion. Additionally, the accompanying numerical approaches for simulating large-scale lithospheric deformation processes coupled to the underlying viscous mantle flow have become available. Here we put forward three fundamentally different numerical strategies, each of which is capable of treating the advection of mechanically distinct materials that describe the subducting plate. We demonstrate their robustness by calculating the numerically challenging problem of subduction of a 6000 km wide slab at high resolution in three dimensions, the successful achievement of which only a few codes in the world can presently even attempt. Despite the differences between the approaches, all three codes pass the simple qualitative test of developing an "S-bend" trench curvature previously observed in similar models. While reproducing this emergent feature validates that the lithosphere-mantle interaction has been correctly modelled, this is not a numerical benchmark in the traditional sense where the objective is for all codes to achieve exact agreement on a unique numerical solution. However, we do provide some quantitative comparisons such as trench and plate kinematics in addition to discussing the strengths and weaknesses of the individual approaches. Consequently, we believe these developed algorithms can now be applied to

  19. Large-scale collection and annotation of gene models for date palm (Phoenix dactylifera, L.).

    Science.gov (United States)

    Zhang, Guangyu; Pan, Linlin; Yin, Yuxin; Liu, Wanfei; Huang, Dawei; Zhang, Tongwu; Wang, Lei; Xin, Chengqi; Lin, Qiang; Sun, Gaoyuan; Ba Abdullah, Mohammed M; Zhang, Xiaowei; Hu, Songnian; Al-Mssallem, Ibrahim S; Yu, Jun

    2012-08-01

    The date palm (Phoenix dactylifera L.), famed for its sugar-rich fruits (dates) and cultivated by humans since 4,000 B.C., is an economically important crop in the Middle East, Northern Africa, and increasingly other places where climates are suitable. Despite a long history of human cultivation, the understanding of P. dactylifera genetics and molecular biology is rather limited, hindered by a lack of high-quality basic data from genomics and transcriptomics. Here we report a large-scale effort in generating gene models (assembled expressed sequence tags, or ESTs, mapped to a genome assembly) for P. dactylifera, using the long-read pyrosequencing platform (Roche/454 GS FLX Titanium) in high coverage. We built fourteen cDNA libraries from different P. dactylifera tissues (cultivar Khalas) and acquired 15,778,993 raw sequencing reads (about one million sequencing reads per library), and the pooled sequences were assembled into 67,651 non-redundant contigs and 301,978 singletons. We annotated 52,725 contigs based on the plant databases and 45 contigs based on functional domains with reference to the Pfam database. From the annotated contigs, we assigned GO (Gene Ontology) terms to 36,086 contigs and KEGG pathways to 7,032 contigs. Our comparative analysis showed that 70.6 % (47,930), 69.4 % (47,089), 68.4 % (46,441), and 69.3 % (47,048) of the P. dactylifera gene models are shared with rice, sorghum, Arabidopsis, and grapevine, respectively. We also classified our gene models into house-keeping and tissue-specific genes based on their tissue specificity.

  20. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    Directory of Open Access Journals (Sweden)

    Robert Jarosch

    2008-12-01

    Full Text Available This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical "two-component model" of active muscle differentiated a "contractile component" which stretches the "series elastic component" during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the "series elastic components". Large scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i.e. latency relaxation, the Fenn effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc., are explained and interpreted with the help of the model experiments.

  1. Identification of water quality degradation hotspots in developing countries by applying large scale water quality modelling

    Science.gov (United States)

    Malsy, Marcus; Reder, Klara; Flörke, Martina

    2014-05-01

    Decreasing water quality is one of the main global issues, posing risks to food security, the economy, and public health, and is consequently crucial for ensuring environmental sustainability. During the last decades access to clean drinking water increased, but 2.5 billion people still do not have access to basic sanitation, especially in Africa and parts of Asia. In this context not only the connection to a sewage system is of high importance, but also treatment, as an increasing connection rate will lead to higher loadings and therefore higher pressure on water resources. Furthermore, poor people in developing countries use local surface waters for daily activities, e.g. bathing and washing. Water utilization and water sewerage are thus inextricably connected. In this study, large-scale water quality modelling is used to identify hotspots of water pollution and to gain insight into potential environmental impacts, in particular in regions with a low observation density and data gaps in measured water quality parameters. We applied the global water quality model WorldQual to calculate biological oxygen demand (BOD) loadings from point and diffuse sources, as well as in-stream concentrations. The regional focus of this study is on developing regions, i.e. Africa, Asia, and South America, as they are most affected by water pollution. Model runs were conducted for the year 2010 to draw a picture of the recent status of surface water quality and to identify hotspots and main causes of pollution. First results show that hotspots mainly occur in highly agglomerated regions where population density is high. Large urban areas are the initial loading hotspots, and pollution prevention and control become increasingly important as point sources are subject to connection rates and treatment levels. Furthermore, river discharge plays a crucial role due to its dilution potential, especially in terms of seasonal variability. Highly varying shares of BOD sources across
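
    The following is a minimal, illustrative sketch (not the WorldQual code) of the basic bookkeeping such a model performs: combining point and diffuse BOD loadings with river discharge to obtain an in-stream concentration, which makes the role of dilution in the seasonal variability visible. All function names and numbers are assumptions for illustration only.

        # Toy BOD dilution calculation; units and values are illustrative.
        def instream_bod_concentration(point_load_kg_per_day,
                                       diffuse_load_kg_per_day,
                                       discharge_m3_per_s,
                                       loss_fraction=0.0):
            """Return an in-stream BOD concentration in mg/L from daily loads and discharge."""
            total_load_mg_per_day = (point_load_kg_per_day + diffuse_load_kg_per_day) * 1e6
            volume_l_per_day = discharge_m3_per_s * 86400 * 1000.0
            return total_load_mg_per_day * (1.0 - loss_fraction) / volume_l_per_day

        # A densely populated catchment: low dry-season discharge vs. high wet-season discharge
        dry = instream_bod_concentration(5_000, 1_000, discharge_m3_per_s=20.0)
        wet = instream_bod_concentration(5_000, 1_000, discharge_m3_per_s=200.0)
        print(f"dry season ~ {dry:.2f} mg/L, wet season ~ {wet:.2f} mg/L")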

  2. Large-scale model-based assessment of deer-vehicle collision risk.

    Directory of Open Access Journals (Sweden)

    Torsten Hothorn

    Full Text Available Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high effort and cost associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated with browsing intensity and with harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining
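
    As a rough illustration of the modelling idea (count regression of collision numbers on environmental covariates), the sketch below fits a plain Poisson GLM to synthetic data; the study itself used more flexible structured additive models with spatial effects, so this is only the simplest relative of that approach. All covariate names and coefficients are invented.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500                                   # hypothetical municipalities
        forest_share = rng.uniform(0, 1, n)       # illustrative covariates
        road_km = rng.uniform(5, 80, n)
        browsing = rng.uniform(0, 1, n)

        # Synthetic "true" collision rate rising with road length and browsing intensity
        lam = np.exp(-1.0 + 0.8 * forest_share + 0.02 * road_km + 1.2 * browsing)
        collisions = rng.poisson(lam)

        X = sm.add_constant(np.column_stack([forest_share, road_km, browsing]))
        fit = sm.GLM(collisions, X, family=sm.families.Poisson()).fit()
        collision_index = fit.predict(X) / road_km   # expected collisions per road km
        print(fit.params)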

  3. Analytical modeling of the statistical properties of the contrast of large-scale irregularities of the ionosphere

    Science.gov (United States)

    Vsekhsviatskaia, I. S.; Evstratova, E. A.; Kalinin, Iu. K.; Romanchuk, A. A.

    1989-08-01

    An analytical model is proposed for the distribution of variations of the relative contrast of the electron density of large-scale ionospheric irregularities. The model is characterized by nonzero asymmetry and excess. It is shown that the model can be applied to horizontal irregularity scales from hundreds to thousands of kilometers.

  4. Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster

    Science.gov (United States)

    2011-01-01

    Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Clusters, James F. Kelly and Francis ... present performance statistics to explain the scalability behavior. Keywords: atmospheric models, time integrators, MPI, scalability, performance. ... moving toward the nonhydrostatic regime. The nonhydrostatic atmospheric models, which run at resolutions finer than 10 km, possess fast-moving

  5. Hierarchical formation of large scale structures of the Universe: observations and models

    International Nuclear Information System (INIS)

    Maurogordato, Sophie

    2003-01-01

    In this report for an Accreditation to Supervise Research (HDR), the author proposes an overview of her research works in cosmology. These works notably addressed the large-scale distribution of the Universe (with constraints on the scenario of formation, on the bias relationship, and on the structuring of clusters), the analysis of galaxy clusters during coalescence, and the mass distribution within relaxed clusters.

  6. Modeling large scale cohesive sediment transport affected by small scale biological activity

    NARCIS (Netherlands)

    Borsje, Bastiaan Wijnand; de Vries, Mindert; Hulscher, Suzanne J.M.H.; de Boer, Gerben J.

    2008-01-01

    Biological activity on the seabed is known to have a significant influence on the dynamics of cohesive sediment on small spatial and temporal scales. In this study, we aim to understand the large-scale effects of small-scale biological activity. To this end, effects of biology are

  7. Delineating large-scale migratory connectivity of reed warblers using integrated multistate models

    NARCIS (Netherlands)

    Procházka, Petr; Hahn, Steffen; Rolland, Simon; van der Jeugd, Henk; Csörgő, Tibor; Jiguet, Frédéric; Mokwa, Tomasz; Liechti, Felix; Vangeluwe, Didier; Korner-Nievergelt, Fränzi

    2017-01-01

    Aim Assessing the extent of large-scale migratory connectivity is crucial for understanding the evolution of migratory systems and effective species conservation. It has been, however, difficult to elucidate the annual whereabouts of migratory populations of small animals across the annual cycle.

  8. Idealised modelling of storm surges in large-scale coastal basins

    NARCIS (Netherlands)

    Chen, Wenlong

    2015-01-01

    Coastal areas around the world are frequently attacked by various types of storms, threatening human life and property. This study aims to understand storm surge processes in large-scale coastal basins, particularly focusing on the influences of geometry, topography and storm characteristics on the

  9. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    Energy Technology Data Exchange (ETDEWEB)

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically and organizationally separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally

  10. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    International Nuclear Information System (INIS)

    Schroeder, William J.

    2011-01-01

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically and organizationally separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  11. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here we examine the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood, within the column quasi-geostrophic framework. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  12. Studies on combined model based on functional objectives of large scale complex engineering

    Science.gov (United States)

    Yuting, Wang; Jingchun, Feng; Jiabao, Sun

    2018-03-01

    Large-scale complex engineering includes various functions, and each function is realized through the completion of one or more projects, so the combined projects affecting each function need to be identified. Based on the types of project portfolio, the relationships between projects and their functional objectives were analyzed. On that premise, portfolio-project techniques based on functional objectives were introduced, and we then studied and proposed the principles of such techniques. In addition, the processes of combining projects were also constructed. With the help of portfolio-project techniques based on the functional objectives of projects, our research findings lay a foundation for portfolio management of large-scale complex engineering.

  13. A mouse model for studying large-scale neuronal networks using EEG mapping techniques.

    Science.gov (United States)

    Mégevand, Pierre; Quairiaux, Charles; Lascano, Agustina M; Kiss, Jozsef Z; Michel, Christoph M

    2008-08-15

    Human functional imaging studies are increasingly focusing on the identification of large-scale neuronal networks, their temporal properties, their development, and their plasticity and recovery after brain lesions. A method targeting large-scale networks in rodents would open the possibility to investigate their neuronal and molecular basis in detail. We here present a method to study such networks in mice with minimal invasiveness, based on the simultaneous recording of epicranial EEG from 32 electrodes regularly distributed over the head surface. Spatiotemporal analysis of the electrical potential maps similar to human EEG imaging studies allows quantifying the dynamics of the global neuronal activation with sub-millisecond resolution. We tested the feasibility, stability and reproducibility of the method by recording the electrical activity evoked by mechanical stimulation of the mystacial vibrissae. We found a series of potential maps with different spatial configurations that suggested the activation of a large-scale network with generators in several somatosensory and motor areas of both hemispheres. The spatiotemporal activation pattern was stable both across mice and in the same mouse across time. We also performed 16-channel intracortical recordings of the local field potential across cortical layers in different brain areas and found tight spatiotemporal concordance with the generators estimated from the epicranial maps. Epicranial EEG mapping thus allows assessing sensory processing by large-scale neuronal networks in living mice with minimal invasiveness, complementing existing approaches to study the neurophysiological mechanisms of interaction within the network in detail and to characterize their developmental, experience-dependent and lesion-induced plasticity in normal and transgenic animals.
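
    A small sketch of the kind of spatiotemporal summary used in such map analyses: the global field power (GFP), i.e. the spatial standard deviation of the average-referenced potential map at each time point, computed here on synthetic data shaped like a 32-electrode epicranial recording. This is a generic EEG measure, not the authors' full analysis pipeline.

        import numpy as np

        def global_field_power(eeg):
            """GFP(t): spatial standard deviation of the potential map at each sample."""
            avg_ref = eeg - eeg.mean(axis=0, keepdims=True)   # re-reference to the average
            return avg_ref.std(axis=0)

        fs = 2000                                   # Hz, assumed sampling rate
        t = np.arange(0, 0.2, 1 / fs)               # 200 ms epoch
        eeg = 0.5 * np.random.randn(32, t.size)     # 32 channels of noise
        eeg[:16, 30:60] += 5.0                      # hypothetical evoked topography at ~15-30 ms
        gfp = global_field_power(eeg)
        print(f"GFP peak at ~{1000 * t[np.argmax(gfp)]:.1f} ms")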

  14. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    OpenAIRE

    Kadir Alpaslan DEMIR

    2015-01-01

    Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars in the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of it. Historically, the project performances observed in the development ...

  15. How do parcellation size and short-range connectivity affect dynamics in large-scale brain network models?

    Science.gov (United States)

    Proix, Timothée; Spiegler, Andreas; Schirner, Michael; Rothmeier, Simon; Ritter, Petra; Jirsa, Viktor K

    2016-11-15

    Recent efforts to model human brain activity on the scale of the whole brain rest on connectivity estimates of large-scale networks derived from diffusion magnetic resonance imaging (dMRI). This type of connectivity describes white matter fiber tracts. The number of short-range cortico-cortical white-matter connections is, however, underrepresented in such large-scale brain models. It is still unclear, on the one hand, which scale of representation of white matter fibers is optimal to describe brain activity on a large scale, such as recorded with magneto- or electroencephalography (M/EEG) or functional magnetic resonance imaging (fMRI), and, on the other hand, to what extent short-range connections, which are typically local, should be taken into account. In this article we quantified the effect of connectivity upon large-scale brain network dynamics by (i) systematically varying the number of brain regions before computing the connectivity matrix, and by (ii) adding generic short-range connections. We used dMRI data from the Human Connectome Project. We developed a suite of preprocessing modules called SCRIPTS to prepare these imaging data for The Virtual Brain, a neuroinformatics platform for large-scale brain modeling and simulations. We performed simulations under different connectivity conditions and quantified the spatiotemporal dynamics in terms of Shannon Entropy, dwell time and Principal Component Analysis. For the reconstructed connectivity, our results show that the major white matter fiber bundles play an important role in shaping slow dynamics in large-scale brain networks (e.g. in fMRI). Faster dynamics such as gamma oscillations (around 40 Hz) are sensitive to the short-range connectivity if transmission delays are considered. Copyright © 2016 Elsevier Inc. All rights reserved.
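
    Two of the summary measures named in the abstract, Shannon entropy and dwell time, are easy to state concretely once the simulated activity has been reduced to a sequence of discrete network states (e.g. by clustering). The sketch below computes both on a toy state sequence; it is not The Virtual Brain API, and the clustering step that would produce such labels is omitted.

        import numpy as np

        def shannon_entropy(labels):
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def mean_dwell_time(labels, dt):
            """Average time spent in a state before switching (same units as dt)."""
            change = np.flatnonzero(np.diff(labels) != 0)
            bounds = np.concatenate(([0], change + 1, [len(labels)]))
            return (np.diff(bounds) * dt).mean()

        states = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 0, 0, 1])   # toy state labels
        print(shannon_entropy(states), mean_dwell_time(states, dt=0.01))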

  16. 2D segmented large inkjet printhead for high speed 3D printers

    International Nuclear Information System (INIS)

    Einat, Moshe; Bar-Levav, Elkana

    2015-01-01

    Three-dimensional (3D) printing is a fast-developing technology these days. However, 3D printing of a model takes many hours. Therefore, enlarging the printhead and increasing the printing speed are important to this technology. In order to enable the enlargement of the printhead, a different approach and design are suggested and tested experimentally. The printhead is divided into small segments; each one is autonomous and not fluidically connected to the neighboring segments. Each segment contains a micro reservoir and a few nozzles. The segments are manufactured together in close proximity to each other on the same substrate, enabling area coverage. A segmented printhead based on this approach was built and tested. The micro reservoir ink-filling method and the operation of the segments were experimentally proven. Ink drops were obtained and the lifetime of the resistors was measured. Electrical characteristics of power and energy for proper operation were obtained. A 3D model printed according to the suggested approach can be completed in less than a minute. (paper)
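
    A back-of-the-envelope sketch of why full-area coverage matters for print time: a conventional head must scan the build area in many swaths per layer, whereas a segmented head spanning the full width deposits a layer in a single pass. The widths, pass times and layer count below are illustrative assumptions, not figures from the paper.

        def layer_time_scanning(build_width_mm, swath_width_mm, pass_time_s):
            """Time per layer for a head that scans the area in successive swaths."""
            passes = -(-build_width_mm // swath_width_mm)      # ceiling division
            return passes * pass_time_s

        def layer_time_full_width(pass_time_s):
            """Time per layer for a segmented head covering the full build width."""
            return pass_time_s

        layers = 400                                            # hypothetical model height
        scanning = layers * layer_time_scanning(200, 10, 1.5)
        segmented = layers * layer_time_full_width(1.5)
        print(f"scanning head ~ {scanning/60:.0f} min, full-width segmented head ~ {segmented/60:.0f} min")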

  17. Evaluation model of project complexity for large-scale construction projects in Iran - A Fuzzy ANP approach

    Directory of Open Access Journals (Sweden)

    Aliyeh Kazemi

    2016-09-01

    Full Text Available Construction projects have always been complex. With the growing trend of this complexity, implementation of large-scale construction becomes harder. Hence, evaluating and understanding these complexities is critical. Correct evaluation of project complexity can provide executives and managers with a sound basis for decision-making. The fuzzy analytic network process (ANP) is a logical and systematic approach to definition, evaluation, and grading. This method allows complex systems to be analyzed and their complexity to be determined. In this study, by taking advantage of fuzzy ANP, effective indexes of complexity in large-scale construction projects in Iran have been determined and prioritized. The results show that the socio-political, project system interdependency, and technological complexity indexes ranked as the top three. Furthermore, in a comparison of three main types of very large projects (commercial-administrative, hospital, and skyscraper), the hospital project was evaluated as the most complex. This model is beneficial for professionals in managing large-scale projects.
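
    One ingredient of the fuzzy ANP procedure can be sketched compactly: expert judgements expressed as triangular fuzzy numbers (l, m, u) are defuzzified to crisp scores and normalised into weights for ranking. The full ANP supermatrix computation and the pairwise-comparison stage are omitted, and the index names and values below are illustrative only.

        def defuzzify(tfn):
            """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
            l, m, u = tfn
            return (l + m + u) / 3.0

        # Hypothetical aggregated fuzzy importance of three complexity indexes
        fuzzy_scores = {
            "socio-political": (0.6, 0.8, 1.0),
            "project system interdependencies": (0.5, 0.7, 0.9),
            "technological": (0.4, 0.6, 0.8),
        }

        crisp = {name: defuzzify(tfn) for name, tfn in fuzzy_scores.items()}
        total = sum(crisp.values())
        weights = {name: value / total for name, value in crisp.items()}
        for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
            print(f"{name}: {w:.3f}")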

  18. Large-scale 3-D modeling by integration of resistivity models and borehole data through inversion

    DEFF Research Database (Denmark)

    Foged, N.; Marker, Pernille Aabye; Christiansen, A. V.

    2014-01-01

    and the borehole data set in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the procedure to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey...... in the parameterization of the 3-D model covering 156 km2. The final five-cluster 3-D model differentiates between clay materials and different high-resistivity materials from information held in the resistivity model and borehole observations, respectively....
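
    The clustering step described here can be illustrated with a minimal sketch: each model cell carries a resistivity value and a borehole-derived lithology indicator, the two are standardised into a common feature space, and k-means assigns every cell to one of a small number of structural classes. The data below are synthetic and the feature choice is an assumption; only the five-cluster choice follows the abstract.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n_cells = 10_000
        log_resistivity = rng.normal(1.5, 0.6, n_cells)   # log10(ohm m), synthetic
        clay_fraction = rng.uniform(0.0, 1.0, n_cells)    # assumed borehole-derived variable

        X = StandardScaler().fit_transform(np.column_stack([log_resistivity, clay_fraction]))
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
        # `labels` assigns each cell to one of five clusters; mapping them back onto
        # the 3-D grid yields the structural model discussed in the abstract.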

  19. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...

  20. Estimating Route Choice Models from Stochastically Generated Choice Sets on Large-Scale Networks Correcting for Unequal Sampling Probability

    DEFF Research Database (Denmark)

    Vacca, Alessandro; Prato, Carlo Giacomo; Meloni, Italo

    2015-01-01

    is the dependency of the parameter estimates on the choice set generation technique. Bias introduced in model estimation has been corrected only for the random walk algorithm, which has problematic applicability to large-scale networks. This study proposes a correction term for the sampling probability of routes...

  1. Lichen elemental content bioindicators for air quality in upper Midwest, USA: A model for large-scale monitoring

    Science.gov (United States)

    Susan Will-Wolf; Sarah Jovan; Michael C. Amacher

    2017-01-01

    Our development of lichen elemental bioindicators for a United States of America (USA) national monitoring program is a useful model for other large-scale programs. Concentrations of 20 elements were measured, validated, and analyzed for 203 samples of five common lichen species. Collections were made by trained non-specialists near 75 permanent plots and an expert...

  2. Development of lichen response indexes using a regional gradient modeling approach for large-scale monitoring of forests

    Science.gov (United States)

    Susan Will-Wolf; Peter Neitlich

    2010-01-01

    Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...

  3. Analysis and Design Environment for Large Scale System Models and Collaborative Model Development Project

    Data.gov (United States)

    National Aeronautics and Space Administration — As NASA modeling efforts grow more complex and more distributed among many working groups, new tools and technologies are required to integrate their efforts...

  4. Analysis and Design Environment for Large Scale System Models and Collaborative Model Development, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — As NASA modeling efforts grow more complex and more distributed among many working groups, new tools and technologies are required to integrate their efforts...

  5. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    Science.gov (United States)

    Güntner, Andreas

    2002-07-01

    the framework of an integrated model which contains modules that do not work on the basis of natural spatial units. The target units mentioned above are disaggregated in Wasa into smaller modelling units within a new multi-scale, hierarchical approach. The landscape units defined in this scheme capture in particular the effect of structured variability of terrain, soil and vegetation characteristics along toposequences on soil moisture and runoff generation. Lateral hydrological processes at the hillslope scale, such as the reinfiltration of surface runoff, which are of particular importance in semi-arid environments, can thus also be represented within the large-scale model in a simplified form. Depending on the resolution of available data, small-scale variability is not represented explicitly with geographic reference in Wasa, but by the distribution of sub-scale units and by statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa which respect specific features of semi-arid hydrology are: (1) A two-layer model for evapotranspiration comprises energy transfer at the soil surface (including soil evaporation), which is of importance in view of the mainly sparse vegetation cover. Additionally, vegetation parameters are differentiated in space and time depending on the occurrence of the rainy season. (2) The infiltration module represents in particular infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach respecting different reservoir size classes and their interaction via the river network is applied. (4) A model for the quantification of water withdrawal by water use in different sectors is coupled to Wasa. (5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied.

  6. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    Full Text Available The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin that contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reasonably well reproduce the observed groundwater head time series. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.
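
    The offline coupling mentioned as a limitation can be made concrete with a small sketch: the land surface model is run first for the whole period, its recharge and river levels are stored, and the groundwater model is then forced with those stored series, so simulated heads can never feed back on soil moisture or surface water levels within the run. The function names are placeholders, not the PCR-GLOBWB or MODFLOW interfaces.

        def run_offline_coupling(forcing_by_step, land_surface_model, groundwater_model):
            # Step 1: land surface model produces recharge and river levels for every step
            recharge, river_levels = [], []
            for forcing in forcing_by_step:
                out = land_surface_model(forcing)
                recharge.append(out["recharge"])
                river_levels.append(out["river_level"])
            # Step 2: groundwater model is driven by the stored series (no feedback)
            return groundwater_model(recharge, river_levels)

        # Toy stand-ins so the sketch runs end to end
        def toy_lsm(precip):
            return {"recharge": 0.3 * precip, "river_level": 10.0 + 0.1 * precip}

        def toy_gw(recharge, river_levels):
            head, heads = 5.0, []
            for r in recharge:
                head = 0.9 * head + r          # crude linear-reservoir head response
                heads.append(round(head, 2))
            return heads

        print(run_offline_coupling([50, 80, 20, 0], toy_lsm, toy_gw))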

  7. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Science.gov (United States)

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.
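
    A highly simplified sketch of the general idea, using synthetic stand-ins for both the Google Trends series and the official confidence index: normalise the search-volume series for a basket of query terms, orient each term by the sign of its association with the target, average them into a single behavioural index, and check the correlation. The term basket, the sign convention and all series here are assumptions; the published C3I construction is more involved.

        import numpy as np

        rng = np.random.default_rng(42)
        months = 60
        confidence = 100 + np.cumsum(rng.normal(0, 1, months))   # synthetic official index

        terms = {                                                 # synthetic search volumes
            "job openings": confidence + rng.normal(0, 2, months),
            "unemployment": -confidence + rng.normal(0, 2, months),
            "luxury goods": confidence + rng.normal(0, 3, months),
        }

        def zscore(x):
            return (x - x.mean()) / x.std()

        signed = [np.sign(np.corrcoef(v, confidence)[0, 1]) * zscore(v) for v in terms.values()]
        behavioural_index = np.mean(signed, axis=0)
        print("correlation with official index:",
              round(np.corrcoef(behavioural_index, confidence)[0, 1], 3))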

  8. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Directory of Open Access Journals (Sweden)

    Xianlei Dong

    Full Text Available Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  9. Angular momentum-large-scale structure alignments in ΛCDM models and the SDSS

    Science.gov (United States)

    Paz, Dante J.; Stasyszyn, Federico; Padilla, Nelson D.

    2008-09-01

    We study the alignments between the angular momentum of individual objects and the large-scale structure in cosmological numerical simulations and real data from the Sloan Digital Sky Survey, Data Release 6 (SDSS-DR6). To this end, we measure anisotropies in the two-point cross-correlation function around simulated haloes and observed galaxies, studying separately the one- and two-halo regimes. The alignment of the angular momentum of dark-matter haloes in Λ cold dark matter (ΛCDM) simulations is found to be dependent on scale and halo mass. At large distances (two-halo regime), the spins of high-mass haloes are preferentially oriented in the direction perpendicular to the distribution of matter; lower mass systems show a weaker trend that may even reverse to show an angular momentum in the plane of the matter distribution. In the one-halo term regime, the angular momentum is aligned in the direction perpendicular to the matter distribution; the effect is stronger than for the two-halo term and increases for higher mass systems. On the observational side, we focus our study on galaxies in the SDSS-DR6 with elongated apparent shapes, and study alignments with respect to the major semi-axis. We study five samples of edge-on galaxies; the full SDSS-DR6 edge-on sample, bright galaxies, faint galaxies, red galaxies and blue galaxies (the latter two consisting mainly of ellipticals and spirals, respectively). Using the two-halo term of the projected correlation function, we find an excess of structure in the direction of the major semi-axis for all samples; the red sample shows the highest alignment (2.7 +/- 0.8 per cent) and indicates that the angular momentum of flattened spheroidals tends to be perpendicular to the large-scale structure. These results are in qualitative agreement with the numerical simulation results indicating that the angular momentum of galaxies could be built up as in the Tidal Torque scenario. The one-halo term only shows a significant alignment

  10. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    Science.gov (United States)

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in the large-scale affected area of a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367
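
    The objective behind such a location-allocation model can be illustrated with a toy calculation: each demand site is assigned to a depot, the expected response time is travel time plus a queuing delay at that depot (here a simple M/M/1 waiting-time term), and the assignment minimising the demand-weighted mean is sought. The paper solves the full multi-echelon problem with a genetic algorithm; the brute-force search and all rates and travel times below are illustrative assumptions.

        import itertools
        import numpy as np

        travel_h = np.array([[1.0, 3.0],       # travel time from each depot to site i (h)
                             [2.5, 1.5],
                             [3.0, 1.0]])
        arrival = np.array([2.0, 1.5, 2.5])    # rescue-demand arrival rate per site (1/h)
        service = np.array([6.0, 6.0])         # service rate of each depot (1/h)

        def mean_response(assign):
            """Demand-weighted mean response time for a site-to-depot assignment."""
            total = 0.0
            for j in range(len(service)):
                lam = arrival[assign == j].sum()
                if lam >= service[j]:
                    return np.inf                                  # unstable queue
                wait = lam / (service[j] * (service[j] - lam))     # M/M/1 waiting time
                total += ((travel_h[assign == j, j] + wait) * arrival[assign == j]).sum()
            return total / arrival.sum()

        best = min(itertools.product(range(len(service)), repeat=len(arrival)),
                   key=lambda a: mean_response(np.array(a)))
        print("best assignment:", best, "mean response ~", round(mean_response(np.array(best)), 2), "h")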

  11. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2014-01-01

    Full Text Available This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in the large-scale affected area of a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.

  12. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements, and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  13. Numerical modeling of in-vessel melt water interaction in large scale PWRs

    Energy Technology Data Exchange (ETDEWEB)

    Kolev, N.I. [Siemens AG, KWU NA-M, Erlangen (Germany)

    1998-01-01

    This paper presents a comparison between IVA4 simulations and the FARO L14 and L20 experiments. Both experiments were performed with the same geometry but under different initial pressures, 51 and 20 bar respectively. A pretest prediction for test L21, which is intended to be performed under an initial pressure of 5 bar, is also presented. The strong effect of the volume expansion of the evaporating water at low pressure is demonstrated. An in-vessel simulation for a 1500 MW el. PWR is presented. The insight gained from this study is that at no time do conditions for the feared large-scale melt-water intermixing at low pressure arise, owing to the limiting effect of the expansion process, which accelerates the melt and the water into all available flow paths. (author)

  14. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    OpenAIRE

    Tang Xiaofeng; Gao Feng; Xu Guoyan; Ding Nenggen; Cai Yao; Liu Jian Xing

    2014-01-01

    The paper presents three time-warning distances for the safe driving of a large-scale system of multiple groups of vehicles approaching a highway tunnel environment, based on a distributed model predictive control approach. Generally speaking, the system includes two parts. First, multiple vehicles are divided into multiple groups. Meanwhile, the distributed model predictive control approach is proposed to calculate the information framework of each group. Each group of optimi...

  15. An OMNeT++ model of the control system of large-scale concentrator photovoltaic power plants: Poster abstract

    OpenAIRE

    Benoit, P.; Fey, S.; Rohbogner, G.; Kreifels, N.; Kohrs, R.

    2013-01-01

    The communication system of a large-scale concentrator photovoltaic power plant is very challenging. Manufacturers are building power plants having thousands of sun tracking systems equipped with communication and distributed over a wide area. Research is necessary to build a scalable communication system enabling modern control strategies. This poster abstract describes the ongoing work on the development of a simulation model of such power plants in OMNeT++. The model uses the INET Framewor...

  16. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
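
    The distinction between perfectly synchronous (parallel) and sequential updating is easy to demonstrate on a toy totalistic rule. In the sketch below, all cells of a Boolean ring are updated from the old configuration at once in one function, and one at a time in a fixed order in the other, and the two resulting configurations generally differ; the rule and initial state are arbitrary illustrations, not taken from the paper.

        import numpy as np

        def step_synchronous(cells, threshold=2):
            """All cells read the old configuration and update simultaneously."""
            left, right = np.roll(cells, 1), np.roll(cells, -1)
            return (left + cells + right >= threshold).astype(int)

        def step_sequential(cells, threshold=2):
            """Cells update one at a time, each seeing its neighbours' latest states."""
            cells = cells.copy()
            n = len(cells)
            for i in range(n):                     # fixed left-to-right interleaving
                s = cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]
                cells[i] = int(s >= threshold)
            return cells

        start = np.array([0, 1, 0, 1, 1, 0, 0, 1])
        print("synchronous:", step_synchronous(start))
        print("sequential :", step_sequential(start))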

  17. Integrating an agent-based model into a large-scale hydrological model for evaluating drought management in California

    Science.gov (United States)

    Sheffield, J.; He, X.; Wada, Y.; Burek, P.; Kahil, M.; Wood, E. F.; Oppenheimer, M.

    2017-12-01

    California has endured record-breaking drought since winter 2011 and will likely experience more severe and persistent drought in the coming decades under a changing climate. At the same time, human water management practices can also affect drought frequency and intensity, which underscores the importance of human behaviour in effective drought adaptation and mitigation. Currently, although a few large-scale hydrological and water resources models (e.g., PCR-GLOBWB) consider human water use and management practices (e.g., irrigation, reservoir operation, groundwater pumping), none of them includes the dynamic feedback between local human behaviors/decisions and the natural hydrological system. It is, therefore, vital to integrate social and behavioral dimensions into current hydrological modeling frameworks. This study applies the agent-based modeling (ABM) approach and couples it with a large-scale hydrological model (i.e., Community Water Model, CWatM) in order to have a balanced representation of social, environmental and economic factors and a more realistic representation of the bi-directional interactions and feedbacks in coupled human and natural systems. In this study, we focus on drought management in California and consider two types of agents, (groups of) farmers and state management authorities, and assume that their corresponding objectives are to maximize the net crop profit and to maintain sufficient water supply, respectively. Farmers' behaviors are linked with local agricultural practices such as cropping patterns and deficit irrigation. More precisely, farmers' decisions are incorporated into CWatM across different time scales in terms of daily irrigation amount, seasonal/annual decisions on crop types and irrigated area as well as the long-term investment in irrigation infrastructure. This simulation-based optimization framework is further applied by performing different sets of scenarios to investigate and evaluate the effectiveness
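
    A deliberately small sketch of the farmer-agent idea described above: given a seasonal water allocation, the agent picks the irrigation depth that maximises net crop profit under a simple concave yield response, so that when the authority agent cuts allocations in a drought the applied water (and hence the demand passed back to the hydrological model) changes. The yield curve, prices and costs are invented for illustration and are not the CWatM or ABM implementation.

        import numpy as np

        def net_profit(irrigation_mm, allocation_mm, crop_price=0.3, water_cost=2.0):
            """Concave yield response; irrigation is capped at the allocation."""
            irrigation_mm = min(irrigation_mm, allocation_mm)
            yield_t_per_ha = 8.0 * (1.0 - np.exp(-irrigation_mm / 300.0))
            return crop_price * 1000 * yield_t_per_ha - water_cost * irrigation_mm

        def farmer_decision(allocation_mm):
            """Farmer agent: choose the irrigation depth (mm) with the highest net profit."""
            options = np.arange(0, allocation_mm + 1, 10)
            return int(max(options, key=lambda i: net_profit(i, allocation_mm)))

        for allocation in (600, 300, 150):             # authority agent tightening supply
            print(allocation, "mm allocated ->", farmer_decision(allocation), "mm applied")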

  18. Analysis of effectiveness of possible queuing models at gas stations using the large-scale queuing theory

    Directory of Open Access Journals (Sweden)

    Slaviša M. Ilić

    2011-10-01

    Full Text Available This paper analyzes the effectiveness of possible models for queuing at gas stations, using a mathematical model from large-scale queuing theory. Based on actual data collected and a statistical analysis of the expected intensity of vehicle arrivals and queuing at gas stations, the real queuing process was modelled mathematically and certain parameters were quantified, identifying the weaknesses of the existing models and the possible benefits of an automated queuing model.
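
    The core calculation in such a comparison can be sketched with the standard M/M/c (Erlang C) formulas: given an arrival rate, a per-pump service rate and a number of pumps, compute the probability of waiting and the mean waiting time, then compare candidate station configurations. The rates below are illustrative, not the collected data.

        import math

        def erlang_c(c, a):
            """Probability of waiting in an M/M/c queue with offered load a = lambda/mu."""
            head = sum(a**k / math.factorial(k) for k in range(c))
            tail = a**c / (math.factorial(c) * (1 - a / c))
            return tail / (head + tail)

        def mmc_mean_wait(lam, mu, c):
            """Mean waiting time in queue (requires lam < c * mu)."""
            return erlang_c(c, lam / mu) / (c * mu - lam)

        lam, mu = 0.8, 0.5      # cars arriving per minute; cars served per minute per pump
        for pumps in (2, 3, 4):
            print(pumps, "pumps -> mean wait ~", round(mmc_mean_wait(lam, mu, pumps), 2), "min")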

  19. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons and processor pool sizes (1 to 256 processors. Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  20. Dynamic Modeling and Analysis of the Large-Scale Rotary Machine with Multi-Supporting

    Directory of Open Access Journals (Sweden)

    Xuejun Li

    2011-01-01

    Full Text Available Large-scale rotary machines with multiple supports, such as rotary kilns and rope-laying machines, are key equipment in the architectural, chemical, and agricultural industries. The body, rollers, wheels, and bearings constitute a chain multibody system. Axis line deflection is a vital parameter for determining the mechanical state of the rotary machine; thus, body axial vibration needs to be studied for dynamic monitoring and adjustment of the machine. By using the Riccati transfer matrix method, the body system of the rotary machine is divided into many subsystems composed of three elements, namely, rigid disk, elastic shaft, and linear spring. Multiple wheel-bearing structures are simplified as springs. The transfer matrices of the body system and the overall transfer equation are developed, as well as the overall motion equation of the response. Taking a rotary kiln as an instance, natural frequencies, modal shapes, and the vibration response to a given exciting axis line deflection are obtained by numerical computation. The body vibration modal curves illustrate the cause of dynamical errors in the common axis line measurement methods. The displacement response can be used for further analysis and compensation of measurement dynamical errors. The overall motion equation of the response could be applied to predict the body motion under abnormal mechanical conditions and to provide theoretical guidance for machine failure diagnosis.
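
    The recipe behind a transfer-matrix analysis can be shown on a drastically simplified lumped chain: point inertias (disks) alternate with massless elastic segments, the element matrices are multiplied through at a trial frequency, and natural frequencies are the frequencies at which the boundary residual vanishes. The paper uses the Riccati variant with full field and point matrices for a multi-supported rotor; this 2x2 free-free chain, with invented masses and stiffnesses, only illustrates the procedure.

        import numpy as np

        masses = [500.0, 800.0, 500.0]        # kg, hypothetical
        stiffness = [2.0e7, 2.0e7]            # N/m between neighbouring masses, hypothetical

        def boundary_residual(omega):
            """State vector is (displacement, internal force); both ends are free."""
            state = np.array([1.0, 0.0])
            for i, m in enumerate(masses):
                point = np.array([[1.0, 0.0],
                                  [-m * omega**2, 1.0]])     # rigid mass (point matrix)
                state = point @ state
                if i < len(stiffness):
                    field = np.array([[1.0, 1.0 / stiffness[i]],
                                      [0.0, 1.0]])           # massless elastic segment
                    state = field @ state
            return state[1]                                   # force must vanish at the far end

        freqs = np.linspace(1.0, 60.0, 6000)                  # Hz, skipping the rigid-body mode at 0
        res = np.array([boundary_residual(2 * np.pi * f) for f in freqs])
        roots = freqs[1:][np.sign(res[1:]) != np.sign(res[:-1])]
        print("approximate natural frequencies [Hz]:", np.round(roots, 1))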

  1. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  2. Using SMOS for validation and parameter estimation of a large scale hydrological model in Paraná river basin

    Science.gov (United States)

    Colossi, Bibiana; Fleischmann, Ayan; Siqueira, Vinicius; Bitar, Ahmad Al; Paiva, Rodrigo; Fan, Fernando; Ruhoff, Anderson; Pontes, Paulo; Collischonn, Walter

    2017-04-01

    Large-scale representation of soil moisture conditions can be achieved through hydrological simulation and remote sensing techniques. However, both methodologies have several limitations, which suggests the potential benefits of using both sources of information together. Thus, this study had two main objectives: to perform a cross-validation between remotely sensed soil moisture from the SMOS (Soil Moisture and Ocean Salinity) L3 product and soil moisture simulated with the large-scale hydrological model MGB-IPH, and to evaluate the potential benefits of including remotely sensed soil moisture in model parameter estimation. The study analyzed results for the South American continent, where hydrometeorological monitoring is usually scarce. The study was performed in the Paraná River Basin, an important South American basin whose extension and particular characteristics allow the representation of different climatic, geological and, consequently, hydrological conditions. Soil moisture estimated with SMOS was transformed from water content to a Soil Water Index (SWI) so that it is comparable to the saturation degree simulated with the MGB-IPH model. The multi-objective complex evolution algorithm (MOCOM-UA) was applied for automatic model calibration considering only remotely sensed soil moisture, only discharge, and both types of information together. Results show that this type of analysis can be very useful, because it allows limitations in model structure to be recognized. In the case of hydrological model calibration, this approach can avoid the use of parameters out of range in an attempt to compensate for model limitations. Also, it indicates aspects of the model where efforts should be concentrated in order to improve the representation of hydrological or hydraulic processes. Automatic calibration gives an estimate of how different information can be applied and of the quality of results it might yield. We emphasize that these findings can be valuable for hydrological modeling in large scale South American
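
    The transformation referred to (rescaling remotely sensed soil moisture to a Soil Water Index comparable with a simulated saturation degree) can be sketched in a few lines; the exact SWI formulation used in the study may differ, and the values below are synthetic placeholders for SMOS retrievals and MGB-IPH output.

        import numpy as np

        def to_swi(theta, theta_min=None, theta_max=None):
            """Rescale volumetric water content to [0, 1] using record min/max by default."""
            theta = np.asarray(theta, dtype=float)
            theta_min = np.nanmin(theta) if theta_min is None else theta_min
            theta_max = np.nanmax(theta) if theta_max is None else theta_max
            return (theta - theta_min) / (theta_max - theta_min)

        smos_theta = np.array([0.12, 0.18, 0.30, 0.25, 0.15])            # m3/m3, synthetic
        simulated_saturation = np.array([0.20, 0.35, 0.80, 0.60, 0.30])  # synthetic model output

        swi = to_swi(smos_theta)
        corr = np.corrcoef(swi, simulated_saturation)[0, 1]
        print("SWI:", np.round(swi, 2), "| correlation with simulated saturation:", round(corr, 2))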

  3. UAS in the NAS Project: Large-Scale Communication Architecture Simulations with NASA GRC Gen5 Radio Model

    Science.gov (United States)

    Kubat, Gregory

    2016-01-01

    This report provides a description and performance characterization of the large-scale, relay-architecture UAS communications simulation capability developed for the NASA GRC UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC Flight-Test Radio. The report contains a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling of results and performance data with observations.

  4. Large-Scale Flows and Magnetic Fields Produced by Rotating Convection in a Quasi-Geostrophic Model of Planetary Cores

    Science.gov (United States)

    Guervilly, C.; Cardin, P.

    2017-12-01

    Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ~ 10^4 and Ro ~ 10^-4 for Prandtl numbers relevant for liquid metals (Pr ~ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.

  5. Computer studies on dynamics of a large-scale magnetic loop by the spontaneous fast reconnection model

    Science.gov (United States)

    Ugai, M.

    1996-11-01

    The temporal dynamics of a large-scale magnetic loop is numerically studied on the basis of the two-dimensional spontaneous fast reconnection model. When a plasmoid, caused by the fast reconnection, propagates and collides with a wall boundary, across which plasma cannot flow, a large-scale magnetic loop is formed. The resulting magnetic loop is constructed by the reconnected field lines; inside the loop, the plasma, initially residing in the current sheet, is confined. As the reconnected field lines are piled up, the magnetic loop grows and swells outwards, so that a strong fast shock suddenly builds up at the interface between the growing loop and the strong reconnection jet. The fast shock, located ahead of the loop top, moves outwards with the growing loop, changing its strength with several peak and bottom Mach numbers. Accordingly, a localized spot-like region, where the plasma pressure is extremely enhanced, definitely comes out immediately ahead of the loop top. Along the loop side boundary, slow shocks stand, so that the resulting large-scale magnetic loop provides a very powerful energy converter in the sense that it is enclosed by slow and fast shocks.

  6. Influence of weathering and pre-existing large scale fractures on gravitational slope failure: insights from 3-D physical modelling

    Directory of Open Access Journals (Sweden)

    D. Bachmann

    2004-01-01

    Full Text Available Using a new 3-D physical modelling technique we investigated the initiation and evolution of large scale landslides in the presence of pre-existing large scale fractures, taking into account the weakening of the slope material due to alteration/weathering. The modelling technique is based on specially developed, properly scaled analogue materials, as well as on an original vertical accelerator device enabling increases in the 'gravity acceleration' up to a factor of 50. The weathering primarily affects the uppermost layers through water circulation. We simulated the effect of this process by making models of two parts. The shallower part represents the zone subject to homogeneous weathering and is made of low strength material of compressive strength σl. The deeper (core) part of the model is stronger and simulates intact rocks. Deformation of such a model subjected to the gravity force occurred only in its upper (low strength) layer. In another set of experiments, narrow planar zones of low strength (σw < σl), sub-parallel to the slope surface, were introduced into the model's superficial low strength layer to simulate localized highly weathered zones. In this configuration landslides were initiated much more easily (at lower 'gravity force'), were shallower and had a smaller horizontal size, largely defined by the weak zone size. Pre-existing fractures were introduced into the model by cutting it along a given plane. They proved to have little influence on the slope stability, except when they were associated with highly weathered zones; in this latter case the fractures laterally limited the slides. Deep seated rockslide initiation is thus directly defined by the mechanical structure of the hillslope's uppermost levels and especially by the presence of weak zones due to weathering. The large scale fractures play a more passive role and can only influence the shape and the volume of the sliding units.

  7. The use of remotely sensed soil moisture data in large-scale models of the hydrological cycle

    Science.gov (United States)

    Salomonson, V. V.; Gurney, R. J.; Schmugge, T. J.

    1985-01-01

    Manabe (1982) has reviewed numerical simulations of the atmosphere which provided a framework within which an examination of the dynamics of the hydrological cycle could be conducted. It was found that the climate is sensitive to soil moisture variability in space and time. The challenge arises now to improve the observations of soil moisture so as to provide up-dated boundary condition inputs to large scale models including the hydrological cycle. Attention is given to details regarding the significance of understanding soil moisture variations, soil moisture estimation using remote sensing, and energy and moisture balance modeling.

  8. Applicability of multiple yield model to earthquake response analysis for foundation rock of large-scale structure

    International Nuclear Information System (INIS)

    Yoshinaka, Ryunoshin; Iwata, Naoki; Sasaki, Takeshi

    2012-01-01

    The authors analyzed a large-scale structure built on a discontinuous rock foundation by earthquake response analysis with non-linear FEM, considering the rock joint system and using an actual earthquake record. The earthquake response analysis was performed with an equivalent continuum finite element method, the Multiple Yield Model (MYM), introducing cyclic-loading elastic-plastic deformation characteristics of rock joints, and the analytical results were compared with the observed earthquake response. As a result, adequate modeling of the discontinuities and appropriate setting of the mechanical properties of the rock and discontinuities give good results corresponding with the observations. We confirmed the applicability of the MYM to earthquake response analysis. (author)

  9. The Effects of Uncertainty in Speed-Flow Curve Parameters on a Large-Scale Model

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    Uncertainty is inherent in transport models and prevents the use of a deterministic approach when traffic is modeled. Quantifying uncertainty thus becomes an indispensable step to produce a more informative and reliable output of transport models. In traffic assignment models, volume-delay functi...
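
    The record is truncated, but the quantity at stake (a volume-delay, or speed-flow, function with uncertain parameters) can be illustrated with the standard BPR form; the parameter spread and link values below are assumptions for illustration, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpr_travel_time(volume, capacity, t0, alpha, beta):
    """BPR volume-delay function: congested travel time on a link."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

# Propagate uncertainty in the speed-flow parameters by simple Monte Carlo.
alpha = rng.normal(0.15, 0.03, size=10_000)   # illustrative spread around typical values
beta = rng.normal(4.0, 0.5, size=10_000)
times = bpr_travel_time(volume=1800, capacity=2000, t0=10.0, alpha=alpha, beta=beta)
print(f"mean travel time {times.mean():.2f} min, 95% interval "
      f"[{np.percentile(times, 2.5):.2f}, {np.percentile(times, 97.5):.2f}]")
```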

  10. Systems Execution Modeling Technologies for Large-Scale Net-Centric Department of Defense Systems

    Science.gov (United States)

    2011-12-01

    problem, the design groups may use different constraint specific models, such as Ptolemy, RT-Maude, Excel, or UML, to model design constraints and...different types. For example, one group may use a Ptolemy model for analyzing fault tolerance requirements while another person may use an Excel model for

  11. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state of the art of the electrolysis modules currently available was compiled. A review of the large scale electrolysis plants that have been installed around the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers was discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)

  12. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for the mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph to model for biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks", that are presented in great detail and includes the source code of several of the techniques p...

  13. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyro kinetic Toroidal Code in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  14. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    Science.gov (United States)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; hide

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied.Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-mode data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gasses). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS), support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  15. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
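
    A minimal sketch of the idea behind such an input-output representation: the response is written as a truncated discrete Volterra series over the recent input history. The kernels below are toy values, not the fitted kernels of the cited synapse model.

```python
import numpy as np

def volterra_response(x, k0, k1, k2):
    """Second-order discrete Volterra series.

    y[n] = k0 + sum_i k1[i] x[n-i] + sum_{i,j} k2[i,j] x[n-i] x[n-j]
    """
    m = len(k1)
    x = np.asarray(x, dtype=float)
    y = np.full(len(x), float(k0))
    for n in range(len(x)):
        h = x[max(0, n - m + 1):n + 1][::-1]   # recent history, newest sample first
        h = np.pad(h, (0, m - len(h)))         # zero-pad at the start of the record
        y[n] += k1 @ h + h @ k2 @ h
    return y

memory = 5
k1 = np.exp(-np.arange(memory) / 2.0)          # toy first-order kernel
k2 = 0.05 * np.outer(k1, k1)                   # toy second-order kernel
spikes = np.zeros(20)
spikes[[2, 3, 10]] = 1.0
print(volterra_response(spikes, k0=0.0, k1=k1, k2=k2))
```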

  16. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  17. Flexible non-linear predictive models for large-scale wind turbine diagnostics

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2017-01-01

    We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set...... of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models are used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance...... of the models are quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system....

  18. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, and in identifying relative parameter influence in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first order calculations of system behavior is presented.
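
    The central quantity in such an analysis is the normalized parameter sensitivity coefficient; one common way to estimate it is by central finite differences, sketched below for an invented toy model (not the thermoregulatory model of the record).

```python
def normalized_sensitivity(model, params, name, rel_step=0.01):
    """S = (dY/dp) * (p / Y), estimated by a central finite difference."""
    p0 = params[name]
    hi = dict(params, **{name: p0 * (1 + rel_step)})
    lo = dict(params, **{name: p0 * (1 - rel_step)})
    y0 = model(params)
    dy_dp = (model(hi) - model(lo)) / (2 * p0 * rel_step)
    return dy_dp * p0 / y0

# Toy "physiological" model output (placeholder, for illustration only).
toy = lambda p: p["metabolic_rate"] / (p["conductance"] * p["surface_area"])
params = {"metabolic_rate": 80.0, "conductance": 10.0, "surface_area": 1.8}
for name in params:
    print(name, round(normalized_sensitivity(toy, params, name), 3))
```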

  19. An application of a large scale conceptual hydrological model over the Elbe region

    Directory of Open Access Journals (Sweden)

    M. Lobmeyr

    1999-01-01

    Full Text Available This paper investigates the ability of the VIC-2L model coupled to a routing model to reproduce streamflow in the catchment of the lower Elbe River, Germany. The VIC-2L model, a hydrologically-based land surface scheme (LSS) which has been tested extensively in the Project for Intercomparison of Land-surface Parameterization Schemes (PILPS), is set up on the rotated 1/6 degree grid of the atmospheric regional scale model (REMO) used in the Baltic Sea Experiment (BALTEX). For a 10 year period, the VIC-2L model is forced in daily time steps with measured daily means of precipitation, air temperature, pressure, wind speed, air humidity and daily sunshine duration. VIC-2L model output of surface runoff and baseflow is used as input for the routing model, which transforms modelled runoff into streamflow; this is compared to measured streamflow at selected gauge stations. The water balance of the basin is investigated and the model results on daily, monthly and annual time scales are discussed. Discrepancies appear in time periods where snow and ice processes are important. Extreme flood events are analyzed in more detail. The influence of calibration with respect to runoff is examined.

  20. On the assimilation of ice velocity and concentration data into large-scale sea ice models

    Directory of Open Access Journals (Sweden)

    V. Dulière

    2007-06-01

    Full Text Available Data assimilation into sea ice models designed for climate studies started about 15 years ago. In most of the studies conducted so far, it is assumed that the improvement brought by the assimilation is straightforward. However, some studies suggest this might not be true. In order to elucidate this question and to find an appropriate way to further assimilate sea ice concentration and velocity observations into a global sea ice-ocean model, we analyze here results from a number of twin experiments (i.e. experiments in which the assimilated data are model outputs) carried out with a simplified model of the Arctic sea ice pack. Our objective is to determine to what degree the assimilation of ice velocity and/or concentration data improves the global performance of the model and, more specifically, reduces the error in the computed ice thickness. A simple optimal interpolation scheme is used, and outputs from a control run and from perturbed experiments without and with data assimilation are thoroughly compared. Our results indicate that, under certain conditions depending on the assimilation weights and the type of model error, the assimilation of ice velocity data enhances the model performance. The assimilation of ice concentration data can also help in improving the model behavior, but it has to be handled with care because of the strong connection between ice concentration and ice thickness. This study is a first step towards real data assimilation into NEMO-LIM, a global sea ice-ocean model.
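
    A minimal sketch of a generic optimal interpolation update of the kind referred to above; the state, covariances and observation operator below are toy values, and the actual NEMO-LIM assimilation setup is more involved.

```python
import numpy as np

def optimal_interpolation(xb, B, y, H, R):
    """Analysis update x_a = x_b + K (y - H x_b), with K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb), K

# Toy state: [ice concentration, ice thickness]; only concentration is observed.
xb = np.array([0.7, 2.0])                       # background state
B = np.array([[0.02, 0.01], [0.01, 0.25]])      # background error covariance (correlated)
H = np.array([[1.0, 0.0]])                      # observation operator
R = np.array([[0.01]])                          # observation error covariance
xa, K = optimal_interpolation(xb, B, np.array([0.9]), H, R)
print("analysis:", xa, "gain:", K.ravel())
```

    The off-diagonal term in B is what couples the concentration observation to the thickness update, which is one way to picture the "strong connection" the abstract warns about.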

  1. Assessment of large-scale water storage dynamics in the Community Land Model

    Science.gov (United States)

    Swenson, S. C.; Lawrence, D. M.

    2015-12-01

    A fundamental task of the Community Land Model (CLM; the land component of the Community Earth System Model) is the partitioning of precipitation into evapotranspiration (ET), runoff, and water storage. Testing model performance against site-level observations provides important insight, but can be challenging to extrapolate to the larger spatial scales at which Earth System models typically operate. Traditionally, measurements of river discharge have provided the best, and in many cases only, metrics with which to assess the performance of land models at large spatial scales (i.e. regional to continental scale river basins). Because the quantity of discharge measurements has declined globally, and the human modification and management of rivers has increased, new methods of testing land model performance are needed. As global observations of total water storage (TWS) and ET have become available, the potential for direct assessment of the quality of the simulated water budget exists. In this presentation, we use TWS observations from the GRACE satellite project and upscaled flux tower measurements from the FLUXNET-MTE dataset to assess the performance of CLM parameterizations such as canopy interception, storage, and evaporation, soil evaporation, and soil moisture and groundwater dynamics. We then give examples of alternative model parameterizations, and show how these parameterizations improve model performance relative to GRACE and FLUXNET-MTE based metrics.
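
    A hedged sketch of the kind of comparison described: simulated and observed total water storage are reduced to anomalies about a common baseline before computing skill metrics. The series below are synthetic placeholders, not CLM or GRACE data.

```python
import numpy as np

def tws_anomaly(series):
    """Remove the time mean so that model and GRACE storage are comparable."""
    series = np.asarray(series, dtype=float)
    return series - series.mean()

# Synthetic monthly storage (mm water equivalent) over one basin.
months = np.arange(24)
grace_tws = 80 * np.sin(2 * np.pi * months / 12) + np.random.default_rng(2).normal(0, 10, 24)
model_tws = 65 * np.sin(2 * np.pi * (months - 1) / 12)

a_obs, a_mod = tws_anomaly(grace_tws), tws_anomaly(model_tws)
corr = np.corrcoef(a_obs, a_mod)[0, 1]
rmse = np.sqrt(np.mean((a_obs - a_mod) ** 2))
print(f"anomaly correlation {corr:.2f}, RMSE {rmse:.1f} mm")
```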

  2. Potentials and limitations of using large-scale forest inventory data for evaluating forest succession models

    NARCIS (Netherlands)

    Didion, M.P.; Kupferschmid, A.D.; Lexer, M.J.; Rammer, W.; Seidl, R.; Bugmann, H.

    2009-01-01

    Forest gap models have been applied widely to examine forest development under natural conditions and to investigate the effect of climate change on forest succession. Due to the complexity and parameter requirements of such models a rigorous evaluation is required to build confidence in the

  3. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    Directory of Open Access Journals (Sweden)

    O. H. Otterå

    2009-11-01

    Full Text Available The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  4. Large-scale parameter extraction in electrocardiology models through Born approximation

    KAUST Repository

    He, Yuan

    2012-12-04

    One of the main objectives in electrocardiology is to extract physical properties of cardiac tissues from measured information on electrical activity of the heart. Mathematically, this is an inverse problem for reconstructing coefficients in electrocardiology models from partial knowledge of the solutions of the models. In this work, we consider such parameter extraction problems for two well-studied electrocardiology models: the bidomain model and the FitzHugh-Nagumo model. We propose a systematic reconstruction method based on the Born approximation of the original nonlinear inverse problem. We describe a two-step procedure that allows us to reconstruct not only perturbations of the unknowns, but also the backgrounds around which the linearization is performed. We show some numerical simulations under various conditions to demonstrate the performance of our method. We also introduce a parameterization strategy using eigenfunctions of the Laplacian operator to reduce the number of unknowns in the parameter extraction problem. © 2013 IOP Publishing Ltd.
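
    In generic notation (assumed here, not taken from the paper), the linearization underlying such a Born-type, two-step reconstruction can be written as:

```latex
\[
  d^{\mathrm{obs}} \;=\; F(m_{0} + \delta m) \;\approx\; F(m_{0}) + F'(m_{0})\,\delta m
  \quad\Longrightarrow\quad
  \delta d \;:=\; d^{\mathrm{obs}} - F(m_{0}) \;\approx\; F'(m_{0})\,\delta m ,
\]
\[
  \delta m \;=\; \operatorname*{arg\,min}_{\delta m}\ \bigl\lVert F'(m_{0})\,\delta m - \delta d \bigr\rVert^{2},
  \qquad
  m_{0} \;\leftarrow\; m_{0} + \delta m .
\]
```

    Here F denotes the forward electrocardiology model and m0 the background coefficients; repeating the update of m0 is what allows both the perturbation and the background to be reconstructed, mirroring the two-step procedure described above.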

  5. Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process

    DEFF Research Database (Denmark)

    Konttinen, Jukka T.; Johnsson, Jan Erik

    1999-01-01

    model that does not account for bed hydrodynamics. The pilot-scale test run results, obtained in the test runs of the sulfur removal process with real coal gasifier gas, have been used for parameter estimation. The validity of the reactor model for commercial-scale design applications is discussed.......Regenerable mixed metal oxide sorbents are prime candidates for the removal of hydrogen sulfide from hot gasifier gas in the simplified integrated gasification combined cycle (IGCC) process. As part of the regenerative sulfur removal process development, reactor models are needed for scale......-up. Steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process to be studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400...

  6. Modeling Cultural/ecological Impacts of Large-scale Mining and Industrial Development in the Yukon-Kuskokwim Basin

    Science.gov (United States)

    Bunn, J. T.; Sparck, A.

    2004-12-01

    We are developing a methodology for predicting the cultural impact of large-scale mineral resource development in the Yukon-Kuskokwim (Y-K) basin. The Yup'ik/Cup'ik/Dene people of the Y-K basin currently practice a mixed-market subsistence economy, in which native subsistence traditions and social structures are largely intact. Large-scale mining and industrial-infrastructure developments are being planned that will constitute a significant expansion of the market economy, and will also significantly affect the physical environment that is central to the subsistence way of life. To explore the impact that these changes are likely to have on native culture we use a systems modeling approach, considering "culture" to be a system that encompasses the physical, biological and verbal realms. We draw upon Alaska Department of Fish and Game technical reports, anthropological studies, Yup'ik cultural visioning exercises, and personal experience to identify the components of our cultural model. We use structural equation modeling to determine causal relationships between system components. The resulting model is used to predict changes that are likely to occur as a result of planned developments.

  7. Exploring large-scale phenomena in composite membranes through an efficient implicit-solvent model

    Science.gov (United States)

    Laradji, Mohamed; Kumar, P. B. Sunil; Spangler, Eric J.

    2016-07-01

    Several microscopic and mesoscale models have been introduced in the past to investigate various phenomena in lipid membranes. Most of these models account for the solvent explicitly. Since in a typical molecular dynamics simulation the majority of particles belong to the solvent, much of the computational effort in these simulations is devoted to calculating forces between solvent particles. To overcome this problem, several implicit-solvent mesoscale models for lipid membranes have been proposed during the last few years. In the present article, we review an efficient coarse-grained implicit-solvent model we introduced earlier for studies of lipid membranes. In this model, lipid molecules are coarse-grained into short semi-flexible chains of beads with soft interactions. Through molecular dynamics simulations, the model is used to investigate the thermal, structural and elastic properties of lipid membranes. We will also review here a few studies, based on this model, of the phase behavior of nanoscale liposomes, cytoskeleton-induced blebbing in lipid membranes, as well as nanoparticle wrapping and endocytosis by tensionless lipid membranes.

  8. Exploring large-scale phenomena in composite membranes through an efficient implicit-solvent model

    International Nuclear Information System (INIS)

    Laradji, Mohamed; Sunil Kumar, P B; Spangler, Eric J

    2016-01-01

    Several microscopic and mesoscale models have been introduced in the past to investigate various phenomena in lipid membranes. Most of these models account for the solvent explicitly. Since in a typical molecular dynamics simulation the majority of particles belong to the solvent, much of the computational effort in these simulations is devoted to calculating forces between solvent particles. To overcome this problem, several implicit-solvent mesoscale models for lipid membranes have been proposed during the last few years. In the present article, we review an efficient coarse-grained implicit-solvent model we introduced earlier for studies of lipid membranes. In this model, lipid molecules are coarse-grained into short semi-flexible chains of beads with soft interactions. Through molecular dynamics simulations, the model is used to investigate the thermal, structural and elastic properties of lipid membranes. We will also review here a few studies, based on this model, of the phase behavior of nanoscale liposomes, cytoskeleton-induced blebbing in lipid membranes, as well as nanoparticle wrapping and endocytosis by tensionless lipid membranes. (topical review)

  9. Large-scale Modeling of the Greenland Ice Sheet on Long Timescales

    DEFF Research Database (Denmark)

    Solgaard, Anne Munck

    this threshold towards colder temperatures in line with a recent study, but the new threshold value depends on the choice of method. It was found using the adaptive patterns that the Greenland ice sheet can reform under present-day conditions. A further study where additional coupling between the ice-sheet model...... is investigated as well as its early history. The studies are performed using an ice-sheet model in combination with relevant forcing from observed and modeled climate. Changes in ice-sheet geometry influence atmospheric flow (and vice versa), thereby changing the forcing patterns. Changes in the overall climate...... for the build-up of the Greenland ice sheet that lead to the intensification of the Northern Hemisphere glaciations at the end of the Pliocene. A study of output from the climate model, EC-EARTH, reveals some of the challenges faced when using this to force ice-sheet evolution or when full coupling of ice...

  10. Large-scale Modeling of the Greenland Ice Sheet on Long Timescales

    DEFF Research Database (Denmark)

    Solgaard, Anne Munck

    is investigated as well as its early history. The studies are performed using an ice-sheet model in combination with relevant forcing from observed and modeled climate. Changes in ice-sheet geometry influence atmospheric flow (and vice versa), thereby changing the forcing patterns. Changes in the overall climate...... for the build-up of the Greenland ice sheet that lead to the intensification of the Northern Hemisphere glaciations at the end of the Pliocene. A study of output from the climate model, EC-EARTH, reveals some of the challenges faced when using this to force ice-sheet evolution or when full coupling of ice...... also alter the patterns. On this basis, output from a climate model is used to construct adaptive forcing patterns that are computationally fast and take into account that the patterns respond to changes in a non-uniform way both spatially and temporally. The adaptive patterns were applied to study...

  11. Topology of large-scale structure in seeded hot dark matter models

    Science.gov (United States)

    Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.

    1992-01-01

    The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.
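
    For reference, the genus curve of a Gaussian random field, against which shifts toward isolated peaks or a cellular topology are measured, has the standard symmetric form (amplitude notation assumed here):

```latex
\[
  g(\nu) \;=\; A\,\bigl(1 - \nu^{2}\bigr)\, e^{-\nu^{2}/2},
  \qquad
  \nu \;=\; \frac{\delta_{\mathrm{threshold}}}{\sigma_{0}},
\]
```

    where σ0 is the rms fluctuation of the field smoothed on scale r0 and A is an amplitude set by the power spectrum; an asymmetry of the measured genus curve relative to this form signals the non-Gaussian behavior discussed above.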

  12. Comparing selected morphological models of hydrated Nafion using large scale molecular dynamics simulations

    Science.gov (United States)

    Knox, Craig K.

    Experimental elucidation of the nanoscale structure of hydrated Nafion, the most popular polymer electrolyte or proton exchange membrane (PEM) to date, and its influence on macroscopic proton conductance is particularly challenging. While it is generally agreed that hydrated Nafion is organized into distinct hydrophilic domains or clusters within a hydrophobic matrix, the geometry and length scale of these domains continue to be debated. For example, at least half a dozen different domain shapes, ranging from spheres to cylinders, have been proposed based on experimental SAXS and SANS studies. Since the characteristic length scale of these domains is believed to be ~2 to 5 nm, very large molecular dynamics (MD) simulations are needed to accurately probe the structure and morphology of these domains, especially their connectivity and percolation phenomena at varying water content. Using classical, all-atom MD with explicit hydronium ions, simulations have been performed to study the first-ever hydrated Nafion systems that are large enough (~2 million atoms in a ~30 nm cell) to directly observe several hydrophilic domains at the molecular level. These systems consisted of six of the most significant and relevant morphological models of Nafion to date: (1) the cluster-channel model of Gierke, (2) the parallel cylinder model of Schmidt-Rohr, (3) the local-order model of Dreyfus, (4) the lamellar model of Litt, (5) the rod network model of Kreuer, and (6) a 'random' model, commonly used in previous simulations, that does not directly assume any particular geometry, distribution, or morphology. These simulations revealed fast intercluster bridge formation and network percolation in all of the models. Sulfonates were found inside these bridges and played a significant role in percolation. Sulfonates also strongly aggregated around and inside clusters. Cluster surfaces were analyzed to study the hydrophilic-hydrophobic interface. Interfacial area and cluster volume

  13. Large-Scale Physical Models of Thermal Remediation of DNAPL Source Zones in Aquitards

    Science.gov (United States)

    2009-05-01

    ...bioremediation or microbial-enhanced oil recovery. A geomechanical model is also included that can model compaction or fracture formation in porous media... one uniform grain size distribution chart for easy comparison between the different products (Fig. 3).

  14. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Science.gov (United States)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-11-01

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.
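
    Because Halotools is a public package, a brief usage sketch in the spirit of its documented quickstart is added below; exact argument values and the availability of the cached catalog depend on the installed version and downloaded data, so treat it as illustrative rather than as the authors' workflow.

```python
# pip install halotools  (a cached halo catalog must also be downloaded beforehand)
import numpy as np
from halotools.sim_manager import CachedHaloCatalog
from halotools.empirical_models import PrebuiltHodModelFactory
from halotools.mock_observables import tpcf

# Prebuilt HOD model (Zheng et al. 2007 parameterization) and a cached simulation.
model = PrebuiltHodModelFactory('zheng07', threshold=-21)
halocat = CachedHaloCatalog(simname='bolplanck', redshift=0)

# Populate the halo catalog with a mock galaxy sample and measure its clustering.
model.populate_mock(halocat)
gals = model.mock.galaxy_table
sample = np.vstack([gals['x'], gals['y'], gals['z']]).T
rbins = np.logspace(-1, 1.2, 15)
xi = tpcf(sample, rbins, period=halocat.Lbox)
print(xi)
```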

  15. Uncovering Implicit Assumptions: a Large-Scale Study on Students' Mental Models of Diffusion

    Science.gov (United States)

    Stains, Marilyne; Sevian, Hannah

    2015-12-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in the gaseous medium. A large sample of data ( N = 423) from students across grade 8 (age 13) through upper-level undergraduate was subjected to a cluster analysis to determine the main mental models present. The cluster analysis resulted in a reduced data set ( N = 308), and then, mental models were ascertained from robust clusters. The mental models that emerged from analysis were triangulated through interview data and characterised according to underlying implicit assumptions that guide and constrain thinking about diffusion of a solute in a gaseous medium. Impacts of students' level of preparation in science and relationships of mental models to science disciplines studied by students were examined. Implications are discussed for the value of this approach to identify typical mental models and the sets of implicit assumptions that constrain them.

  16. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Energy Technology Data Exchange (ETDEWEB)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-10-18

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy–galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.

  17. Hydrological improvements for nutrient and pollutant emission modeling in large scale catchments

    Science.gov (United States)

    Höllering, S.; Ihringer, J.

    2012-04-01

    Estimating emissions and loads of nutrients and pollutants into European water bodies as accurately as possible depends largely on knowledge of the spatially and temporally distributed hydrological runoff patterns. An improved hydrological water balance model for the pollutant emission model MoRE (Modeling of Regionalized Emissions) (IWG, 2011) has been introduced that can form an adequate basis for simulating discharge in a hydrologically differentiated, land-use based way and subsequently provide the required distributed discharge components. First of all, the hydrological model had to meet requirements in both space and time in order to calculate the water balance with sufficient precision at the catchment scale, spatially distributed into sub-catchments and with a higher temporal resolution. Aiming to reproduce seasonal dynamics and the characteristic hydrological regimes of river catchments, a daily (instead of a yearly) time increment was applied, allowing for a more process-oriented simulation of discharge dynamics, volume and therefore water balance. The enhancement of the hydrological model also became necessary to account for the hydrological functioning of catchments under scenarios of, e.g., a changing climate or alterations of land use. As a deterministic, partly physically based, conceptual hydrological watershed and water balance model, the Precipitation Runoff Modeling System (PRMS) (USGS, 2009) was selected to improve the hydrological input for MoRE. In PRMS the spatial discretization is implemented with sub-catchments and so-called hydrologic response units (HRUs), which are the hydrotropic, distributed, finite modeling entities, each having a homogeneous runoff reaction to hydro-meteorological events. Spatial structures and heterogeneities in sub-catchments, e.g. urbanity, land use and soil types, were identified to derive hydrological similarities and to classify different urban and rural HRUs. In this way the

  18. Large-Scale Modelling of the Environmentally-Driven Population Dynamics of Temperate Aedes albopictus (Skuse.

    Directory of Open Access Journals (Sweden)

    Kamil Erguler

    Full Text Available The Asian tiger mosquito, Aedes albopictus, is a highly invasive vector species. It is a proven vector of dengue and chikungunya viruses, with the potential to host a further 24 arboviruses. It has recently expanded its geographical range, threatening many countries in the Middle East, Mediterranean, Europe and North America. Here, we investigate the theoretical limitations of its range expansion by developing an environmentally-driven mathematical model of its population dynamics. We focus on the temperate strain of Ae. albopictus and compile a comprehensive literature-based database of physiological parameters. As a novel approach, we link its population dynamics to globally-available environmental datasets by performing inference on all parameters. We adopt a Bayesian approach using experimental data as prior knowledge and the surveillance dataset of Emilia-Romagna, Italy, as evidence. The model accounts for temperature, precipitation, human population density and photoperiod as the main environmental drivers, and, in addition, incorporates the mechanism of diapause and a simple breeding site model. The model demonstrates high predictive skill over the reference region and beyond, confirming most of the current reports of vector presence in Europe. One of the main hypotheses derived from the model is the survival of Ae. albopictus populations through harsh winter conditions. The model, constrained by the environmental datasets, requires that either diapausing eggs or adult vectors have increased cold resistance. The model also suggests that temperature and photoperiod control diapause initiation and termination differentially. We demonstrate that it is possible to account for unobserved properties and constraints, such as differences between laboratory and field conditions, to derive reliable inferences on the environmental dependence of Ae. albopictus populations.

  19. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys.

    Directory of Open Access Journals (Sweden)

    Jussi Jousimo

    Full Text Available Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov-Malyshev-Pereleshin (FMP estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method.
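
    The classical FMP conversion mentioned above is compact enough to state directly; the sketch assumes track crossings summed over all transects and a known mean daily movement distance (the numbers are made up).

```python
import math

def fmp_density(track_crossings, transect_length_km, daily_movement_km):
    """Formozov-Malyshev-Pereleshin estimator of animals per km^2.

    D = (pi / 2) * crossings / (transect length * daily movement distance)
    """
    return (math.pi / 2.0) * track_crossings / (transect_length_km * daily_movement_km)

# e.g. 46 track crossings on 120 km of transects, animals moving ~3.5 km per day
print(round(fmp_density(46, 120.0, 3.5), 3), "individuals per km^2")
```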

  20. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys

    Science.gov (United States)

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov–Malyshev–Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683

  1. Incorporating the subgrid-scale variability of clouds in the autoconversion parameterization in a large-scale model

    Science.gov (United States)

    Weber, Torsten; Quaas, Johannes

    2010-05-01

    Precipitation formation in warm clouds is mainly governed by the autoconversion rate, a highly nonlinear process. Large-scale models commonly calculate the autoconversion rate using the grid-cell mean of liquid cloud water, which introduces a strong low bias because clouds, and therefore liquid cloud water, are inhomogeneously distributed. The parameterized autoconversion is thus artificially tuned so that accumulated large-scale precipitation matches the observations. Here, we revise the parameterization of the autoconversion rate to incorporate the subgrid-scale variability of clouds, using the horizontal subgrid-scale distribution of the liquid cloud water mixing ratio derived from the subgrid-scale variability scheme for water vapor and cloud condensate. This scheme is employed in the ECHAM5 climate model to calculate the horizontal cloud fraction by means of a probability density function (PDF) of the total water mixing ratio. The revised parameterization now also ensures consistency between the calculation of horizontal cloud fraction and precipitation formation. An introduction to the improved parameterization and first results of the evaluation of the precipitation rate on a global scale will be presented. Specifically, precipitation and vertically integrated liquid cloud water estimated by the model are compared with observational data derived from ground-based measurements and satellite instruments.
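
    The low bias described above follows from averaging a convex rate law; the sketch contrasts the rate computed from the grid-cell mean with the PDF-weighted rate for an assumed gamma distribution of cloud water and a generic power-law autoconversion exponent (both illustrative choices, not the ECHAM5 scheme itself).

```python
import numpy as np

rng = np.random.default_rng(1)

def autoconversion(ql, c=1350.0, exponent=2.47):
    """Generic power-law autoconversion rate A = c * ql**exponent (illustrative)."""
    return c * ql ** exponent

# Subgrid distribution of liquid water mixing ratio (kg/kg) within one grid cell.
mean_ql = 3e-4
samples = rng.gamma(shape=2.0, scale=mean_ql / 2.0, size=100_000)

rate_from_mean = autoconversion(mean_ql)            # what a grid-mean calculation gives
rate_pdf_weighted = autoconversion(samples).mean()  # what the subgrid variability implies
print(f"grid-mean rate: {rate_from_mean:.3e}")
print(f"PDF-weighted rate: {rate_pdf_weighted:.3e} "
      f"(enhancement factor {rate_pdf_weighted / rate_from_mean:.2f})")
```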

  2. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.

  3. Transforming GIS data into functional road models for large-scale traffic simulation.

    Science.gov (United States)

    Wilkie, David; Sewall, Jason; Lin, Ming C

    2012-06-01

    There exists a vast amount of geographic information system (GIS) data that model road networks around the world as polylines with attributes. In this form, the data are insufficient for applications such as simulation and 3D visualization-tools which will grow in power and demand as sensor data become more pervasive and as governments try to optimize their existing physical infrastructure. In this paper, we propose an efficient method for enhancing a road map from a GIS database to create a geometrically and topologically consistent 3D model to be used in real-time traffic simulation, interactive visualization of virtual worlds, and autonomous vehicle navigation. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, legal merge zones, and intersections with arbitrary states, and it is independent of the simulation methodologies. We test the 3D models of road networks generated by our algorithm on real-time traffic simulation using both macroscopic and microscopic techniques.

  4. Quantification of key parameters for treating contrails in a large scale climate model

    Energy Technology Data Exchange (ETDEWEB)

    Ponater, M.; Gierens, K. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere

    1997-12-01

    The general objective of this project, to determine contrail key parameters with respect to their climate effect, has been approached by three tasks: (1) quantification of microphysical key parameters, (2) development of a contrail coverage parametrization for climate models, and (3) determination of the worldwide coverage with persistent contrails due to present day air traffic. The microphysical key parameters are determined using microphysical box model simulations. The contrail parametrization was achieved by deriving (from aircraft measurements) the instantaneous fluctuations of temperature and relative humidity that occur on spatial scales beyond the resolution of climate models. The global and annual mean coverage by persistent contrails was calculated from ECMWF numerical analyses and from actual air traffic density. It was found to be currently about 0.1%, though the atmosphere has the potential to form persistent contrails over a much larger area. (orig.) 144 figs., 42 tabs., 497 refs.

  5. A Large-Scale Multibody Manipulator Soft Sensor Model and Experiment Validation

    Directory of Open Access Journals (Sweden)

    Wu Ren

    2014-01-01

    Full Text Available The stress signal is difficult to obtain in the health monitoring of a multibody manipulator. In order to solve this problem, a soft sensor method is presented. In the method, the stress signal is considered the dominant variable and the angle signal is regarded as the auxiliary variable. By establishing the mathematical relationship between them, a soft sensor model is proposed. In the model, the stress information can be deduced from angle information, which can easily be measured experimentally for such structures. Finally, tests under ground and wall working conditions are carried out on a multibody manipulator test rig. The results show that the stress calculated by the proposed method is close to the measured one. Thus, the stress signal is easier to obtain than with the traditional method. All of this proves that the model is correct and the method is feasible.
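
    A minimal soft-sensor sketch in the same spirit: a static polynomial mapping from the easily measured auxiliary variable (joint angle) to the dominant variable (stress) is identified from calibration data and then reused online. The data and polynomial order are invented for illustration; the actual dominant-auxiliary relationship in the paper is structure-specific.

```python
import numpy as np

# Calibration data from test-rig experiments (illustrative values):
angle_deg = np.array([0, 10, 20, 30, 40, 50, 60])        # auxiliary variable
stress_mpa = np.array([12, 18, 27, 39, 55, 74, 97])      # dominant variable

# Identify the soft-sensor mapping (quadratic fit here).
coeffs = np.polyfit(angle_deg, stress_mpa, deg=2)
soft_sensor = np.poly1d(coeffs)

# Online use: infer stress from an angle reading only.
print(f"estimated stress at 35 deg: {soft_sensor(35):.1f} MPa")
```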

  6. Norway's 2011 Terror Attacks: Alleviating National Trauma With a Large-Scale Proactive Intervention Model.

    Science.gov (United States)

    Kärki, Freja Ulvestad

    2015-09-01

    After the terror attacks of July 22, 2011, Norwegian health authorities piloted a new model for municipality-based psychosocial follow-up with victims. This column describes the development of a comprehensive follow-up intervention by health authorities and others that has been implemented at the municipality level across Norway. The model's principles emphasize proactivity by service providers; individually tailored help, with each victim being assigned a contact person in the residential municipality; continuity and long-term focus; effective intersectoral collaboration; and standardized screening of symptoms during the first year. Weekend reunions were also organized for the bereaved, and one-day reunions were organized for the survivors and their families at intervals over the first 18 months. Preliminary findings indicate a high level of success in model implementation. However, the overall effect of the interventions will be a subject for future evaluations.

  7. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    International Nuclear Information System (INIS)

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-01-01

    The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. A support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive predictions were almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. - Highlights: →We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. →The optimized model consisting of 9 probes had 99% sensitivity and 97% specificity.
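
    The following is a minimal sketch of the kind of pipeline described above, an SVM classifier combined with a feature-selection step over expression probes, written with scikit-learn on synthetic data. It is not the TG-GATEs model: the data are random placeholders, and recursive feature elimination stands in for the paper's wrapper-type gene selection.

    ```python
    # Sketch only: SVM classification with probe selection on synthetic expression data.
    # Recursive feature elimination (RFE) is used here as a stand-in for the paper's
    # wrapper-type gene selection; all data and sizes are hypothetical.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import RFE
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_probes = 60, 500                 # hypothetical 28-day study profiles
    X = rng.normal(size=(n_samples, n_probes))
    y = rng.integers(0, 2, size=n_samples)        # 1 = hepatocarcinogen, 0 = non-carcinogen

    # Selection happens inside the pipeline, so cross-validation does not leak labels.
    pipe = make_pipeline(
        RFE(SVC(kernel="linear"), n_features_to_select=9, step=0.1),  # keep a 9-probe panel
        SVC(kernel="linear"),
    )
    scores = cross_val_score(pipe, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean())

    pipe.fit(X, y)
    print("selected probe indices:", np.flatnonzero(pipe.named_steps["rfe"].support_))
    ```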

  8. Forced vibration test on large scale model on soft rock site

    International Nuclear Information System (INIS)

    Kobayashi, Toshio; Fukuoka, Atsunobu; Izumi, Masanori; Miyamoto, Yuji; Ohtsuka, Yasuhiro; Nasuda, Toshiaki.

    1991-01-01

    Forced vibration tests were conducted to investigate the embedment effect on dynamic soil-structure interaction. Two model structures were constructed on actual soil about 60 m apart, after excavating the ground to a depth of 5 m. For both models, sinusoidal forced vibration tests were performed at different embedment depths, namely no embedment, half embedment and full embedment. The test results show an increase in both natural frequency and damping factor due to the embedment effects, and the soil impedances calculated from the test results are discussed. (author)

  9. Modelling and operation strategies of DLR's large scale thermocline test facility (TESIS)

    Science.gov (United States)

    Odenthal, Christian; Breidenbach, Nils; Bauer, Thomas

    2017-06-01

    In this work an overview of the TESIS:store thermocline test facility and its current construction status is given. Based on this, the TESIS:store facility using sensible solid filler material is modelled with a fully transient model implemented in MATLAB®. Results on the impact of filler size and operation strategies are presented. While low porosity and small particle diameters for the filler material are beneficial, the operation strategy is a key element with potential for optimization. It is shown that plant operators have to weigh utilization against exergetic efficiency. Different durations of the charging and discharging periods offer further potential for optimization.

  10. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    Science.gov (United States)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Due to their increasing occurrence rate and associated economic costs, fluvial floods are large-scale and cross-border phenomena that need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaptation measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and local detail that influences their temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation due to a lack of detail in channel and floodplain geometry, and the representation of hydrologic processes influencing the surface water balance, such as open water evaporation from inundated areas and re-infiltration of water in river banks. To overcome these restrictions and to obtain a better understanding of flood propagation including its spatio-temporal variations at the large scale, yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The first computes surface water volumes which are routed by the latter, solving the full Saint-Venant equations. With DELFT3D FM being capable of representing the model domain as a flexible mesh, model accuracy is only improved at relevant locations (river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is thereby schematized by 2D floodplains, derived from global data sets (HydroSHEDS and G3WBM, respectively). Since a previous study with one-way coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to two-way coupling to fully represent evaporation

  11. Modeling resilience, friability, and cost of an airport affected by the large-scale disruptive event

    NARCIS (Netherlands)

    Janic, M.

    2013-01-01

    This paper deals with modeling the resilience, friability, and cost of an airport affected by a large-scale disruptive event. Such events, affecting the airport's operations individually or in combination, can be bad weather or failures of particular crucial airport and ATC (Air Traffic Control)

  12. A balanced water layer concept for subglacial hydrology in large scale ice sheet models

    Science.gov (United States)

    Goeller, S.; Thoma, M.; Grosfeld, K.; Miller, H.

    2012-12-01

    There is currently no doubt about the existence of a widespread hydrological network under the Antarctic ice sheet, which lubricates the ice base and thus leads to increased ice velocities. Consequently, ice models should incorporate basal hydrology to obtain meaningful results for future ice dynamics and their contribution to global sea level rise. Here, we introduce the balanced water layer concept, covering two prominent subglacial hydrological features for ice sheet modeling on a continental scale: the evolution of subglacial lakes and balance water fluxes. We couple it to the thermomechanical ice-flow model RIMBAY and apply it to a synthetic model domain inspired by the Gamburtsev Mountains, Antarctica. In our experiments we demonstrate the dynamic generation of subglacial lakes and their impact on the velocity field of the overlying ice sheet, resulting in a negative ice mass balance. Furthermore, we introduce an elementary parametrization of the water flux-basal sliding coupling and reveal the predominance of the ice loss through the resulting ice streams against the stabilizing influence of less hydrologically active areas. We point out that established balance flux schemes quantify these effects only partially as their ability to store subglacial water is lacking.

  13. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist

    Science.gov (United States)

    Quresh S. Latif; Victoria A. Saab; Jonathan G. Dudley; Jeff P. Hollenbeck

    2013-01-01

    To conserve habitat for disturbance specialist species, ecologists must identify where individuals will likely settle in newly disturbed areas. Habitat suitability models can predict which sites at new disturbances will most likely attract specialists. Without validation data from newly disturbed areas, however, the best approach for maximizing predictive accuracy can...

  14. Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets

    Science.gov (United States)

    Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad

    2017-01-01

    Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…

  15. Enhancement of a model for Large-scale Airline Network Planning Problems

    NARCIS (Netherlands)

    Kölker, K.; Lopes dos Santos, Bruno F.; Lütjens, K.

    2016-01-01

    The main focus of this study is to solve the network planning problem based on passenger decision criteria including the preferred departure time and travel time for a real-sized airline network. For this purpose, a model of the integrated network planning problem is formulated including scheduling

  16. A balanced water layer concept for subglacial hydrology in large-scale ice sheet models

    Directory of Open Access Journals (Sweden)

    S. Goeller

    2013-07-01

    Full Text Available There is currently no doubt about the existence of a widespread hydrological network under the Antarctic Ice Sheet, which lubricates the ice base and thus leads to increased ice velocities. Consequently, ice models should incorporate basal hydrology to obtain meaningful results for future ice dynamics and their contribution to global sea level rise. Here, we introduce the balanced water layer concept, covering two prominent subglacial hydrological features for ice sheet modeling on a continental scale: the evolution of subglacial lakes and balance water fluxes. We couple it to the thermomechanical ice-flow model RIMBAY and apply it to a synthetic model domain. In our experiments we demonstrate the dynamic generation of subglacial lakes and their impact on the velocity field of the overlaying ice sheet, resulting in a negative ice mass balance. Furthermore, we introduce an elementary parametrization of the water flux–basal sliding coupling and reveal the predominance of the ice loss through the resulting ice streams against the stabilizing influence of less hydrologically active areas. We point out that established balance flux schemes quantify these effects only partially as their ability to store subglacial water is lacking.

  7. On large-scale shell-model calculations in ⁴He

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, R.F.; Flynn, M.F. (Manchester Univ. (UK). Inst. of Science and Technology); Bosca, M.C.; Buendia, E.; Guardiola, R. (Granada Univ. (Spain). Dept. de Fisica Moderna)

    1990-03-01

    Most shell-model calculations of ⁴He require very large basis spaces for the energy spectrum to stabilise. Coupled cluster methods and an exact treatment of the centre-of-mass motion dramatically reduce the number of configurations. We thereby obtain almost exact results with small bases, but which include states of very high excitation energy. (author).

  18. Time-Varying Scheme for Noncentralized Model Predictive Control of Large-Scale Systems

    Directory of Open Access Journals (Sweden)

    Alfredo Núñez

    2015-01-01

    Full Text Available The noncentralized model predictive control (NC-MPC) framework in this paper refers to any distributed, hierarchical, or decentralized model predictive controller (or a combination of them) the structure of which can change over time and the control actions of which are not obtained based on a centralized computation. Within this framework, we propose suitable online methods to decide which information is shared and how this information is used between the different local predictive controllers operating in a decentralized, distributed, and/or hierarchical way. Evaluating all the possible structures of the NC-MPC controller leads to a combinatorial optimization problem. Therefore, we also propose heuristic reduction methods to keep the number of NC-MPC problems to be solved tractable. To show the benefits of the proposed framework, a case study of a set of coupled water tanks is presented.

  19. Simulation of hydrogen release and combustion in large scale geometries: models and methods

    International Nuclear Information System (INIS)

    Beccantini, A.; Dabbene, F.; Kudriakov, S.; Magnaud, J.P.; Paillere, H.; Studer, E.

    2003-01-01

    The simulation of H2 distribution and combustion in confined geometries such as nuclear reactor containments is a challenging task from the point of view of numerical simulation, as it involves quite disparate length and time scales, which need to be resolved appropriately and efficiently. CEA is involved in the development and validation of codes to model such problems, for external clients such as IRSN (TONUS code), Technicatome (NAUTILUS code) or for its own safety studies. This paper provides an overview of the physical and numerical models developed for such applications, as well as some insight into the current research topics which are being pursued. Examples of H2 mixing and combustion simulations are given. (authors)

  20. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Guentner, A.

    2002-09-01

    Semi-arid areas are characterized by small water resources. An increasing water demand due to population growth and economic development as well as a possible decreasing water availability in the course of climate change may aggravate water scarcity in future in these areas. The quantitative assessment of the water resources is a prerequisite for the development of sustainable measures of water management. For this task, hydrological models within a dynamic integrated framework are indispensable tools. The main objective of this study is to develop a hydrological model for the quantification of water availability over a large geographic domain of semi-arid environments. The study area is the Federal State of Ceara in the semi-arid north-east of Brazil. Surface water from reservoirs provides the largest part of water supply. The area has recurrently been affected by droughts which caused serious economic losses and social impacts like migration from the rural regions. (orig.)

  1. Modeling of a large-scale wastewater treatment plant for efficient operation.

    Science.gov (United States)

    Gokcay, C F; Sin, G

    2004-01-01

    Environmental legislation in the Western world imposes stringent effluent quality standards for ultimate protection of the environment. This is also observed in Turkey. The current paper presents efforts made to simulate an existing 0.77 million m3/day conventional activated sludge plant located in Ankara (AWTP). The ASM1 model was used for simulation in this study. The model contains numerous stoichiometric and kinetic parameters, some of which need to be determined on a case-by-case basis. The easily degradable COD (S(S)) was determined by two methods, namely physical-chemical and respirometric methods. The latter method was deemed unreliable and was not used further in the study. Dynamic simulation with the SSSP program predicted effluent COD and MLSS values successfully while overestimating the OUR. A complete fit could only be obtained by introducing a dimensionless correction factor (etaO2 = 0.58) to the oxygen term in ASM1.

  2. Modeling and Flocking Consensus Analysis for Large-Scale UAV Swarms

    Directory of Open Access Journals (Sweden)

    Li Bing

    2013-01-01

    Full Text Available Recently, distributed coordination control of unmanned aerial vehicle (UAV) swarms has been a particularly active topic in the intelligent systems field. In this paper, through understanding the emergent mechanism of the complex system, further research on the flocking behaviour and dynamic characteristics of UAV swarms is presented. Firstly, this paper analyzes current research and open problems of UAV swarms. Afterwards, using the theory of stochastic processes and supplementary variables, a differential-integral model is established, converting the system model into a Volterra integral equation. The existence and uniqueness of the solution of the system are discussed. Then a flocking control law based on artificial potentials with system consensus is given. Finally, we analyze the stability of the proposed flocking control algorithm using the Lyapunov approach and prove that the system converges to a consensus velocity direction in finite time. Simulation results are provided to verify the conclusion.
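
    As a rough illustration of the velocity-consensus idea behind such flocking laws, the sketch below updates each agent's velocity toward the neighbourhood average and adds a crude artificial-potential separation term. It is not the paper's differential-integral model; all gains and parameters are arbitrary.

    ```python
    # Toy flocking-consensus sketch (not the paper's model): alignment toward the
    # neighbourhood-average velocity plus a simple repulsion from very close agents.
    import numpy as np

    rng = np.random.default_rng(1)
    n, radius, d_min, dt = 20, 5.0, 1.0, 0.1
    pos = rng.uniform(0, 20, size=(n, 2))
    vel = rng.normal(size=(n, 2))

    for _ in range(500):
        new_vel = vel.copy()
        for i in range(n):
            diff = pos - pos[i]
            dist = np.linalg.norm(diff, axis=1)
            nbrs = (dist < radius) & (dist > 0)
            if nbrs.any():
                # alignment: steer toward the mean velocity of the neighbours
                new_vel[i] += 0.05 * (vel[nbrs].mean(axis=0) - vel[i])
                # separation: push away from neighbours closer than d_min
                close = nbrs & (dist < d_min)
                if close.any():
                    new_vel[i] -= 0.1 * diff[close].sum(axis=0)
        vel = new_vel
        pos += vel * dt

    # after enough steps the headings tend to agree (a small spread indicates consensus)
    headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
    print("spread of headings:", headings.std(axis=0))
    ```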

  3. Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta

    Directory of Open Access Journals (Sweden)

    J. H. Nienhuis

    2017-09-01

    Full Text Available The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal dynamics) or allogenic (external forcing) remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology, with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.

  4. Modelling soil carbon movement by erosion over large scales and long time periods

    Science.gov (United States)

    Quinton, John; Davies, Jessica; Tipping, Ed

    2014-05-01

    Agricultural intensification accelerates physical erosion rates and the transport of carbon within the landscape. In order to improve understanding of how past, present and future anthropogenic land-use change has influenced and will influence carbon and nutrient cycling, it is necessary to develop quantitative tools that can predict soil erosion and carbon movement at large temporal and spatial scales that are consistent with the time constants of biogeochemical processes and the spatial scales of land-use change and natural resources. However, representing erosion and its impact on the carbon cycle over large spatial scales and long time periods is challenging. Erosion and sediment transport processes operate at multiple spatial and temporal scales, from splash erosion dominating at the sub-plot scale and occurring within seconds, up to gully formation operating at field-catchment scales over days to months. In addition, most erosion production observations are made at the experimental plot scale, where fine time scales and detailed processes dominate. This is coupled with complexities associated with carbon detachment and decomposition, and uncertainties surrounding carbon burial rates and stability, all of which occur over widely different temporal and spatial scales. As such, these data cannot be simply scaled to inform erosion and carbon representation at the regional scale, where topography, vegetation cover and landscape organisation become more important controls on sediment fluxes. We have developed a simple energy-based regional scale method of soil erosion modelling, which is integrated into a hydro-biogeochemical model that will simulate carbon, nitrogen and phosphorus pools and fluxes across the UK from the industrial revolution to the present day. The model is driven by overland flow, dynamic vegetation cover, soil properties, and topographic distributions and produces sediment production and yield at the 5 km grid scale. In this paper we will introduce the

  5. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel

    OpenAIRE

    Yuan, Liming; Smith, Alex C.

    2015-01-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spr...

  6. Large-scale multimodal transport modelling. Part 2: Implementation and validation

    CSIR Research Space (South Africa)

    Van Heerden, Q

    2013-07-01

    Full Text Available activity chains across a 24-hour period. That is, we do not model only the morning or afternoon peak, and activity types include home, work, education, shopping, leisure and other activities. Providing a network is the second requirement. The network.... The typical duration for an activity type is then the median duration within that quantile. The same approach was followed for the three home activity types, work, tertiary education, leisure, and the major and minor activities of commercial vehicles...

  7. Estimating extinction risk with metapopulation models of large-scale fragmentation.

    Science.gov (United States)

    Schnell, Jessica K; Harris, Grant M; Pimm, Stuart L; Russell, Gareth J

    2013-06-01

    Habitat loss is the principal threat to species. How much habitat remains, and how quickly it is shrinking, are implicitly included in the way the International Union for Conservation of Nature determines a species' risk of extinction. Many endangered species have habitats that are also fragmented to different extents. Thus, ideally, fragmentation should be quantified in a standard way in risk assessments. Although mapping fragmentation from satellite imagery is easy, efficient techniques for relating maps of remaining habitat to extinction risk are few. Purely spatial metrics from landscape ecology are hard to interpret and do not address extinction directly. Spatially explicit metapopulation models link fragmentation to extinction risk, but standard models work only at small scales. Counterintuitively, these models predict that a species in a large, contiguous habitat will fare worse than one in 2 tiny patches. This occurs because although the species in the large, contiguous habitat has a low probability of extinction, recolonization cannot occur if there are no other patches to provide colonists for a rescue effect. For 4 ecologically comparable bird species of the North Central American highland forests, we devised metapopulation models with area-weighted self-colonization terms; this reflected repopulation of a patch from a remnant of individuals that survived an adverse event. Use of this term gives extra weight to a patch in its own rescue effect. Species assigned least risk status were comparable in long-term extinction risk with those ranked as threatened. This finding suggests that fragmentation has had a substantial negative effect on them that is not accounted for in their Red List category. © 2013 Society for Conservation Biology.
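
    A small stochastic patch-occupancy sketch of the area-weighted self-colonization idea is given below: a patch's own area contributes to its colonization pressure, so a large patch can repopulate itself from a surviving remnant. This is only an illustration of the concept; the patch layout and all parameter values are arbitrary, not those fitted by the authors.

    ```python
    # Illustrative patch-occupancy model with an area-weighted self-colonization term.
    # All patches, distances and rates below are hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)
    areas = np.array([100.0, 1.0, 1.0])          # one large patch vs. two tiny patches
    dist = np.array([[0.0, 5.0, 5.0],
                     [5.0, 0.0, 2.0],
                     [5.0, 2.0, 0.0]])           # pairwise distances, arbitrary units
    e0, c0, alpha, w_self = 0.3, 0.2, 0.5, 0.02  # extinction, colonisation, decay, self-weight

    def step(occ):
        new = occ.copy()
        for i in range(len(areas)):
            # colonisation pressure from other occupied patches (decaying with distance)
            ext_src = sum(c0 * areas[j] * np.exp(-alpha * dist[i, j])
                          for j in range(len(areas)) if j != i and occ[j])
            # area-weighted self-colonisation: big patches can rescue themselves
            self_src = w_self * areas[i] if occ[i] else 0.0
            p_col = 1.0 - np.exp(-(ext_src + self_src))
            p_ext = min(1.0, e0 / areas[i])      # larger patches go extinct less often
            if occ[i]:
                new[i] = (rng.random() > p_ext) or (rng.random() < p_col)
            else:
                new[i] = rng.random() < p_col
        return new

    occ = np.array([True, True, True])
    extant = 0
    for _ in range(1000):
        occ = step(occ)
        extant += occ.any()
    print("fraction of time steps with the metapopulation extant:", extant / 1000)
    ```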

  8. Burnout of pulverized biomass particles in large scale boiler – Single particle model approach

    DEFF Research Database (Denmark)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero

    2010-01-01

    the particle combustion model is coupled with one-dimensional equation of motion of the particle, is applied for the calculation of the burnout in the boiler. The particle size of biomass can be much larger than that of coal to reach complete burnout due to lower density and greater reactivity. The burner...... location and the trajectories of the particles might be optimised to maximise the residence time and burnout....

  9. Delineating large-scale migratory connectivity of reed warblers using integrated multistate models

    Czech Academy of Sciences Publication Activity Database

    Procházka, Petr; Hahn, S.; Rolland, S.; van der Jeugd, H.; Csörgő, T.; Jiguet, F.; Mokwa, T.; Liechti, F.; Vangeluwe, D.; Korner-Nievergelt, F.

    2017-01-01

    Vol. 23, No. 1 (2017), pp. 27-40 ISSN 1366-9516 R&D Projects: GA ČR GA13-06451S Institutional support: RVO:68081766 Keywords : Acrocephalus scirpaceus * band encounter data * bird migration * loop migration * migratory connectivity * ring recovery data * ring recovery model * species distribution * survival Subject RIV: EG - Zoology OBOR OECD: Ecology Impact factor: 4.391, year: 2016

  10. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I. (University of California, San Diego); Winey, J. Michael (Washington State University); Gupta, Yogendra Mohan (Washington State University); Lane, J. Matthew D.; Ditmire, Todd (University of Texas at Austin); Quevedo, Hernan J. (University of Texas at Austin)

    2011-10-01

    Molecular dynamics simulation (MD) is an invaluable tool for studying problems sensitive to atomic-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock-melting of beryllium, a scaling technique for modeling slow ramp compression experiments using fast ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  11. A coordination model for ultra-large scale systems of systems

    Directory of Open Access Journals (Sweden)

    Manuela L. Bujorianu

    2013-11-01

    Full Text Available Ultra-large multi-agent systems are becoming increasingly popular due to the rapid decline of individual production costs and their potential for speeding up the solution of complex problems. Examples include nano-robots, systems of nano-satellites for dangerous meteorite detection, or cultures of stem cells for organ regeneration or nerve repair. The topics associated with these systems are usually dealt with within the theories of intelligent swarms or biologically inspired computing systems. Stochastic models play an important role and they are based on various formulations of statistical mechanics. In these cases, the main assumption is that the swarm elements have a simple behaviour and that some average properties can be deduced for the entire swarm. In contrast, complex systems in areas like aeronautics are formed by elements with sophisticated behaviour, which are even autonomous. In situations like this, a new approach to swarm coordination is necessary. We present a stochastic model where the swarm elements are communicating autonomous systems, the coordination is separated from the component autonomous activity, and the entire swarm can be abstracted away as a piecewise deterministic Markov process, which constitutes one of the most popular models in stochastic control. Keywords: ultra large multi-agent systems, system of systems, autonomous systems, stochastic hybrid systems.

  12. Model-based analysis of effects from large-scale wind power production

    International Nuclear Information System (INIS)

    Rosen, Johannes; Tietze-Stoeckinger, Ingela; Rentz, Otto

    2007-01-01

    For an economically and ecologically optimised integration of fluctuating renewable power generation (especially wind power) into electricity generation, a detailed consideration of fluctuation-induced effects on the existing power system is essential. A model-based approach is introduced in this paper, which comprehensively analyses the impact of such effects on power plant scheduling and facilitates their integration into the development of strategies for an optimised evolution of the future power system structure. The newly developed AEOLIUS tool for the simulation of power plant scheduling is described. In a combined analysis of long- and short-term effects it is used together with the multi-periodic cost-optimising energy system model PERSEUS-CERT. Based on the MATLAB/Simulink® package, AEOLIUS considers the challenges for plant scheduling down to a time scale of 10 min. Special attention is paid to the provision of stand-by capacities and control power, as well as intermediate storage. Thus, a sophisticated quantification of the actual (net) benefits of wind power feed-in is achieved. Model results for Germany show that wind mainly substitutes power from intermediate-load and base-load plants (coal-, lignite-, and nuclear-fired). However, the required provision of stand-by capacities and control power does not only limit the substitution of conventional capacities, but also the achievable net savings of fuel and emissions in conventional power generation

  13. User Friendly Open GIS Tool for Large Scale Data Assimilation - a Case Study of Hydrological Modelling

    Science.gov (United States)

    Gupta, P. K.

    2012-08-01

    Open source software (OSS) coding has tremendous advantages over proprietary software. These are primarily fuelled by high-level programming languages (Java, C++, Python, etc.) and open source geospatial libraries (GDAL/OGR, GEOS, GeoTools, etc.). Quantum GIS (QGIS) is a popular open source GIS package, which is licensed under the GNU GPL and written in C++. It allows users to perform specialised tasks by creating plugins in C++ and Python. This research article emphasises exploiting this capability of QGIS to build and implement plugins across multiple platforms using the easy-to-learn Python programming language. In the present study, a tool has been developed to assimilate large spatio-temporal datasets such as national-level gridded rainfall, temperature, topographic (digital elevation model, slope, aspect), landuse/landcover and multi-layer soil data for input into hydrological models. At present this tool has been developed for the Indian sub-continent. An attempt is also made to use popular scientific and numerical libraries to create custom applications for digital inclusion. In hydrological modelling, calibration and validation are important steps which are carried out repeatedly for the same study region. As such, the developed tool is user friendly and can be used efficiently for these repetitive processes, reducing the time required for data management and handling. Moreover, it was found that the developed tool can easily assimilate large datasets in an organised manner.
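
    A minimal sketch of the kind of gridded-data ingestion such a plugin automates is shown below, using the GDAL Python bindings to read a raster layer and mask its nodata cells before handing the array to a hydrological model. The input file name "dem.tif" is hypothetical.

    ```python
    # Sketch only: read one gridded input (e.g. a DEM tile) with GDAL and mask nodata.
    # "dem.tif" is a placeholder file name.
    import numpy as np
    from osgeo import gdal

    gdal.UseExceptions()
    ds = gdal.Open("dem.tif")
    band = ds.GetRasterBand(1)
    elev = band.ReadAsArray().astype(float)

    # replace nodata cells with NaN before feeding the grid to the model
    nodata = band.GetNoDataValue()
    if nodata is not None:
        elev = np.where(elev == nodata, np.nan, elev)

    gt = ds.GetGeoTransform()           # origin and cell size of the grid
    print("grid shape:", elev.shape, "cell size:", gt[1], "x", abs(gt[5]))
    ```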

  14. Modeling large-scale human alteration of land surface hydrology and climate

    Science.gov (United States)

    Pokhrel, Yadu N.; Felfelani, Farshid; Shin, Sanghoon; Yamada, Tomohito J.; Satoh, Yusuke

    2017-12-01

    Rapidly expanding human activities have profoundly affected various biophysical and biogeochemical processes of the Earth system over a broad range of scales, and freshwater systems are now amongst the most extensively altered ecosystems. In this study, we examine the human-induced changes in land surface water and energy balances and the associated climate impacts using a coupled hydrological-climate model framework which also simulates the impacts of human activities on the water cycle. We present three sets of analyses using the results from two model versions—one with and the other without considering human activities; both versions are run in offline and coupled mode resulting in a series of four experiments in total. First, we examine climate and human-induced changes in regional water balance focusing on the widely debated issue of the desiccation of the Aral Sea in central Asia. Then, we discuss the changes in surface temperature as a result of changes in land surface energy balance due to irrigation over global and regional scales. Finally, we examine the global and regional climate impacts of increased atmospheric water vapor content due to irrigation. Results indicate that the direct anthropogenic alteration of river flow in the Aral Sea basin resulted in the loss of 510 km3 of water during the latter half of the twentieth century which explains about half of the total loss of water from the sea. Results of irrigation-induced changes in surface energy balance suggest a significant surface cooling of up to 3.3 K over 1° grids in highly irrigated areas but a negligible change in land surface temperature when averaged over sufficiently large global regions. Results from the coupled model indicate a substantial change in 2 m air temperature and outgoing longwave radiation due to irrigation, highlighting the non-local (regional and global) implications of irrigation. These results provide important insights on the direct human alteration of land surface

  15. A Poisson regression approach to model monthly hail occurrence in Northern Switzerland using large-scale environmental variables

    Science.gov (United States)

    Madonna, Erica; Ginsbourger, David; Martius, Olivia

    2018-05-01

    In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study the variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month), and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two-meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear, and the month. This model captures the intra-annual variability well and slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back to 1980. The resulting hail day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
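
    The structure of such a model can be sketched as a Poisson generalized linear model of monthly hail-day counts on a categorical month term plus the anomaly predictors, as below with statsmodels. The data frame, column names and coefficients are synthetic placeholders, not the study's data.

    ```python
    # Sketch only: Poisson regression of monthly hail-day counts on month and anomalies.
    # Synthetic data; column names and coefficients are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 13 * 6                                    # 2002-2014, April-September
    df = pd.DataFrame({
        "month": np.tile(np.arange(4, 10), 13),   # categorical seasonality term
        "t2m_anom": rng.normal(size=n),           # 2 m temperature anomaly
        "logcape_anom": rng.normal(size=n),       # anomaly of log(CAPE)
        "shear_anom": rng.normal(size=n),         # wind-shear anomaly
    })
    lam = np.exp(0.5 + 0.3 * df.t2m_anom + 0.4 * df.logcape_anom - 0.2 * df.shear_anom)
    df["hail_days"] = rng.poisson(lam)            # simulated monthly hail-day counts

    model = smf.glm("hail_days ~ C(month) + t2m_anom + logcape_anom + shear_anom",
                    data=df, family=sm.families.Poisson()).fit()
    print(model.params)
    ```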

  16. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
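
    To illustrate the underlying dependency-graph idea, the sketch below chains two plain Luigi tasks (preprocessing then training). SciLuigi layers flow-based wiring of task inputs and outputs on top of Luigi, which this plain-Luigi sketch does not show; the task contents and file names are hypothetical.

    ```python
    # Plain-Luigi sketch of chained modelling tasks (preprocess -> train).
    # File names and task bodies are placeholders.
    import luigi

    class PreprocessData(luigi.Task):
        def output(self):
            return luigi.LocalTarget("interactions_clean.csv")

        def run(self):
            with self.output().open("w") as f:
                f.write("compound,target,activity\n")   # stand-in for real preprocessing

    class TrainModel(luigi.Task):
        def requires(self):
            return PreprocessData()                      # dependency declared, not hard-coded

        def output(self):
            return luigi.LocalTarget("model.txt")

        def run(self):
            with self.input().open() as data, self.output().open("w") as out:
                out.write(f"model trained on {len(data.readlines())} rows\n")

    if __name__ == "__main__":
        luigi.build([TrainModel()], local_scheduler=True)
    ```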

  17. A Nonlinear Multiobjective Bilevel Model for Minimum Cost Network Flow Problem in a Large-Scale Construction Project

    Directory of Open Access Journals (Sweden)

    Jiuping Xu

    2012-01-01

    Full Text Available The aim of this study is to deal with a minimum cost network flow problem (MCNFP) in a large-scale construction project using a nonlinear multiobjective bilevel model with birandom variables. The main target of the upper level is to minimize both direct and transportation time costs. The target of the lower level is to minimize transportation costs. After an analysis of the birandom variables, an expectation multiobjective bilevel programming model with chance constraints is formulated to incorporate decision makers’ preferences. To solve the identified special conditions, an equivalent crisp model is proposed, with an additional multiobjective bilevel particle swarm optimization (MOBLPSO) developed to solve the model. The Shuibuya Hydropower Project is used as a real-world example to verify the proposed approach. Results and analysis are presented to highlight the performances of the MOBLPSO, which is very effective and efficient compared to a genetic algorithm and a simulated annealing algorithm.

  18. Constructing Model of Relationship among Behaviors and Injuries to Products Based on Large Scale Text Data on Injuries

    Science.gov (United States)

    Nomori, Koji; Kitamura, Koji; Motomura, Yoichi; Nishida, Yoshifumi; Yamanaka, Tatsuhiro; Komatsubara, Akinori

    In Japan, childhood injury prevention is an urgent issue. Safety measures based on knowledge created from injury data are essential for preventing childhood injuries. The injury prevention approach of product modification is especially important. Risk assessment is one of the most fundamental methods for designing safe products. Conventional risk assessment has been carried out subjectively because product makers have little data on injuries. This paper deals with evidence-based risk assessment, in which artificial intelligence technologies are strongly needed. This paper describes a new method of foreseeing the usage of products, which is the first step of evidence-based risk assessment, and presents a retrieval system for injury data. The system enables a product designer to foresee how children use a product and which types of injuries occur due to the product in the daily environment. The developed system consists of large-scale injury data, text mining technology and probabilistic modeling technology. Large-scale text data on childhood injuries was collected from medical institutions by an injury surveillance system. Types of behaviors toward a product were derived from the injury text data using text mining technology. The relationship among products, types of behaviors, types of injuries and characteristics of children was modeled by a Bayesian network. The fundamental functions of the developed system and examples of new findings obtained by the system are reported in this paper.

  19. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular yet simple approaches for modelling network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with the Recurrent Neural Network. Cuckoo Search is used to find the best combination of regulators. Moreover, the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method incurs a higher computational cost in both cases due to the hybrid optimization process.
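
    For orientation, the classical recurrent-neural-network formalism for gene regulatory networks drives each gene's expression through a sigmoid of the weighted expression of its regulators. The sketch below integrates that formalism with arbitrary weights; the metaheuristic search for the weights (the Cuckoo Search / Flower Pollination step) is not shown.

    ```python
    # Sketch of the RNN formalism for gene regulatory networks:
    # dx_i/dt = (sigmoid((W x + beta)_i) - x_i) / tau_i, integrated with Euler steps.
    # Weights and rates below are arbitrary, not inferred values.
    import numpy as np

    def simulate(W, beta, tau, x0, dt=0.1, steps=200):
        x = x0.copy()
        traj = [x.copy()]
        for _ in range(steps):
            dx = (1.0 / (1.0 + np.exp(-(W @ x + beta))) - x) / tau
            x = x + dt * dx
            traj.append(x.copy())
        return np.array(traj)

    W = np.array([[0.0, 2.0, -1.5],
                  [-2.0, 0.0, 1.0],
                  [1.0, -1.0, 0.0]])      # candidate regulatory weights (3 genes)
    beta = np.zeros(3)                    # basal expression terms
    tau = np.ones(3)                      # decay time constants
    traj = simulate(W, beta, tau, x0=np.array([0.1, 0.5, 0.9]))
    print("final expression levels:", traj[-1])
    ```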

  20. Large-scale assessment of the zebrafish embryo as a possible predictive model in toxicity testing.

    Directory of Open Access Journals (Sweden)

    Shaukat Ali

    Full Text Available BACKGROUND: In the drug discovery pipeline, safety pharmacology is a major issue. The zebrafish has been proposed as a model that can bridge the gap in this field between cell assays (which are cost-effective, but low in data content) and rodent assays (which are high in data content, but less cost-efficient). However, zebrafish assays are only likely to be useful if they can be shown to have high predictive power. We examined this issue by assaying 60 water-soluble compounds representing a range of chemical classes and toxicological mechanisms. METHODOLOGY/PRINCIPAL FINDINGS: Over 20,000 wild-type zebrafish embryos (including controls) were cultured individually in defined buffer in 96-well plates. Embryos were exposed for a 96 hour period starting at 24 hours post fertilization. A logarithmic concentration series was used for range-finding, followed by a narrower geometric series for LC50 determination. Zebrafish embryo LC50 (log mmol/L) and published data on rodent LD50 (log mmol/kg) were found to be strongly correlated (using Kendall's rank correlation tau and Pearson's product-moment correlation). The slope of the regression line for the full set of compounds was 0.73403. However, we found that the slope was strongly influenced by compound class. Thus, while most compounds had a similar toxicity level in both species, some compounds were markedly more toxic in zebrafish than in rodents, or vice versa. CONCLUSIONS: For the substances examined here, in aggregate, the zebrafish embryo model has good predictivity for toxicity in rodents. However, the correlation between zebrafish and rodent toxicity varies considerably between individual compounds and compound class. We discuss the strengths and limitations of the zebrafish model in light of these findings.
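
    The correlation analysis reported above (Kendall's tau, Pearson's r and a regression slope between the log-transformed LC50 and LD50 values) can be sketched with scipy as below. The toxicity values are synthetic stand-ins, not the study's 60-compound data set.

    ```python
    # Sketch of the reported correlation analysis on synthetic log-toxicity values.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    log_ld50_rodent = rng.normal(0.0, 1.0, size=60)                              # log mmol/kg
    log_lc50_zebrafish = 0.73 * log_ld50_rodent + rng.normal(0, 0.4, size=60)    # log mmol/L

    tau, p_tau = stats.kendalltau(log_lc50_zebrafish, log_ld50_rodent)
    r, p_r = stats.pearsonr(log_lc50_zebrafish, log_ld50_rodent)
    slope, intercept, *_ = stats.linregress(log_ld50_rodent, log_lc50_zebrafish)
    print(f"Kendall tau={tau:.2f}, Pearson r={r:.2f}, regression slope={slope:.2f}")
    ```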

  1. Modelling large-scale ice-sheet–climate interactions following glacial inception

    Directory of Open Access Journals (Sweden)

    J. M. Gregory

    2012-10-01

    Full Text Available We have coupled the FAMOUS global AOGCM (atmosphere-ocean general circulation model) to the Glimmer thermomechanical ice-sheet model in order to study the development of ice-sheets in north-east America (Laurentia) and north-west Europe (Fennoscandia) following glacial inception. This first use of a coupled AOGCM–ice-sheet model for a study of change on long palæoclimate timescales is made possible by the low computational cost of FAMOUS, despite its inclusion of physical parameterisations similar in complexity to higher-resolution AOGCMs. With the orbital forcing of 115 ka BP, FAMOUS–Glimmer produces ice caps on the Canadian Arctic islands, on the north-west coast of Hudson Bay and in southern Scandinavia, which grow to occupy the Keewatin region of the Canadian mainland and all of Fennoscandia over 50 ka. Their growth is eventually halted by increasing coastal ice discharge. The expansion of the ice-sheets influences the regional climate, which becomes cooler, reducing the ablation, and ice accumulates in places that initially do not have positive surface mass balance. The results suggest the possibility that the glaciation of north-east America could have begun on the Canadian Arctic islands, producing a regional climate change that caused or enhanced the growth of ice on the mainland. The increase in albedo (due to snow and ice cover) is the dominant feedback on the area of the ice-sheets and acts rapidly, whereas the feedback of topography on SMB does not become significant for several centuries, but eventually has a large effect on the thickening of the ice-sheets. These two positive feedbacks are mutually reinforcing. In addition, the change in topography perturbs the tropospheric circulation, producing some reduction of cloud, and mitigating the local cooling along the margin of the Laurentide ice-sheet. Our experiments demonstrate the importance and complexity of the interactions between ice-sheets and local climate.

  2. Modelling the large-scale yellow fever outbreak in Luanda, Angola, and the impact of vaccination.

    Science.gov (United States)

    Zhao, Shi; Stone, Lewi; Gao, Daozhou; He, Daihai

    2018-01-01

    Yellow fever (YF), transmitted via bites of infected mosquitoes, is a life-threatening viral disease endemic to tropical and subtropical regions of Africa and South America. YF has largely been controlled by widespread national vaccination campaigns. Nevertheless, between December 2015 and August 2016, YF resurged in Angola, quickly spread and became the largest YF outbreak of the last 30 years. Recently, YF resurged again in Brazil (December 2016). Thus, there is an urgent need to gain better understanding of the transmission pattern of YF. The present study provides a refined mathematical model, combined with modern likelihood-based statistical inference techniques, to assess and reconstruct important epidemiological processes underlying Angola's YF outbreak. This includes the outbreak's attack rate, the reproduction number (R0), the role of the mosquito vector, the influence of climatic factors, and the unusual but noticeable appearance of two waves in the YF outbreak. The model explores actual and hypothetical vaccination strategies, and the impacts of possible human reactive behaviors (e.g., response to media precautions). While there were 73 deaths reported over the study period, the model indicates that the vaccination campaign saved 5.1-fold more people from death and saved from illness 5.6-fold of the observed 941 cases. Delaying the availability of the vaccines further would have greatly worsened the epidemic in terms of increased cases and deaths. The analysis estimated a mean R0 and an attack rate of 0.09-0.15% (proportion of population infected) over the whole period from December 2015 to August 2016. Our estimated lower and upper bounds of R0 are in line with previous studies. Unusually, R0 oscillated in a manner that was "delayed" with the reported deaths. A high recent number of deaths was associated (followed) with periods of relatively low disease transmission and low R0
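
    As a generic illustration of vector-borne transmission dynamics (not the paper's refined model), the sketch below integrates a Ross-Macdonald-style host-vector system with scipy: SEIR humans coupled to SEI mosquitoes. All rates and population sizes are arbitrary placeholders.

    ```python
    # Illustrative host-vector compartmental model; parameters are placeholders.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, beta_hv, beta_vh, sigma_h, gamma_h, sigma_v, mu_v):
        Sh, Eh, Ih, Rh, Sv, Ev, Iv = y
        Nh, Nv = Sh + Eh + Ih + Rh, Sv + Ev + Iv
        new_h = beta_hv * Sh * Iv / Nh          # humans infected by mosquito bites
        new_v = beta_vh * Sv * Ih / Nh          # mosquitoes infected when biting
        return [-new_h,
                new_h - sigma_h * Eh,
                sigma_h * Eh - gamma_h * Ih,
                gamma_h * Ih,
                mu_v * Nv - new_v - mu_v * Sv,  # mosquito births balance deaths
                new_v - (sigma_v + mu_v) * Ev,
                sigma_v * Ev - mu_v * Iv]

    y0 = [1e6 - 10, 0, 10, 0, 5e6, 0, 0]
    sol = solve_ivp(rhs, (0, 250), y0,
                    args=(0.3, 0.3, 1 / 4, 1 / 6, 1 / 10, 1 / 14), dense_output=True)
    print("peak human infections:", sol.y[2].max())
    ```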

  3. Modelling the large-scale yellow fever outbreak in Luanda, Angola, and the impact of vaccination.

    Directory of Open Access Journals (Sweden)

    Shi Zhao

    2018-01-01

    Full Text Available Yellow fever (YF), transmitted via bites of infected mosquitoes, is a life-threatening viral disease endemic to tropical and subtropical regions of Africa and South America. YF has largely been controlled by widespread national vaccination campaigns. Nevertheless, between December 2015 and August 2016, YF resurged in Angola, quickly spread and became the largest YF outbreak of the last 30 years. Recently, YF resurged again in Brazil (December 2016). Thus, there is an urgent need to gain better understanding of the transmission pattern of YF. The present study provides a refined mathematical model, combined with modern likelihood-based statistical inference techniques, to assess and reconstruct important epidemiological processes underlying Angola's YF outbreak. This includes the outbreak's attack rate, the reproduction number (R0), the role of the mosquito vector, the influence of climatic factors, and the unusual but noticeable appearance of two waves in the YF outbreak. The model explores actual and hypothetical vaccination strategies, and the impacts of possible human reactive behaviors (e.g., response to media precautions). While there were 73 deaths reported over the study period, the model indicates that the vaccination campaign saved 5.1-fold more people from death and saved from illness 5.6-fold of the observed 941 cases. Delaying the availability of the vaccines further would have greatly worsened the epidemic in terms of increased cases and deaths. The analysis estimated a mean R0 and an attack rate of 0.09-0.15% (proportion of population infected) over the whole period from December 2015 to August 2016. Our estimated lower and upper bounds of R0 are in line with previous studies. Unusually, R0 oscillated in a manner that was "delayed" with the reported deaths. A high recent number of deaths was associated (followed) with periods of relatively low disease transmission and low

  4. A large-scale linear complementarity model of the North American natural gas market

    International Nuclear Information System (INIS)

    Gabriel, Steven A.; Jifang Zhuang; Kiet, Supat

    2005-01-01

    The North American natural gas market has seen significant changes recently due to deregulation and restructuring. For example, third party marketers can contract for transportation and purchase of gas to sell to end-users. While the intent was a more competitive market, the potential for market power exists. We analyze this market using a linear complementarity equilibrium model including producers, storage and peak gas operators, third party marketers and four end-use sectors. The marketers are depicted as Nash-Cournot players determining supply to meet end-use consumption; all other players are in perfect competition. Results based on National Petroleum Council scenarios are presented. (Author)
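
    The Nash-Cournot behaviour attributed to the marketers can be illustrated with a toy two-player quantity game, shown below as a best-response iteration under a linear inverse demand curve. This is only a conceptual illustration; the full linear complementarity formulation with producers, storage, peak gas operators and end-use sectors is not reproduced, and all numbers are arbitrary.

    ```python
    # Toy Nash-Cournot illustration: two marketers choose quantities facing
    # inverse demand P(Q) = a - b*Q; the perfectly competitive players are folded
    # into the constant marginal costs. Values are placeholders.
    import numpy as np

    a, b = 10.0, 0.1                 # inverse demand intercept and slope
    cost = np.array([2.0, 3.0])      # marketers' constant marginal procurement costs

    q = np.array([1.0, 1.0])
    for _ in range(100):
        for i in range(2):
            # best response: maximise (a - b*(q_i + q_other) - c_i) * q_i over q_i >= 0
            q_other = q[1 - i]
            q[i] = max(0.0, (a - cost[i] - b * q_other) / (2 * b))

    price = a - b * q.sum()
    print("Cournot quantities:", q, "market price:", price)
    ```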

  5. laGP: Large-Scale Spatial Modeling via Local Approximate Gaussian Processes in R

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2016-08-01

    Full Text Available Gaussian process (GP) regression models make for powerful predictors in out-of-sample exercises, but cubic runtimes for dense matrix decompositions severely limit the size of data (training and testing) on which they can be deployed. That means that in computer experiment, spatial/geo-physical, and machine learning contexts, GPs no longer enjoy privileged status as data sets continue to balloon in size. We discuss an implementation of local approximate Gaussian process models, in the laGP package for R, that offers a particular sparse-matrix remedy uniquely positioned to leverage modern parallel computing architectures. The laGP approach can be seen as an update on the spatial statistical method of local kriging neighborhoods. We briefly review the method, and provide extensive illustrations of the features in the package through worked-code examples. The appendix covers custom building options for symmetric multi-processor and graphical processing units, and built-in wrapper routines that automate distribution over a simple network of workstations.
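
    The local-approximation idea (fit a small GP on a neighbourhood of each prediction point instead of decomposing the full covariance matrix) can be sketched in Python as below. This is a conceptual illustration only, not the laGP R API; laGP additionally builds its local designs sequentially rather than by plain nearest neighbours.

    ```python
    # Conceptual sketch of local approximate GP prediction on synthetic data.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(5)
    X = rng.uniform(-3, 3, size=(5000, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.normal(size=len(X))

    nn = NearestNeighbors(n_neighbors=50).fit(X)

    def local_gp_predict(x_star):
        """Fit a small GP on the nearest neighbours of x_star only (local kriging idea)."""
        idx = nn.kneighbors(x_star.reshape(1, -1), return_distance=False)[0]
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3))
        gp.fit(X[idx], y[idx])
        return gp.predict(x_star.reshape(1, -1))[0]

    print(local_gp_predict(np.array([0.5, -1.0])))
    ```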

  6. Regularized estimation of large-scale gene association networks using graphical Gaussian models.

    Science.gov (United States)

    Krämer, Nicole; Schäfer, Juliane; Boulesteix, Anne-Laure

    2009-11-24

    Graphical Gaussian models are popular tools for the estimation of (undirected) gene association networks from microarray data. A key issue when the number of variables greatly exceeds the number of samples is the estimation of the matrix of partial correlations. Since the (Moore-Penrose) inverse of the sample covariance matrix leads to poor estimates in this scenario, standard methods are inappropriate and adequate regularization techniques are needed. Popular approaches include biased estimates of the covariance matrix and high-dimensional regression schemes, such as the Lasso and Partial Least Squares. In this article, we investigate a general framework for combining regularized regression methods with the estimation of Graphical Gaussian models. This framework includes various existing methods as well as two new approaches based on ridge regression and adaptive lasso, respectively. These methods are extensively compared both qualitatively and quantitatively within a simulation study and through an application to six diverse real data sets. In addition, all proposed algorithms are implemented in the R package "parcor", available from the R repository CRAN. In our simulation studies, the investigated non-sparse regression methods, i.e. Ridge Regression and Partial Least Squares, exhibit rather conservative behavior when combined with (local) false discovery rate multiple testing in order to decide whether or not an edge is present in the network. For networks with higher densities, the difference in performance of the methods decreases. For sparse networks, we confirm the Lasso's well known tendency towards selecting too many edges, whereas the two-stage adaptive Lasso is an interesting alternative that provides sparser solutions. In our simulations, both sparse and non-sparse methods are able to reconstruct networks with cluster structures. On six real data sets, we also clearly distinguish the results obtained using the non-sparse methods and those obtained
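
    At the core of such graphical Gaussian approaches is the estimation of partial correlations from a regularized precision (inverse covariance) matrix when p greatly exceeds n. The sketch below uses a simple ridge-regularized inverse on random data to make that step concrete; it is not the Lasso, adaptive Lasso or PLS machinery of the paper, nor the "parcor" R package.

    ```python
    # Sketch: partial correlations from a ridge-regularised precision matrix (p >> n).
    # Data and the regularisation strength are placeholders.
    import numpy as np

    rng = np.random.default_rng(6)
    n_samples, n_genes = 20, 100
    X = rng.normal(size=(n_samples, n_genes))

    S = np.cov(X, rowvar=False)
    ridge = 0.5                                   # regularisation strength (arbitrary)
    precision = np.linalg.inv(S + ridge * np.eye(n_genes))

    # partial correlation: rho_ij = -P_ij / sqrt(P_ii * P_jj)
    d = np.sqrt(np.diag(precision))
    parcor = -precision / np.outer(d, d)
    np.fill_diagonal(parcor, 1.0)

    # keep only the strongest partial correlations as candidate network edges
    edges = np.argwhere(np.triu(np.abs(parcor) > 0.2, k=1))
    print("candidate edges:", len(edges))
    ```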

  7. Regularized estimation of large-scale gene association networks using graphical Gaussian models

    Directory of Open Access Journals (Sweden)

    Schäfer Juliane

    2009-11-01

    Full Text Available Abstract Background Graphical Gaussian models are popular tools for the estimation of (undirected gene association networks from microarray data. A key issue when the number of variables greatly exceeds the number of samples is the estimation of the matrix of partial correlations. Since the (Moore-Penrose inverse of the sample covariance matrix leads to poor estimates in this scenario, standard methods are inappropriate and adequate regularization techniques are needed. Popular approaches include biased estimates of the covariance matrix and high-dimensional regression schemes, such as the Lasso and Partial Least Squares. Results In this article, we investigate a general framework for combining regularized regression methods with the estimation of Graphical Gaussian models. This framework includes various existing methods as well as two new approaches based on ridge regression and adaptive lasso, respectively. These methods are extensively compared both qualitatively and quantitatively within a simulation study and through an application to six diverse real data sets. In addition, all proposed algorithms are implemented in the R package "parcor", available from the R repository CRAN. Conclusion In our simulation studies, the investigated non-sparse regression methods, i.e. Ridge Regression and Partial Least Squares, exhibit rather conservative behavior when combined with (local false discovery rate multiple testing in order to decide whether or not an edge is present in the network. For networks with higher densities, the difference in performance of the methods decreases. For sparse networks, we confirm the Lasso's well known tendency towards selecting too many edges, whereas the two-stage adaptive Lasso is an interesting alternative that provides sparser solutions. In our simulations, both sparse and non-sparse methods are able to reconstruct networks with cluster structures. On six real data sets, we also clearly distinguish the results

  8. Nonlinear Model-Based Predictive Control applied to Large Scale Cryogenic Facilities

    CERN Document Server

    Blanco Vinuela, Enrique; de Prada Moraga, Cesar

    2001-01-01

    The thesis addresses the study, analysis, development, and finally the real implementation of an advanced control system for the 1.8 K Cooling Loop of the LHC (Large Hadron Collider) accelerator. The LHC is the next accelerator being built at CERN (European Organization for Nuclear Research); it will use superconducting magnets operating below a temperature of 1.9 K along a circumference of 27 kilometers. The temperature of these magnets is a control parameter with strict operating constraints. The first control implementations applied a procedure that included linear identification, modelling and regulation using a linear predictive controller. This largely improved the overall performance of the plant with respect to a classical PID regulator, but the nature of the cryogenic processes pointed out the need for a more adequate technique, such as a nonlinear methodology. This thesis is a first step towards a global regulation strategy for the overall control of the LHC cells when they operate simultaneously....

  9. The Large-Scale Debris Avalanche From The Tancitaro Volcano (Mexico): Characterization And Modeling

    Science.gov (United States)

    Morelli, S.; Gigli, G.; Falorni, G.; Garduno Monroy, V. H.; Arreygue, E.

    2008-12-01

    until they disappear entirely in the most distal reaches. The granulometric analysis and the comparison between the debris avalanche of the Tancitaro and other collapses with similar morphometric features (vertical relief during runout, travel distance, volume and area of the deposit) indicate that the collapse was most likely not primed by any type of eruption, but rather triggered by a strong seismic shock that could have induced the failure of a portion of the edifice, already deeply altered by intense hydrothermal fluid circulation. It is also possible to hypothesize that mechanical fluidization may have been the mechanism controlling the long runout of the avalanche, as has been determined for other well-known events. The behavior of the Tancitaro debris avalanche was numerically modeled using the DAN-W code. By appropriately adjusting the rheological parameters of the different models selectable within DAN, it was determined that the two-parameter 'Voellmy model' provides the best approximation of the avalanche movement. The Voellmy model produces the most realistic results in terms of runout distance, velocity and spatial distribution of the failed mass. Since the Tancitaro event was not witnessed directly, it is possible to infer approximate velocities only from comparisons with similar, documented events, namely the Mt. St. Helens debris avalanche that occurred on May 18, 1980.
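
    For reference, the two-parameter Voellmy rheology used in DAN-type runout models writes the basal flow resistance as a frictional term plus a velocity-dependent turbulence term; here μ is the friction coefficient, ξ the turbulence coefficient, σ the bed-normal stress, ρ the flow density and v the flow velocity (the parameter values calibrated for the Tancitaro case are not reproduced here).

```latex
% Voellmy two-parameter basal resistance: friction term plus turbulence term
\tau \;=\; \mu\,\sigma \;+\; \frac{\rho\, g\, v^{2}}{\xi}
```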

  10. A model for large-scale, interprofessional, compulsory cross-cultural education with an indigenous focus.

    Science.gov (United States)

    Kickett, Marion; Hoffman, Julie; Flavell, Helen

    2014-01-01

    Cultural competency training for health professionals is now a recognised strategy to address health disparities between minority and white populations in Western nations. In Australia, urgent action is required to "Close the Gap" between the health outcomes of Indigenous Australians and the dominant European population, and significantly, cultural competency development for health professionals has been identified as an important element to providing culturally safe care. This paper describes a compulsory interprofessional first-year unit in a large health sciences faculty in Australia, which aims to begin students on their journey to becoming culturally competent health professionals. Reporting primarily on qualitative student feedback from the unit's first year of implementation as well as the structure, learning objects, assessment, and approach to coordinating the unit, this paper provides a model for implementing quality wide-scale, interprofessional cultural competence education within a postcolonial context. Critical factors for the unit's implementation and ongoing success are also discussed.

  11. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Kalashnikova, Irina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computational Mathematics; Arunajatesan, Srinivasan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Aerosciences Dept.; Barone, Matthew Franklin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Aerosciences Dept.; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Uncertainty Quantification and Optimization Dept.; Fike, Jeffrey A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Component Science and Mechanics Dept.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier
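
    The core POD/Galerkin construction referred to above can be sketched compactly for a generic linear system dx/dt = A x (the report's stability-preserving transformations and the symmetry inner product are not reproduced in this toy example):

```python
import numpy as np

# Proper orthogonal decomposition (POD) / Galerkin reduced-order model sketch
# for a linear ODE system dx/dt = A x, built from snapshot data.
rng = np.random.default_rng(0)
N, k = 200, 5                              # full order, reduced order
A = -np.diag(np.linspace(1.0, 10.0, N))    # toy stable full-order operator
x0 = rng.standard_normal(N)

# 1) Collect snapshots of the full model (forward Euler time stepping).
dt, steps = 1e-3, 500
X = np.empty((N, steps))
x = x0.copy()
for i in range(steps):
    x = x + dt * (A @ x)
    X[:, i] = x

# 2) POD basis = leading left singular vectors of the snapshot matrix.
U, _, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :k]

# 3) Galerkin projection: reduced operator and reduced initial condition.
Ar = Phi.T @ A @ Phi
xr = Phi.T @ x0

# 4) Integrate the small k x k system and lift back to the full space.
for _ in range(steps):
    xr = xr + dt * (Ar @ xr)
print("ROM vs FOM relative error:", np.linalg.norm(Phi @ xr - x) / np.linalg.norm(x))
```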

  12. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gene Golub; Kwok Ko

    2009-03-30

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve on the algorithms so that the ever-increasing problem sizes required by the physics application can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of the previous methods, with the goal of enabling accelerator simulations on what was then the world's largest unclassified supercomputer at NERSC for this class of problems. Specifically, a new method, the Hermitian skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
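
    The Hermitian/skew-Hermitian splitting (HSS) iteration mentioned above alternates two shifted solves per step; a minimal dense-matrix sketch (illustrative only, far from the sparse large-scale setting of the award) is:

```python
import numpy as np

def hss_solve(A, b, alpha, tol=1e-10, maxit=500):
    """Hermitian/skew-Hermitian splitting iteration for A x = b, with
    H = (A + A^H)/2 (Hermitian) and S = (A - A^H)/2 (skew-Hermitian)."""
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)
    S = 0.5 * (A - A.conj().T)
    I = np.eye(n)
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

# Toy non-Hermitian system whose Hermitian part is positive definite.
rng = np.random.default_rng(0)
n = 50
H0 = np.diag(np.linspace(1.0, 5.0, n))        # positive definite Hermitian part
M = rng.standard_normal((n, n))
A = (H0 + 0.5 * (M - M.T)).astype(complex)    # add a skew-symmetric part
b = rng.standard_normal(n).astype(complex)
x, iters = hss_solve(A, b, alpha=2.2)
print("iterations:", iters, "relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```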

  13. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    Science.gov (United States)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological or radiological material. With the growing fear of natural, accidental or deliberate release of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined to perform the task of source characterization and forecasting. These components are Uncertainty Quantification, Optimal Information Collection, and Data Assimilation. Precise approximation of prior statistics is crucial to ensure the performance of the source characterization process. In this work, an efficient quadrature-based method is utilized for quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach is utilized for the approximation of probabilistic hazard maps, based on a combination of polynomial chaos theory and the method of quadrature points. Besides precise quantification of uncertainty, having useful measurement data is also highly important to warrant accurate source parameter estimation. The performance of source characterization is strongly affected by the sensor configuration used for data observation. Hence, a general framework has been developed for the optimal allocation of data observation sensors, to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that measurement of better data is guaranteed. This is achieved by maximizing the mutual information between model predictions

  14. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that a 5D modelling (3D geometry plus time plus levels of details) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, by incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, surfaces within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances can be localized. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  15. Pangolin v1.0, a conservative 2-D advection model towards large-scale parallel calculation

    Directory of Open Access Journals (Sweden)

    A. Praga

    2015-02-01

    To exploit the possibilities of parallel computers, we designed a large-scale bidimensional atmospheric advection model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach for advection was chosen to ensure mass preservation and to ease parallelization. To overcome the pole restriction on time steps for a regular latitude–longitude grid, Pangolin uses a quasi-area-preserving reduced latitude–longitude grid. The features of the regular grid are exploited to reduce the memory footprint and enable effective parallel performance. In addition, a custom domain decomposition algorithm is presented. To assess the validity of the advection scheme, its results are compared with state-of-the-art models on algebraic test cases. Finally, parallel performance is shown in terms of strong scaling and confirms efficient scalability up to a few hundred cores.
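
    The conservative finite-volume treatment of advection that underlies Pangolin is easy to illustrate in one dimension with first-order upwind fluxes on a periodic grid (the model itself uses a reduced latitude-longitude grid in two dimensions; this toy sketch only shows why the finite-volume form preserves mass):

```python
import numpy as np

# 1-D conservative finite-volume advection with first-order upwind fluxes.
# Total mass (sum of cell averages times dx) is preserved to round-off.
nx, L, u = 200, 1.0, 1.0
dx = L / nx
dt = 0.4 * dx / u                      # CFL number 0.4
x = (np.arange(nx) + 0.5) * dx
q = np.exp(-200 * (x - 0.3) ** 2)      # initial tracer blob
mass0 = q.sum() * dx

for _ in range(500):
    flux = u * q                       # upwind flux at the right face of each cell (u > 0)
    q = q - (dt / dx) * (flux - np.roll(flux, 1))   # periodic boundary via roll

print("relative mass change:", abs(q.sum() * dx - mass0) / mass0)
```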

  16. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... based on measurements on the Marstal plant, Denmark, and through comparison with published and unpublished data from other plants. Evaluations on the thermal, economical and environmental performance are reported, based on experiences from the last decade. For detailed designing, a computer simulation...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors...

  17. Large scale kinematics and dynamical modelling of the Milky Way nuclear star cluster

    Science.gov (United States)

    Feldmeier, A.; Neumayer, N.; Seth, A.; Schödel, R.; Lützgendorf, N.; de Zeeuw, P. T.; Kissler-Patig, M.; Nishiyama, S.; Walcher, C. J.

    2014-10-01

    Context. Within the central 10 pc of our Galaxy lies a dense cluster of stars. This nuclear star cluster forms a distinct component of the Galaxy, and similar nuclear star clusters are found in most nearby spiral and elliptical galaxies. Studying the structure and kinematics of nuclear star clusters reveals the history of mass accretion and growth of galaxy nuclei and central massive black holes. Aims: Because the Milky Way nuclear star cluster is at a distance of only 8 kpc, we can spatially resolve the cluster on sub-parsec scales. This makes the Milky Way nuclear star cluster a reference object for understanding the formation of all nuclear star clusters. Methods: We have used the near-infrared long-slit spectrograph ISAAC (VLT) in a drift-scan to construct an integral-field spectroscopic map of the central ˜9.5 × 8 pc of our Galaxy, and six smaller fields out to 19 pc along the Galactic plane. We use this spectroscopic data set to extract stellar kinematics both of individual stars and from the unresolved integrated light spectrum. We present a velocity and dispersion map from the integrated light spectra and model these kinematics using kinemetry and axisymmetric Jeans models. We also measure radial velocities and CO bandhead strengths of 1375 spectra from individual stars. Results: We find kinematic complexity in the nuclear star cluster's radial velocity map, including a misalignment of the kinematic position angle by 9° counterclockwise relative to the Galactic plane, and indications of a rotating substructure perpendicular to the Galactic plane at a radius of 20'' or ˜0.8 pc. We determine the mass of the nuclear star cluster within r = 4.2 pc to be (1.4 +0.6/-0.7) × 10^7 M⊙. We also show that our kinematic data result in a significant underestimation of the supermassive black hole (SMBH) mass. Conclusions: The kinematic substructure and position angle misalignment may hint at distinct accretion events. This indicates that the Milky Way nuclear star

  18. The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models

    Science.gov (United States)

    Wanders, N.; Bierkens, M. F. P.; de Jong, S. M.; de Roo, A.; Karssenberg, D.

    2014-08-01

    Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system, in particular the unsaturated zone, remains uncalibrated. Soil moisture observations from satellites have the potential to fill this gap. Here we evaluate the added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: (1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? (2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to calibration based only on discharge observations, such that this leads to improved simulations of soil moisture content and discharge? A dual state and parameter Ensemble Kalman Filter is used to calibrate the hydrological model LISFLOOD for the Upper Danube. Calibration is done using discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS, and ASCAT. Calibration with discharge data improves the estimation of groundwater and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate identification of parameters related to land-surface processes. For the Upper Danube upstream area up to 40,000 km², calibration on both discharge and soil moisture results in a reduction by 10-30% in the RMSE for discharge simulations, compared to calibration on discharge alone. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models, leading to a better simulation of soil moisture content throughout the catchment and a better simulation of discharge in upstream areas. This article was corrected on 15 SEP 2014. See the end of the full text for details.
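
    The dual state and parameter estimation idea can be illustrated with a stochastic ensemble Kalman filter on a toy linear reservoir, in which an uncertain recession parameter is appended to the state vector and updated from noisy storage observations (a conceptual sketch, not the LISFLOOD/satellite configuration):

```python
import numpy as np

# Ensemble Kalman filter updating an augmented [storage, parameter] vector
# from noisy observations of storage (toy stand-in for discharge data).
rng = np.random.default_rng(0)
k_true, S_true = 0.10, 50.0
n_ens, n_steps, obs_std = 100, 60, 1.0

S_ens = rng.normal(40.0, 10.0, n_ens)          # storage ensemble
k_ens = rng.uniform(0.02, 0.30, n_ens)         # uncertain recession parameter

for t in range(n_steps):
    rain = 2.0
    # forecast step: propagate the truth and each ensemble member
    S_true = S_true + rain - k_true * S_true
    S_ens = S_ens + rain - k_ens * S_ens + rng.normal(0, 0.5, n_ens)   # model noise
    obs = S_true + rng.normal(0, obs_std)
    # analysis step on the augmented state x = [S, k]
    X = np.vstack([S_ens, k_ens])              # shape (2, n_ens)
    A = X - X.mean(axis=1, keepdims=True)
    HX = X[0]                                  # observation operator: observe S only
    HA = HX - HX.mean()
    P_xy = A @ HA / (n_ens - 1)                # cross covariance, shape (2,)
    P_yy = HA @ HA / (n_ens - 1) + obs_std ** 2
    K = P_xy / P_yy                            # Kalman gain, shape (2,)
    innov = obs + rng.normal(0, obs_std, n_ens) - HX   # perturbed observations
    X = X + np.outer(K, innov)
    S_ens, k_ens = X[0], X[1]

print("estimated k:", k_ens.mean(), "true k:", k_true)
```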

  19. Aerodynamic characteristics of a large-scale model with a swept wing and a jet flap having an expandable duct

    Science.gov (United States)

    Aiken, T. N.; Aoyagi, K.; Falarski, M. D.

    1973-01-01

    The data from an investigation of the aerodynamic characteristics of the expandable duct-jet flap concept are presented. The investigation was made using a large-scale model in the Ames 40- by 80-foot Wind Tunnel. The expandable duct-jet flap concept uses a lower surface, split flap and an upper surface, Fowler flap to form an internal, variable area cavity for the blowing air. Small amounts of blowing are used on the knee of the upper surface flap and the knee of a short-chord, trailing edge control flap. The bulk of the blowing is at the trailing edge. The flap could extend the full span of the model wing or over the inboard part only, with blown ailerons outboard. Primary configurations tested were two flap angles, typical of takeoff and landing; symmetric control flap deflections, primarily for improved landing performance; and asymmetric aileron and control flap deflections, for lateral control.

  20. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply JIT modeling online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.

  1. Large scale composting model

    OpenAIRE

    Henon , Florent; Debenest , Gérald; Tremier , Anne; Quintard , Michel; Martel , Jean-Luc; Duchalais , Guy

    2012-01-01

    One way to treat organic wastes in accordance with environmental policies is to develop biological treatment such as composting. Nevertheless, this development largely relies on the quality of the final product and, as a consequence, on the quality of the biological activity during the treatment. Favourable conditions (oxygen concentration, temperature and moisture content) in the waste bed largely contribute to the establishment of a good aerobic biological activity an...

  2. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    Science.gov (United States)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick

    2014-01-01

    In this paper, a physical method to obtain control-oriented dynamical models of large-scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, and offer safer utilization of cryoplants. The paper gives details on how basic components used in the field of large-scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (the controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  3. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity, defined as the number of Newton-like iterations performed over the course of the simulation, by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model

  4. Modelling bark beetle disturbances in a large scale forest scenario model to assess climate change impacts and evaluate adaptive management strategies

    NARCIS (Netherlands)

    Seidl, R.; Schelhaas, M.J.; Lindner, M.; Lexer, M.J.

    2009-01-01

    To study potential consequences of climate-induced changes in the biotic disturbance regime at regional to national scale we integrated a model of Ips typographus (L. Scol. Col.) damages into the large-scale forest scenario model EFISCEN. A two-stage multivariate statistical meta-model was used to

  5. Long-term modelling of Carbon Capture and Storage, Nuclear Fusion, and large-scale District Heating

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik; Korsholm, Søren Bang; Lüthje, Mikael

    2011-01-01

    Among the technologies for mitigating greenhouse gasses, carbon capture and storage (CCS) and nuclear fusion are interesting in the long term. In several studies with time horizon 2050 CCS has been identified as an important technology, while nuclear fusion cannot become commercially available...... on nuclear fusion and the Pan European TIMES model, respectively. In the next decades CCS can be a driver for the development and expansion of large-scale district heating systems, which are currently widespread in Europe, Korea and China, and with large potentials in North America. If fusion will replace...... fossil fuel power plants with CCS in the second half of the century, the same infrastructure for heat distribution can be used which will support the penetration of both technologies. This paper will address the issue of infrastructure development and the use of CCS and fusion technologies using...

  6. Co-evolution of intelligent socio-technical systems modelling and applications in large scale emergency and transport domains

    CERN Document Server

    2013-01-01

    As the interconnectivity between humans through technical devices is becoming ubiquitous, the next step is already in the making: ambient intelligence, i.e. smart (technical) environments, which will eventually play the same active role in communication as the human players, leading to a co-evolution in all domains where real-time communication is essential. This topical volume, based on the findings of the Socionical European research project, gives equal attention to two highly relevant domains of applications: transport, specifically traffic, dynamics from the viewpoint of a socio-technical interaction and evacuation scenarios for large-scale emergency situations. Care was taken to investigate as much as possible the limits of scalability and to combine the modeling using complex systems science approaches with relevant data analysis.

  7. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only the discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computational load.
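
    The virtual-control-input idea can be sketched on a scalar system: the discrete input is relaxed to the interval [0, 1] for the horizon optimization, and only its first value is quantized before being applied in receding-horizon fashion (the dynamics, weights, and use of a generic bounded optimizer below are illustrative assumptions, not the paper's formulation):

```python
import numpy as np
from scipy.optimize import minimize

# Receding-horizon MPC sketch with one continuous input u and one discrete input d in {0, 1}.
# The discrete input is relaxed to a "virtual" input in [0, 1] for the horizon optimization;
# only its first value is quantized and applied.  Dynamics and weights are illustrative.
A, Bu, Bd = 0.9, 0.5, 1.0     # scalar linear dynamics: x+ = A x + Bu u + Bd d
N = 10                        # prediction horizon
x = 5.0                       # initial state

def horizon_cost(z, x0):
    u, d = z[:N], z[N:]       # continuous inputs and relaxed (virtual) discrete inputs
    cost, xk = 0.0, x0
    for k in range(N):
        xk = A * xk + Bu * u[k] + Bd * d[k]
        cost += xk ** 2 + 0.1 * u[k] ** 2 + 0.1 * d[k] ** 2
    return cost

for t in range(15):
    bounds = [(-2.0, 2.0)] * N + [(0.0, 1.0)] * N
    res = minimize(horizon_cost, np.zeros(2 * N), args=(x,), bounds=bounds, method="L-BFGS-B")
    u_now = res.x[0]
    d_now = int(res.x[N] >= 0.5)          # quantize only the current virtual input
    x = A * x + Bu * u_now + Bd * d_now   # apply and move one step forward
    print(f"t={t:2d}  u={u_now:+.3f}  d={d_now}  x={x:+.3f}")
```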

  8. Parameterization of ice- and water clouds and their radiation-transport properties for large-scale atmospheric models

    International Nuclear Information System (INIS)

    Rockel, B.

    1988-01-01

    A model of cloud and radiation transport for large-scale atmospheric models is introduced which, besides the water phase, also takes the ice phase into account. The cloud model can diagnostically determine the degree of cloud cover, liquid water and ice content from the state parameters provided by the atmospheric model. It consists of four submodels for non-convective and convective cloudiness, boundary layer clouds and ice clouds. An existing radiation model was extended for the parametrization of the radiation transport in ice clouds. The model now allows the radiation transport to be calculated in water clouds as well as in ice clouds. Liquid and solid water phases can coexist according to a simple mixing assumption. The results of a sensitivity study show a strong reaction of the cloud cover degree to changes in the relative humidity. Compared with this, variations of temperature and vertical wind velocity are of minor importance. The model of radiation transport reacts most sensitively to variations of the cloud cover degree and ice content. Changes of these two factors by about 20% lead to changes in the average warming rates on the order of magnitude of 0.1 K. (orig./KW)

  9. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  10. Multivariate geostatistical modeling of the spatial sediment distribution in a large scale drainage basin, Upper Rhone, Switzerland

    Science.gov (United States)

    Schoch, Anna; Blöthe, Jan Henrik; Hoffmann, Thomas; Schrott, Lothar

    2018-02-01

    volume focus on the broad glacially overdeepened Rhone Valley that accounts for c. 9% of our study area. While our data support its importance, we conservatively estimate that the remaining 90% of sediment cover, mainly located outside trunk valleys, account for a volume of 2.6-13 km3, i.e. 2-16% of the estimated sediment volume stored in the Rhone Valley between Brig and Lake Geneva. Furthermore, our data reveal increased relative sediment cover in areas deglaciated since the Little Ice Age, as compared to headwater regions without this recent glacial imprint. We therefore conclude that sediment storage in low-order valleys, often neglected in large scale studies, constitutes a significant component of large scale sediment budgets that needs to be better included into future analysis.

  11. A stochastic mathematical model to locate field hospitals under disruption uncertainty for large-scale disaster preparedness

    Directory of Open Access Journals (Sweden)

    Nezir Aydin

    2016-03-01

    In this study, we consider field hospital location decisions for emergency treatment points in response to large-scale disasters. Specifically, we developed a two-stage stochastic model that determines the number and locations of field hospitals and the allocation of injured victims to these field hospitals. Our model considers the locations as well as the possible failure of the existing public hospitals while deciding on the location of the field hospitals that are anticipated to be opened. The model that we developed is a variant of the P-median location model and it integrates both capacity restrictions on the field hospitals that are planned to be opened and the disruptions that occur in existing public hospitals. We conducted experiments to demonstrate how the proposed model can be utilized in practice in a real-life case scenario. Results show the effects of failures of existing hospitals, the level of failure probability and the capacity of projected field hospitals on the assessment of a given emergency treatment system's performance. Crucially, the model also provides an assessment of the average distance over which a victim needs to be transferred in order to be treated properly; from this, the proportion of total satisfied demand is calculated.
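
    The scenario-based P-median core of such a model can be conveyed with a small brute-force sketch that picks field-hospital sites minimizing the expected demand-weighted distance over disruption scenarios; the capacities and hospital-failure mechanics of the actual two-stage model are omitted and all numbers are made up:

```python
import itertools
import numpy as np

# Brute-force scenario-based P-median sketch: pick p field-hospital sites that
# minimize the expected demand-weighted distance to the nearest open site.
rng = np.random.default_rng(0)
n_demand, n_sites, p = 30, 8, 3
demand_xy = rng.uniform(0, 100, (n_demand, 2))
site_xy = rng.uniform(0, 100, (n_sites, 2))
dist = np.linalg.norm(demand_xy[:, None, :] - site_xy[None, :, :], axis=2)

# two disruption scenarios: (probability, number of victims at each demand point)
scenarios = [(0.6, rng.integers(1, 20, n_demand)),
             (0.4, rng.integers(10, 60, n_demand))]

best_cost, best_sites = np.inf, None
for sites in itertools.combinations(range(n_sites), p):
    nearest = dist[:, list(sites)].min(axis=1)          # distance to the closest open site
    cost = sum(prob * float(nearest @ victims) for prob, victims in scenarios)
    if cost < best_cost:
        best_cost, best_sites = cost, sites

print("chosen sites:", best_sites, "expected weighted distance:", round(best_cost, 1))
```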

  12. Large-scale modeling on the fate and transport of polycyclic aromatic hydrocarbons (PAHs) in multimedia over China

    Science.gov (United States)

    Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.

    2017-12-01

    In recent decades, with rapid economic growth, industrial development and urbanization, expanding pollution from polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, the availability of sufficient monitoring of PAHs across multiple environmental compartments and of the corresponding multi-interface migration processes is still limited, especially over a large geographic area. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-Scale Air Quality (CMAQ) model in order to consider the fugacity and the transient contamination processes. This coupled dynamic contaminant model can evaluate the detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water and vegetation) across different spatial (a county to the country) and temporal (days to years) scales. This model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution. The model considers the response characteristics of typical environmental media to a complex underlying surface. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sinks of PAHs and have the longest retention times. Importantly, the highest PAHs loadings are found in urbanized and densely populated regions of China, such as the Yangtze River Delta and Pearl River Delta. This model can provide a good scientific basis towards a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development. In a next step, the dynamic contaminant model will be integrated with the continental-scale hydrological and water resources model (i.e., Community Water Model, CWatM) to quantify a more accurate representation and feedbacks between the hydrological cycle and water quality at

  13. The application of ICOM, a non-hydrostatic, fully unstructured mesh model in large scale ocean domains

    Science.gov (United States)

    Kramer, Stephan C.; Piggott, Matthew D.; Cotter, Colin J.; Pain, Chris C.; Nelson, Rhodri B.

    2010-05-01

    given of some of the difficulties that were encountered in the application of ICOM in large scale, high aspect ratio ocean domains and how they have been overcome. A large scale application in the form of a baroclinic, wind-driven double gyre will be presented and the results are compared to two other models, the MIT general circulation model (MITgcm, [3]) and NEMO (Nucleus for European Modelling of the Ocean, [4]). Also a comparison of the performance and parallel scaling of the models on a supercomputing platform will be made. References [1] M.D. Piggott, G.J. Gorman, C.C. Pain, P.A. Allison, A.S. Candy, B.T. Martin and W.R. Wells, "A new computational framework for multi-scale ocean modelling based on adapting unstructured meshes", International Journal for Numerical Methods in Fluids 56, pp 1003 - 1015, 2008 [2] S.C. Kramer, C.J. Cotter and C.C. Pain, "Solving the Poisson equation on small aspect ratio domains using unstructured meshes", submitted to Ocean Modelling [3] J. Marshall, C. Hill, L. Perelman, and A. Adcroft, "Hydrostatic, quasi-hydrostatic, and nonhydrostatic ocean modeling", J. Geophysical Res., 102(C3), pp 5733-5752, 1997 [4] G. Madec, "NEMO ocean engine", Note du Pole de modélisation, Institut Pierre-Simon Laplace (IPSL), France, No 27 ISSN No 1288-1619

  14. New Techniques Used in Modeling the 2017 Total Solar Eclipse: Energizing and Heating the Large-Scale Corona

    Science.gov (United States)

    Downs, Cooper; Mikic, Zoran; Linker, Jon A.; Caplan, Ronald M.; Lionello, Roberto; Torok, Tibor; Titov, Viacheslav; Riley, Pete; Mackay, Duncan; Upton, Lisa

    2017-08-01

    Over the past two decades, our group has used a magnetohydrodynamic (MHD) model of the corona to predict the appearance of total solar eclipses. In this presentation we detail recent innovations and new techniques applied to our prediction model for the August 21, 2017 total solar eclipse. First, we have developed a method for capturing the large-scale energized fields typical of the corona, namely the sheared/twisted fields built up through long-term processes of differential rotation and flux emergence/cancellation. Using inferences of the location and chirality of filament channels (deduced from a magnetofrictional model driven by the evolving photospheric field produced by the Advective Flux Transport model), we tailor a customized boundary electric field profile that will emerge shear along the desired portions of polarity inversion lines (PILs) and cancel flux to create long twisted flux systems low in the corona. This method has the potential to improve the morphological shape of streamers in the low solar corona. Second, we apply, for the first time in our eclipse prediction simulations, a new wave-turbulence-dissipation (WTD) based model for coronal heating. This model has substantially fewer free parameters than previous empirical heating models, but is inherently sensitive to the 3D geometry and connectivity of the coronal field, a key property for modeling/predicting the thermal-magnetic structure of the solar corona. Overall, we will examine the effect of these considerations on white-light and EUV observables from the simulations, and present them in the context of our final 2017 eclipse prediction model. Research supported by NASA's Heliophysics Supporting Research and Living With a Star Programs.

  15. Large-scale effects of migration and conflict in pre-agricultural groups: Insights from a dynamic model.

    Directory of Open Access Journals (Sweden)

    Francesco Gargano

    The debate on the causes of conflict in human societies has deep roots. In particular, the extent of conflict in hunter-gatherer groups remains unclear. Some authors suggest that large-scale violence only arose with the spreading of agriculture and the building of complex societies. To shed light on this issue, we developed a model based on operatorial techniques simulating population-resource dynamics within a two-dimensional lattice, with humans and natural resources interacting in each cell of the lattice. The model outcomes under different conditions were compared with recently available demographic data for prehistoric South America. Only under conditions that include migration among cells and conflict was the model able to consistently reproduce the empirical data at a continental scale. We argue that the interplay between resource competition, migration, and conflict drove the population dynamics of South America after the colonization phase and before the introduction of agriculture. The relation between population and resources indeed emerged as a key factor leading to migration and conflict once the carrying capacity of the environment has been reached.

  16. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    The paper presents three time warning distances for the safe driving of multiple groups of vehicles in a highway tunnel environment, treated as a large-scale system and based on a distributed model predictive control approach. Generally speaking, the system includes two parts. First, multiple vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. The optimization for each group considers both local performance and that of the neighboring subgroups, which ensures the global optimization performance. Second, the three time warning distances are studied based on the basic principles used for highway intelligent space (HIS), and the information framework concept is proposed for the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles under fog, rain, or snow conditions.

  17. Instantaneous Linkages between Clouds and Large-Scale Meteorology over the Southern Ocean in Observations and a Climate Model

    Energy Technology Data Exchange (ETDEWEB)

    Wall, Casey J. [Department of Atmospheric Sciences, University of Washington, Seattle, Washington; Hartmann, Dennis L. [Department of Atmospheric Sciences, University of Washington, Seattle, Washington; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington

    2017-12-01

    Instantaneous, coincident, footprint-level satellite observations of cloud properties and radiation taken during austral summer over the Southern Ocean are used to study relationships between clouds and large-scale meteorology. Cloud properties are very sensitive to the strength of vertical motion in the middle-troposphere, and low-cloud properties are sensitive to estimated inversion strength, low-level temperature advection, and sea surface temperature. These relationships are quantified. An index for the meteorological anomalies associated with midlatitude cyclones is presented, and it is used to reveal the sensitivity of clouds to the meteorology within the warm- and cold-sector of cyclones. The observed relationships between clouds and meteorology are compared to those in the Community Atmosphere Model version 5 (CAM5) using satellite simulators. Low-clouds simulated by CAM5 are too few, too bright, and contain too much ice, and low-clouds located in the cold-sector of cyclones are too sensitive to variations in the meteorology. The latter two biases are dramatically reduced when CAM5 is coupled with an updated boundary layer parameterization known as Cloud Layers Unified by Binormals (CLUBB). More generally, this study demonstrates that examining the instantaneous timescale is a powerful approach to understanding the physical processes that control clouds and how they are represented in climate models. Such an evaluation goes beyond the cloud climatology and exposes model bias under various meteorological conditions.

  18. Large-scale effects of migration and conflict in pre-agricultural groups: Insights from a dynamic model.

    Science.gov (United States)

    Gargano, Francesco; Tamburino, Lucia; Bagarello, Fabio; Bravo, Giangiacomo

    2017-01-01

    The debate on the causes of conflict in human societies has deep roots. In particular, the extent of conflict in hunter-gatherer groups remains unclear. Some authors suggest that large-scale violence only arose with the spreading of agriculture and the building of complex societies. To shed light on this issue, we developed a model based on operatorial techniques simulating population-resource dynamics within a two-dimensional lattice, with humans and natural resources interacting in each cell of the lattice. The model outcomes under different conditions were compared with recently available demographic data for prehistoric South America. Only under conditions that include migration among cells and conflict was the model able to consistently reproduce the empirical data at a continental scale. We argue that the interplay between resource competition, migration, and conflict drove the population dynamics of South America after the colonization phase and before the introduction of agriculture. The relation between population and resources indeed emerged as a key factor leading to migration and conflict once the carrying capacity of the environment has been reached.

  19. Scramjet test flow reconstruction for a large-scale expansion tube, Part 1: quasi-one-dimensional modelling

    Science.gov (United States)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2017-11-01

    Large-scale free-piston driven expansion tubes have uniquely high total pressure capabilities which make them an important resource for development of access-to-space scramjet engine technology. However, many aspects of their operation are complex, and their test flows are fundamentally unsteady and difficult to measure. While computational fluid dynamics methods provide an important tool for quantifying these flows, these calculations become very expensive with increasing facility size and therefore have to be carefully constructed to ensure sufficient accuracy is achieved within feasible computational times. This study examines modelling strategies for a Mach 10 scramjet test condition developed for The University of Queensland's X3 facility. The present paper outlines the challenges associated with test flow reconstruction, describes the experimental set-up for the X3 experiments, and then details the development of an experimentally tuned quasi-one-dimensional CFD model of the full facility. The 1-D model, which accurately captures longitudinal wave processes, is used to calculate the transient flow history in the shock tube. This becomes the inflow to a higher-fidelity 2-D axisymmetric simulation of the downstream facility, detailed in the Part 2 companion paper, leading to a validated, fully defined nozzle exit test flow.

  20. Large scale model predictions on the effect of GDL thermal conductivity and porosity on PEM fuel cell performance

    Directory of Open Access Journals (Sweden)

    Obaid ur Rehman

    2017-12-01

    The performance of a proton exchange membrane (PEM) fuel cell relies largely on the properties of the gas diffusion layer (GDL), which supports heat and mass transfer across the membrane electrode assembly. A novel approach is adopted in this work to analyze the activity of the GDL during fuel cell operation on a large-scale model. The model, with a mesh size of 1.3 million computational cells for a 50 cm² active area, was simulated by a parallel computing technique via a computer cluster. A grid independence study showed less than 5% deviation in the criterion parameter as the mesh size was increased to 1.8 million cells. Good approximation was achieved as the model was validated against experimental data for a Pt loading of 1 mg cm⁻². The results showed that a GDL with higher thermal conductivity prevented the PEM from drying and led to improved protonic conduction. A GDL with higher porosity enhanced the reaction but resulted in a lower output voltage, which demonstrated the effect of contact resistance. In addition, reduced porosity under the rib regions was significant, which resulted in lower gas diffusion and heat and water accumulation.

  1. Large Scale Biologically Realistic Models of Cortical Microcircuit Dynamics with Application to Novel Statistical Classifiers (Pilot Investigation)

    National Research Council Canada - National Science Library

    Goodman, Philip

    2000-01-01

    The purpose of this project was to better understand brain-like network dynamics by incorporating biological parameters into large-scale computer simulations using parallel distributed "Beowulf" clustering...

  2. A Limited-Memory BFGS Algorithm Based on a Trust-Region Quadratic Model for Large-Scale Nonlinear Equations.

    Science.gov (United States)

    Li, Yong; Yuan, Gonglin; Wei, Zengxin

    2015-01-01

    In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.
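
    The limited-memory BFGS ingredient can be sketched via the standard two-loop recursion that turns a short history of (s, y) pairs into a quasi-Newton direction; the trust-region subproblem machinery of the paper is not reproduced here, and the fixed step length in the demo is purely illustrative.

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """Two-loop recursion: approximate -(H_k) * grad from the last few (s, y) pairs,
    with s = x_{k+1} - x_k and y = g_{k+1} - g_k.  Returns a quasi-Newton direction."""
    q = grad.copy()
    stack = []
    for s, y in reversed(list(zip(s_hist, y_hist))):     # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        stack.append((rho, a, s, y))
    if y_hist:                                            # initial scaling H0 = gamma * I
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for rho, a, s, y in reversed(stack):                  # oldest pair first
        beta = rho * (y @ q)
        q += (a - beta) * s
    return -q

# Tiny demo on a quadratic f(x) = 0.5 x'Ax - b'x (gradient g(x) = Ax - b).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
g = lambda x: A @ x - b
x, s_hist, y_hist = np.zeros(2), [], []
for _ in range(20):
    d = lbfgs_direction(g(x), s_hist, y_hist)
    x_new = x + 0.5 * d                                   # fixed step, for simplicity
    s_hist.append(x_new - x)
    y_hist.append(g(x_new) - g(x))
    s_hist, y_hist = s_hist[-5:], y_hist[-5:]             # limited memory of 5 pairs
    x = x_new
print("L-BFGS iterate:", x, " exact solution:", np.linalg.solve(A, b))
```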

  3. Study of materials and machines for 3D printed large-scale, flexible electronic structures using fused deposition modeling

    Science.gov (United States)

    Hwang, Seyeon

    Three-dimensional printing (3DP), also called additive manufacturing (AM) or rapid prototyping (RP), has emerged to revolutionize manufacturing and completely transform how products are designed and fabricated. A great deal of research activity has been carried out to apply this new technology to a variety of fields. In spite of many endeavors, much more research is still required to perfect the processes of the 3D printing techniques, especially in the areas of large-scale additive manufacturing and flexible printed electronics. The principles of various 3D printing processes are briefly outlined in the Introduction Section. New types of thermoplastic polymer composites aimed at specified functional applications are also introduced in this section. Chapter 2 presents studies of metal/polymer composite filaments for the fused deposition modeling (FDM) process. Various metal particles, copper and iron particles, are added into thermoplastic polymer matrices as the reinforcement filler. The thermo-mechanical properties, such as thermal conductivity, hardness, tensile strength, and fracture mechanism, of the composites are tested to determine the effects of metal fillers on 3D printed composite structures for the large-scale printing process. In Chapter 3, carbon/polymer composite filaments are developed by a simple mechanical blending process with the aim of fabricating flexible 3D printed electronics as a single structure. Various types of carbon particles consisting of multi-wall carbon nanotube (MWCNT), conductive carbon black (CCB), and graphite are used as the conductive fillers to provide the thermoplastic polyurethane (TPU) with improved electrical conductivity. The mechanical behavior and conduction mechanisms of the developed composite materials are examined in terms of the loading amount of carbon fillers in this section. Finally, the prototype flexible electronics are modeled and manufactured by the FDM process using Carbon/TPU composite filaments and

  4. Predicting Soil Infiltration and Horizon Thickness for a Large-Scale Water Balance Model in an Arid Environment

    Directory of Open Access Journals (Sweden)

    Tadaomi Saito

    2016-03-01

    Prediction of soil characteristics over large areas is desirable for environmental modeling. In arid environments, soil characteristics often show strong ecological connectivity with natural vegetation, specifically biomass and/or canopy cover, suggesting that the soil characteristics may be predicted from vegetation data. The objective of this study was to predict soil infiltration characteristics and horizon (soil layer) thickness using vegetation data for a large-scale water balance model in an arid region. Double-ring infiltrometer tests (at 23 sites), horizon thickness measurements (58 sites) and vegetation surveys (35 sites) were conducted in a 30 km × 50 km area in Western Australia during 1999 to 2003. The relationships between soil parameters and vegetation data were evaluated quantitatively by simple linear regression. The parameters for initial-term infiltration had strong and positive correlations with biomass and canopy coverage (R² = 0.64-0.81). The horizon thickness also had strong positive correlations with vegetation properties (R² = 0.53-0.67). These results suggest that the soil infiltration parameters and horizon thickness can be spatially predicted from vegetation properties using linear-regression-based equations and vegetation maps. The background and reasons for the strong ecological connectivity between soil and vegetation in this region were also considered.
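
    The simple linear regressions used here amount to ordinary least squares plus a coefficient of determination; a generic sketch with made-up numbers (not the Western Australia data) is:

```python
import numpy as np

# Ordinary least-squares fit of a soil property (e.g., horizon thickness) against a
# vegetation property (e.g., canopy cover), with the coefficient of determination R^2.
canopy = np.array([5., 12., 20., 28., 35., 44., 52., 63.])      # hypothetical % cover
thickness = np.array([8., 15., 22., 30., 33., 45., 50., 66.])   # hypothetical cm

slope, intercept = np.polyfit(canopy, thickness, 1)
pred = slope * canopy + intercept
r2 = 1 - np.sum((thickness - pred) ** 2) / np.sum((thickness - thickness.mean()) ** 2)
print(f"thickness ≈ {slope:.2f} * cover + {intercept:.2f},  R² = {r2:.2f}")
```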

  5. Context-dependent encoding of fear and extinction memories in a large-scale network model of the basal amygdala.

    Directory of Open Access Journals (Sweden)

    Ioannis Vlachos

    2011-03-01

    Full Text Available The basal nucleus of the amygdala (BA) is involved in the formation of context-dependent conditioned fear and extinction memories. To understand the underlying neural mechanisms we developed a large-scale neuron network model of the BA, composed of excitatory and inhibitory leaky-integrate-and-fire neurons. Excitatory BA neurons received conditioned stimulus (CS)-related input from the adjacent lateral nucleus (LA) and contextual input from the hippocampus or medial prefrontal cortex (mPFC). We implemented a plasticity mechanism according to which CS and contextual synapses were potentiated if CS and contextual inputs temporally coincided on the afferents of the excitatory neurons. Our simulations revealed a differential recruitment of two distinct subpopulations of BA neurons during conditioning and extinction, mimicking the activation of experimentally observed cell populations. We propose that these two subgroups encode contextual specificity of fear and extinction memories, respectively. Mutual competition between them, mediated by feedback inhibition and driven by contextual inputs, regulates the activity in the central amygdala (CEA), thereby controlling amygdala output and fear behavior. The model makes multiple testable predictions that may advance our understanding of fear and extinction memories.
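
    A minimal sketch of the leaky integrate-and-fire dynamics named above, using a forward Euler update; the parameter values are conventional textbook defaults, not those of the basal amygdala network model.

```python
# Generic leaky integrate-and-fire (LIF) update with a forward Euler step, the
# building block named in the abstract. Parameter values are conventional
# defaults, not those of the basal amygdala network model.
DT, TAU_M, V_REST, V_RESET, V_THRESH, R_M = 0.1, 20.0, -70.0, -70.0, -50.0, 10.0  # ms, mV, Mohm

def simulate_lif(input_current_na, steps=1000):
    """Simulate one LIF neuron for steps*DT ms and return spike times (ms)."""
    v, spikes = V_REST, []
    for step in range(steps):
        dv = (-(v - V_REST) + R_M * input_current_na) / TAU_M
        v += DT * dv
        if v >= V_THRESH:          # threshold crossing: emit a spike and reset
            spikes.append(step * DT)
            v = V_RESET
    return spikes

print(f"{len(simulate_lif(2.5))} spikes in 100 ms at 2.5 nA of input current")
```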

  6. Application of Large-Scale, Multi-Resolution Watershed Modeling Framework Using the Hydrologic and Water Quality System (HAWQS)

    Directory of Open Access Journals (Sweden)

    Haw Yen

    2016-04-01

    Full Text Available In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources allocation, sediment transport, and pollution control. Among commonly adopted models, the Soil and Water Assessment Tool (SWAT) has been demonstrated to provide superior performance with a large amount of referencing databases. However, it is cumbersome to perform tedious initialization steps such as preparing inputs and developing a model for each new targeted study area. In this study, the Hydrologic and Water Quality System (HAWQS) is introduced to serve as a national-scale Decision Support System (DSS) to conduct challenging watershed modeling tasks. HAWQS is a web-based DSS developed and maintained by Texas A&M University, and supported by the U.S. Environmental Protection Agency. Three spatial resolutions of Hydrologic Unit Codes (HUC8, HUC10, and HUC12) and three temporal scales (daily, monthly, and annual time steps) are available as alternatives for general users. In addition, users can specify preferred values of model parameters instead of using the pre-defined sets. With the aid of HAWQS, users can generate a preliminarily calibrated SWAT project within a few minutes by only providing the ending HUC number of the targeted watershed and the simulation period. In the case study, HAWQS was implemented on the Illinois River Basin, USA, with graphical demonstrations and associated analytical results. Scientists and/or decision-makers can take advantage of the HAWQS framework while conducting relevant topics or policies in the future.

  7. Coupling a distributed hydrological model with detailed forest structural information for large-scale global change impact assessment

    Science.gov (United States)

    Eisner, Stephanie; Huang, Shaochun; Majasalmi, Titta; Bright, Ryan; Astrup, Rasmus; Beldring, Stein

    2017-04-01

    Forests are recognized for their decisive effect on landscape water balance, with structural forest characteristics such as stand density or species composition determining energy partitioning and dominant flow paths. However, spatial and temporal variability in forest structure is often poorly represented in hydrological modeling frameworks, in particular in regional to large-scale hydrological modeling and impact analysis. As a common practice, prescribed land cover classes (including different generic forest types) are linked to parameter values derived from literature, or parameters are determined by calibration. While national forest inventory (NFI) data provide comprehensive, detailed information on hydrologically relevant forest characteristics, their potential to inform hydrological simulation over larger spatial domains is rarely exploited. In this study we present a modeling framework that couples the distributed hydrological model HBV with forest structural information derived from the Norwegian NFI and multi-source remote sensing data. The modeling framework, set up for the whole of continental Norway at 1 km spatial resolution, is explicitly designed to study the combined and isolated impacts of climate change, forest management and land use change on hydrological fluxes. We use a forest classification system based on forest structure rather than biomes, which implicitly accounts for the impacts of forest management on forest structural attributes. In the hydrological model, different forest classes are represented by three parameters: leaf area index (LAI), mean tree height and surface albedo. Seasonal cycles of LAI and surface albedo are dynamically simulated to make the framework applicable under climate change conditions. Based on a hindcast for the pilot regions Nord-Trøndelag and Sør-Trøndelag, we show how forest management has affected regional hydrological fluxes during the second half of the 20th century, as contrasted with climate variability.
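
    The seasonal forest forcing mentioned above can be illustrated with a smooth annual LAI cycle of the kind a hydrological model can read as input; the functional form and parameter values below are assumptions for demonstration only, not the HBV/NFI parameterization used in the study.

```python
# Illustrative seasonal leaf area index (LAI) cycle that a hydrological model
# could take as forest forcing. Form and parameters are assumptions for
# demonstration, not the HBV/NFI parameterization.
import math

def seasonal_lai(day_of_year, lai_min=0.5, lai_max=4.0, peak_day=200):
    """Smooth annual cycle between a winter minimum and summer maximum LAI."""
    phase = 2.0 * math.pi * (day_of_year - peak_day) / 365.0
    return lai_min + 0.5 * (lai_max - lai_min) * (1.0 + math.cos(phase))

for doy in (15, 105, 200, 290):
    print(f"day {doy:3d}: LAI = {seasonal_lai(doy):.2f}")
```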

  8. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    Science.gov (United States)

    Defourny, P.

    2013-12-01

    Remotely sensed biophysical variables such as the Green Area Index (GAI), fAPAR and fcover, usually retrieved from MODIS, MERIS and SPOT-Vegetation, describe the quality of green vegetation development. The GLOBAM (Belgium) and EU FP-7 MOCCCASIN (Russia) projects improved the standard products, which were demonstrated at large scale. The GAI retrieved from MODIS time series using a purity index criterion successfully depicted the inter-annual variability. Furthermore, the quantitative assimilation of these GAI time series into a crop growth model improved the yield estimate over years. These results showed that GAI assimilation works best at the district or provincial level. In the context of GEO Ag., the Joint Experiment for Crop Assessment and Monitoring (JECAM) was designed to enable the global agricultural monitoring community to compare such methods and results over a variety of regional cropping systems. For a network of test sites around the world, satellite and field measurements are currently collected and will be made available for collaborative effort. This experiment should facilitate international standards for data products and reporting, eventually supporting the development of a global system of systems for agricultural crop assessment and monitoring.

  9. The Climate Potentials and Side-Effects of Large-Scale terrestrial CO2 Removal - Insights from Quantitative Model Assessments

    Science.gov (United States)

    Boysen, L.; Heck, V.; Lucht, W.; Gerten, D.

    2015-12-01

    Terrestrial carbon dioxide removal (tCDR) through dedicated biomass plantations is considered one climate engineering (CE) option if implemented at large scale. While the risks and costs are supposed to be small, the effectiveness depends strongly on the spatial and temporal scales of implementation. Based on simulations with a dynamic global vegetation model (LPJmL) we comprehensively assess the effectiveness, biogeochemical side-effects and tradeoffs from an Earth-system-analytic perspective. We analyzed systematic land-use scenarios in which all, 25%, or 10% of natural and/or agricultural areas are converted to tCDR plantations, including the assumption that biomass plantations are established once the 2°C target is crossed in a business-as-usual climate change trajectory. The resulting tCDR potentials in year 2100 include the net accumulated annual biomass harvests and changes in all land carbon pools. We find that only the most spatially excessive, and thus undesirable, scenario would be capable of restoring the 2°C target by 2100 under continuing high emissions (with a cooling of 3.02°C). Large-scale biomass plantations covering areas between 1.1 and 4.2 Gha would produce a climate reduction potential of 0.8-1.4°C. tCDR plantations at smaller scales do not build up enough biomass over the considered period, and the potentials to achieve global warming reductions are substantially lowered to no more than 0.5-0.6°C. Finally, we demonstrate that the (non-economic) costs for the Earth system include negative impacts on the water cycle and on ecosystems, which are already under pressure due to both land use change and climate change. Overall, tCDR may lead to a further transgression of land- and water-related planetary boundaries while not being able to set back the crossing of the planetary boundary for climate change. tCDR could still be considered in the near-future mitigation portfolio if implemented on small scales in wisely chosen areas.

  10. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale, 120 h⁻¹ Mpc, in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  11. Large-scale subject-specific cerebral arterial tree modeling using automated parametric mesh generation for blood flow simulation.

    Science.gov (United States)

    Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A

    2017-12-01

    In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of the cerebral arterial tree. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using receiver operating characteristic (ROC) curves. Geometric accuracy evaluation showed agreement between the constructed mesh and the raw MRA data sets, with an area-under-the-curve value of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
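
    The geometric validation described above amounts to scoring each voxel by mesh occupancy against a vessel/background label from the raw image and summarizing the agreement with the area under the ROC curve; the sketch below shows that computation on a tiny, hypothetical sample.

```python
# Sketch of the validation idea: voxel labels (vessel / not vessel) from the
# raw image as ground truth, mesh occupancy as the score, agreement summarized
# by the area under the ROC curve. Arrays are tiny stand-ins for real volumes.
import numpy as np
from sklearn.metrics import roc_auc_score

raw_vessel_labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])                   # from thresholded MRA (hypothetical)
mesh_occupancy    = np.array([0.9, 0.7, 0.4, 0.8, 0.2, 0.3, 0.6, 0.5])   # fraction of voxel inside mesh

auc = roc_auc_score(raw_vessel_labels, mesh_occupancy)
print(f"area under ROC curve: {auc:.2f}")
```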

  12. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal of being accurate with regard to the main operational characteristics. We measure buffer occupancy by counting the number of elements in buffers; resource utilization by measuring output bandwidth and counting the number of active processing units; and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...
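
    The flavor of such a discrete event simulation can be conveyed with a toy single-buffer model: arrival and completion events are processed in time order and buffer occupancy is tracked as a time-weighted average. This is a didactic stand-in, not the ATLAS DAQ simulator described in the paper; the rates are arbitrary.

```python
# Toy discrete-event model of one readout buffer: fragment arrivals and
# processing completions are handled in time order; occupancy is the
# time-weighted number of queued elements. Not the ATLAS DAQ simulator.
import heapq, random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, T_END = 900.0, 1000.0, 10.0    # events/s, events/s, seconds

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
queue_len, busy, t_last, occupancy_integral = 0, False, 0.0, 0.0

while events:
    t, kind = heapq.heappop(events)
    if t > T_END:
        break
    occupancy_integral += queue_len * (t - t_last)          # time-weighted occupancy
    t_last = t
    if kind == "arrival":
        queue_len += 1
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
        if not busy:                                        # start processing immediately
            busy = True
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "done"))
    else:                                                   # one element finished processing
        queue_len -= 1
        if queue_len > 0:
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "done"))
        else:
            busy = False

print(f"mean buffer occupancy over {t_last:.1f} s: {occupancy_integral / t_last:.2f} elements")
```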

  14. Large-scale dynamical influence of a gravity wave generated over the Antarctic Peninsula – regional modelling and budget analysis

    Directory of Open Access Journals (Sweden)

    Joel Arnault

    2013-03-01

    Full Text Available The case study of a mountain wave triggered by the Antarctic Peninsula on 6 October 2005, which has already been documented in the literature, is chosen here to quantify the associated gravity wave forcing on the large-scale flow, with a budget analysis of the horizontal wind components and horizontal kinetic energy. In particular, a numerical simulation using the Weather Research and Forecasting (WRF) model is compared to a control simulation with flat orography to separate the contribution of the mountain wave from that of other synoptic processes of non-orographic origin. The so-called differential budgets of horizontal wind components and horizontal kinetic energy (after subtracting the results from the simulation without orography) are then averaged horizontally and vertically in the inner domain of the simulation to quantify the mountain wave's dynamical influence at this scale. This allows for a quantitative analysis of the simulated mountain wave's dynamical influence, including the orographically induced pressure drag, the counterbalancing wave-induced vertical transport of momentum from the flow aloft, the momentum and energy exchanges with the outer flow at the lateral and upper boundaries, the effect of turbulent mixing, the dynamics associated with geostrophic re-adjustment of the inner flow, the deceleration of the inner flow, the secondary generation of an inertia-gravity wave and the so-called baroclinic conversion of energy between potential energy and kinetic energy.
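
    The differential-budget step described above, subtracting the flat-orography control run from the full simulation and then averaging the residual over the inner domain, can be sketched as follows; the arrays are random placeholders standing in for WRF output on a (z, y, x) grid.

```python
# Sketch of a "differential budget": full run minus flat-orography control,
# then horizontal and vertical averaging over the inner domain. Arrays are
# random placeholders, not WRF output.
import numpy as np

rng = np.random.default_rng(0)
ke_tendency_mountain = rng.normal(0.0, 1.0, size=(40, 60, 60))   # W/kg (hypothetical)
ke_tendency_control  = rng.normal(0.0, 1.0, size=(40, 60, 60))

differential = ke_tendency_mountain - ke_tendency_control        # wave contribution only
domain_mean = differential.mean(axis=(1, 2)).mean()              # horizontal, then vertical average

print(f"domain-averaged differential KE tendency: {domain_mean:+.4f} W/kg")
```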

  15. Large-scale Watershed Modeling: NHDPlus Resolution with Achievable Conservation Scenarios in the Western Lake Erie Basin

    Science.gov (United States)

    Yen, H.; White, M. J.; Arnold, J. G.; Keitzer, S. C.; Johnson, M. V. V.; Atwood, J. D.; Daggupati, P.; Herbert, M. E.; Sowa, S. P.; Ludsin, S.; Robertson, D. M.; Srinivasan, R.; Rewa, C. A.

    2016-12-01

    With the substantial improvement of computer technology, large-scale watershed modeling has become practically feasible for conducting detailed investigations of hydrologic, sediment, and nutrient processes. In the Western Lake Erie Basin (WLEB), water quality issues caused by anthropogenic activities are not just interesting research subjects but have implications for human health and welfare, as well as ecological integrity, resistance, and resilience. In this study, the Soil and Water Assessment Tool (SWAT) and the finest-resolution stream network, NHDPlus, were implemented on the WLEB to examine the interactions between achievable conservation scenarios and the corresponding additional projected costs. During the calibration/validation processes, both hard (temporal) and soft (non-temporal) data were used to ensure that the modeling outputs are coherent with actual watershed behavior. The results showed that widespread adoption of conservation practices intended to provide erosion control could deliver average reductions of sediment and nutrients without additional nutrient management changes. On the other hand, responses of nitrate (NO3) and dissolved inorganic phosphorus (DIP) dynamics may differ from responses of total nitrogen and total phosphorus dynamics under the same conservation practice. Model results also implied that fewer financial resources are required to achieve conservation goals if the goal is to achieve reductions in targeted watershed outputs (e.g., NO3 or DIP) rather than aggregated outputs (e.g., total nitrogen or total phosphorus). In addition, it was found that the model's capacity to simulate seasonal effects and responses to changing conservation adoption on a seasonal basis could provide a useful index to help reduce additional cost through temporal targeting of conservation practices. Scientists, engineers, and stakeholders can take advantage of the work performed in this study as essential information while conducting policy ...

  16. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    WORLD FAMOUS ARCHITECTS CHALLENGE TODAY THE EXPOSURE OF CONCRETE IN THEIR ARCHITECTURE. IT IS MY HOPE TO BE ABLE TO COMPLEMENT THESE. I TRY TO DEVELOP NEW AESTHETIC POTENTIALS FOR THE CONCRETE AND CERAMICS, IN LARGE SCALES THAT HAS NOT BEEN SEEN BEFORE IN THE CERAMIC AREA. IT IS EXPECTED TO RESUL...

  17. Business Model for the Security of a Large-Scale PACS, Compliance with ISO/27002:2013 Standard.

    Science.gov (United States)

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto

    2015-08-01

    Data security is a critical issue in an organization; proper information security management (ISM) is an ongoing process that seeks to build and maintain programs, policies, and controls for protecting information. A hospital is one of the most complex organizations, where patient information has not only legal and economic implications but, more importantly, an impact on the patient's health. Imaging studies include medical images, patient identification data, and proprietary information of the study; these data are contained in the storage device of a PACS. This system must preserve the confidentiality, integrity, and availability of patient information. There are techniques such as firewalls, encryption, and data encapsulation that contribute to the protection of information. In addition, the Digital Imaging and Communications in Medicine (DICOM) standard and the requirements of the Health Insurance Portability and Accountability Act (HIPAA) regulations are also used to protect patient clinical data. However, these techniques are not systematically applied to the picture archiving and communication system (PACS) in most cases and are not sufficient to ensure the integrity of the images and associated data during transmission. The ISO/IEC 27001:2013 standard has been developed to improve ISM. Currently, health institutions lack effective ISM processes that enable reliable interorganizational activities. In this paper, we present a business model that satisfies the controls of the ISO/IEC 27002:2013 standard and the security and privacy criteria from DICOM and HIPAA to improve the ISM of a large-scale PACS. The methodology associated with the model can monitor the flow of data in a PACS, facilitating the detection of unauthorized access to images and other abnormal activities.

  18. DEVELOPMENT AND ADAPTATION OF VORTEX REALIZABLE MEASUREMENT SYSTEM FOR BENCHMARK TEST WITH LARGE SCALE MODEL OF NUCLEAR REACTOR

    Directory of Open Access Journals (Sweden)

    S. M. Dmitriev

    2017-01-01

    Full Text Available Recent decades of development of applied calculation methods for nuclear reactor thermal and hydraulic processes have been marked by the rapid growth of High Performance Computing (HPC), which has contributed to the active introduction of Computational Fluid Dynamics (CFD). The use of such programs to justify technical and economic parameters, and especially the safety of nuclear reactors, requires comprehensive verification of the mathematical models and CFD programs. The aim of the work was the development and adaptation of a measuring system having the characteristics necessary for its application in a verification (experimental) test facility. Its main objective is to study the mixing of coolant flows with different physical properties (for example, the concentration of dissolved impurities) inside a large-scale reactor model. The basic method used for registration of the spatial concentration field in the mixing area is the method of spatial conductometry. In the course of the work, a measurement complex, including spatial conductometric sensors, a system of secondary converters and software, was created. Methods of calibration and normalization of measurement results were developed. Averaged concentration fields and nonstationary realizations of the measured local conductivity were obtained during the first experimental series, and spectral and statistical analyses of the realizations were carried out. The acquired data are compared with pretest CFD calculations performed with the ANSYS CFX program. A joint analysis of the obtained results made it possible to identify the main regularities of the process under study and to demonstrate the capability of the designed measuring system to deliver experimental data of the «CFD quality» required for verification. The adaptation of the spatial sensors allows a more extensive program of experimental tests to be conducted, on the basis of which a databank and the necessary generalizations will be created.

  19. Air entrapment in piezo-driven inkjet printheads

    NARCIS (Netherlands)

    de Jong, J.; de Bruin, G.J.; de Bruin, Gerrit; Reinten, Hans; van den Berg, Marc; Wijshoff, Herman; Wijshoff, H.; Versluis, Michel; Lohse, Detlef

    2006-01-01

    The stability of inkjet printers is a major requirement for high-quality printing. However, in piezo-driven inkjet printheads, air entrapment can lead to malfunctioning of the jet formation. The piezoactuator is employed to actively monitor the channel acoustics and to identify distortions at an ...

  20. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, aimed at ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents through joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus for examining the overall system effect, and one using a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  1. Mediterranean Thermohaline Response to Large-Scale Winter Atmospheric Forcing in a High-Resolution Ocean Model Simulation

    Science.gov (United States)

    Cusinato, Eleonora; Zanchettin, Davide; Sannino, Gianmaria; Rubino, Angelo

    2018-04-01

    Large-scale circulation anomalies over the North Atlantic and Euro-Mediterranean regions described by dominant climate modes, such as the North Atlantic Oscillation (NAO), the East Atlantic pattern (EA), the East Atlantic/Western Russian pattern (EAWR) and the Mediterranean Oscillation Index (MOI), significantly affect interannual-to-decadal climatic and hydroclimatic variability in the Euro-Mediterranean region. However, whereas previous studies assessed the impact of such climate modes on air-sea heat and freshwater fluxes in the Mediterranean Sea, the propagation of these atmospheric forcing signals from the surface toward the interior and the abyss of the Mediterranean Sea remains unexplored. Here, we use a high-resolution ocean model simulation covering the 1979-2013 period to investigate spatial patterns and time scales of the Mediterranean thermohaline response to winter forcing from the NAO, EA, EAWR and MOI. We find that these modes significantly imprint on the thermohaline properties in key areas of the Mediterranean Sea through a variety of mechanisms. Typically, density anomalies induced by all modes remain confined to the upper 600 m depth and remain significant for up to 18-24 months. One of the clearest propagation signals refers to the EA in the Adriatic and northern Ionian seas: there, negative EA anomalies are associated with an extensive positive density response, with anomalies that sink to the bottom of the South Adriatic Pit within 2 years. Other strong responses are the thermally driven responses to the EA in the Gulf of Lions and to the EAWR in the Aegean Sea. MOI and EAWR forcing of thermohaline properties in the Eastern Mediterranean sub-basins seems to be determined by reinforcement processes linked to the persistence of these modes in multiannual anomalous states. Our study also suggests that the NAO, EA, EAWR and MOI could critically interfere with internal, deep and abyssal ocean dynamics and variability in the Mediterranean Sea.

  2. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    Energy Technology Data Exchange (ETDEWEB)

    Baldick, Ross; Webber, Michael; King, Carey; Garrison, Jared; Cohen, Stuart; Lee, Duehee

    2012-12-21

    This study's objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate up to and over 20% wind generation by 2030. Our specific interest is to look at the factors that will affect the implementation of both a high level of wind power penetration (>20% of generation) and the installation of large-scale storage.

  3. The "AQUASCOPE" simplified model for predicting 89,90Sr, 131I and 134,137Cs in surface waters after a large-scale radioactive fallout

    NARCIS (Netherlands)

    Smith, J.T.; Belova, N.V.; Bulgakov, A.A.; Comans, R.N.J.; Konoplev, A.V.; Kudelsky, A.V.; Madruga, M.J.; Voitsekhovitch, O.V.; Zibolt, G.

    2005-01-01

    Simplified dynamic models have been developed for predicting the concentrations of radiocesium, radiostrontium, and 131I in surface waters and freshwater fish following a large-scale radioactive fallout. The models are intended to give averaged estimates for radionuclides in water bodies and in fish

  4. Applicability and limitations of large-scale ice-sheet modeling for constraining subglacial geothermal heat flux

    Science.gov (United States)

    Rogozhina, I.; Hagedoorn, J. M.; Martinec, Z.; Fleming, K.; Thomas, M.

    2012-04-01

    In recent years, a number of studies have addressed the problem of constraining subglacial geothermal heat flow (SGHF) patterns within the context of thermodynamic ice-sheet modeling. This study reports on the potential of today's ice-sheet modeling methods and, more importantly, their limitations, with respect to reproducing the thermal states of the present-day large-scale ice sheets. So far, SGHF-related ice-sheet studies have suggested two alternative approaches for obtaining the present-day ice-sheet temperature distribution: (i) paleoclimatic simulations driven by the past surface temperature reconstructions, and (ii) fixed-topography steady-state simulations driven by the present-day climate conditions. Both approaches suffer from a number of shortcomings that are not easily amended. Paleoclimatic simulations account for past climate variations and produce more realistic present-day ice temperature distribution. However, in some areas, our knowledge of past climate forcing is subject to larger uncertainties that exert a significant influence on both the modeled basal temperatures and ice thicknesses, as demonstrated by our sensitivity case study applied to the Greenland Ice Sheet (GIS). In some regions of the GIS, for example southern Greenland, the poorly known climate forcing causes a significant deviation of the modeled ice thickness from the measured values (up to 200 meters) and makes it impossible to fit the measured basal temperature and gradient unless the climate history forcing is improved. Since present-day ice thickness is a product of both climate history and SGHF forcing, uncertainties in either boundary condition integrated over the simulation time will lead to a misfit between the modeled and observed ice sheets. By contrast, the fixed-topography steady-state approach allows one to avoid the above-mentioned transient effects and fit perfectly the observed present-day ice surface topography. However, the temperature distribution resulting from

  5. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40% through dimensioning and optimising the system at the design stage. (orig.)

  6. Expanded Large-Scale Forcing Properties Derived from the Multiscale Data Assimilation System and Its Application to Single-Column Models

    Science.gov (United States)

    Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.

    2013-12-01

    We present an approach to derive large-scale forcing that is used to drive single-column models (SCMs), cloud-resolving models (CRMs) and large-eddy simulations (LES) for evaluating fast-physics parameterizations in climate models. The forcing fields are derived by use of a newly developed multi-scale data assimilation (MS-DA) system. This DA system is developed on top of the NCEP Gridpoint Statistical Interpolation (GSI) system and is implemented in the Weather Research and Forecasting (WRF) model at a cloud-resolving resolution of 2 km. This approach has been applied to the generation of large-scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracies comparable to the existing continuous forcing product and, overall, a better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multiscale forcing, which is not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing has an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependency on the domain size that represents SCM grid sizes. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid size and cloud regime.

  7. Diversity in the representation of large-scale circulation associated with ENSO-Indian summer monsoon teleconnections in CMIP5 models

    Science.gov (United States)

    Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.

    2018-04-01

    Realistic simulation of large-scale circulation patterns associated with the El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of the globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. The CMIP5 models have been classified into three groups based on the correlation between the Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: models in group 1 (G1) overestimated El Niño-ISM teleconnections and group 3 (G3) models underestimated them, whereas these teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Anomalous low-level anticyclonic circulation anomalies over the southeastern TIO and the western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west, to 60° E and 120° E, respectively. This bias in circulation patterns implies dry wind advection from the extratropics/midlatitudes to the Indian subcontinent. In addition, large-scale upper-level convergence together with lower-level divergence over the ISM region corresponding to El Niño is stronger in G1 models than in observations. Thus, an unrealistic shift in low-level circulation centers, corroborated by upper-level circulation changes, is responsible for the overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models, unlike in the observations. Furthermore, large-scale circulation anomalies over the Pacific and the ISM region are misrepresented during El Niño years in G3 models. Too strong upper-level convergence away from the Indian subcontinent and too weak WSNP cyclonic circulation are prominent in most of the G3 models in which ENSO-ISM teleconnections are ...

  8. Biodiversity and Climate Modeling Workshop Series: Identifying gaps and needs for improving large-scale biodiversity models

    Science.gov (United States)

    Weiskopf, S. R.; Myers, B.; Beard, T. D.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.

    2017-12-01

    At the global scale, well-accepted global circulation models and agreed-upon scenarios for future climate from the Intergovernmental Panel on Climate Change (IPCC) are available. In contrast, biodiversity modeling at the global scale lacks analogous tools. While there is great interest in development of similar bodies and efforts for international monitoring and modelling of biodiversity at the global scale, equivalent modelling tools are in their infancy. This lack of global biodiversity models compared to the extensive array of general circulation models provides a unique opportunity to bring together climate, ecosystem, and biodiversity modeling experts to promote development of integrated approaches in modeling global biodiversity. Improved models are needed to understand how we are progressing towards the Aichi Biodiversity Targets, many of which are not on track to meet the 2020 goal, threatening global biodiversity conservation, monitoring, and sustainable use. We brought together biodiversity, climate, and remote sensing experts to try to 1) identify lessons learned from the climate community that can be used to improve global biodiversity models; 2) explore how NASA and other remote sensing products could be better integrated into global biodiversity models and 3) advance global biodiversity modeling, prediction, and forecasting to inform the Aichi Biodiversity Targets, the 2030 Sustainable Development Goals, and the Intergovernmental Platform on Biodiversity and Ecosystem Services Global Assessment of Biodiversity and Ecosystem Services. The 1st In-Person meeting focused on determining a roadmap for effective assessment of biodiversity model projections and forecasts by 2030 while integrating and assimilating remote sensing data and applying lessons learned, when appropriate, from climate modeling. Here, we present the outcomes and lessons learned from our first E-discussion and in-person meeting and discuss the next steps for future meetings.

  9. Modelling of large-scale dense gas-solid bubbling fluidised beds using a novel discrete bubble model

    NARCIS (Netherlands)

    Bokkers, G.A.; Laverman, J.A.; van Sint Annaland, M.; Kuipers, J.A.M.

    2006-01-01

    In order to model the complex hydrodynamic phenomena prevailing in industrial scale gas–solid bubbling fluidised bed reactors and especially the macro-scale emulsion phase circulation patterns induced by bubble–bubble interactions and bubble coalescence, a discrete bubble model (DBM) has been

  10. Modeling and Coordinated Control Strategy of Large Scale Grid-Connected Wind/Photovoltaic/Energy Storage Hybrid Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Lingguo Kong

    2015-01-01

    Full Text Available An AC-linked large-scale wind/photovoltaic (PV)/energy storage (ES) hybrid energy conversion system for grid-connected application was proposed in this paper. The wind energy conversion system (WECS) and PV generation system are the primary power sources of the hybrid system. The ES system, including battery and fuel cell (FC), is used as a backup and a power regulation unit to ensure continuous power supply and to take care of the intermittent nature of wind and photovoltaic resources. A static synchronous compensator (STATCOM) is employed to support the AC-linked bus voltage and improve the low voltage ride through (LVRT) capability of the proposed system. An overall coordinated power control strategy is designed to manage real-power and reactive-power flows among the different energy sources, the storage unit, and the STATCOM in the hybrid system. A simulation case study of the large-scale hybrid energy conversion system, carried out on the Western System Coordinating Council (WSCC) 3-machine 9-bus test system, has been developed using the DIgSILENT/Power Factory software platform. The hybrid system performance under different scenarios has been verified by simulation studies using practical load demand profiles and real weather data.
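
    A coordinated control strategy of the kind described must, at minimum, balance renewable output, load and storage power within equipment ratings; the rule below is a deliberately naive illustration with made-up numbers, not the controller developed in the paper.

```python
# Naive power-balance rule: the storage unit absorbs surplus wind+PV power and
# covers deficits, subject to its rating; the residual is exchanged over the
# AC link. Numbers and the rule itself are illustrative assumptions only.
def dispatch(p_wind, p_pv, p_load, p_storage_max=2.0):
    """Return (storage power, residual grid exchange) in MW; +ve storage = charging."""
    surplus = p_wind + p_pv - p_load
    p_storage = max(-p_storage_max, min(p_storage_max, surplus))
    p_grid = surplus - p_storage          # what the AC link still has to exchange
    return p_storage, p_grid

for wind, pv, load in [(5.0, 2.0, 6.0), (1.0, 0.5, 6.0), (8.0, 3.0, 6.0)]:
    storage, grid = dispatch(wind, pv, load)
    print(f"wind {wind} + PV {pv} vs load {load}: storage {storage:+.1f} MW, grid {grid:+.1f} MW")
```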

  11. Alternative projections of the impacts of private investment on southern forests: a comparison of two large-scale forest sector models of the United States.

    Science.gov (United States)

    Ralph Alig; Darius Adams; John Mills; Richard Haynes; Peter Ince; Robert. Moulton

    2001-01-01

    The TAMM/NAPAP/ATLAS/AREACHANGE (TNAA) system and the Forest and Agriculture Sector Optimization Model (FASOM) are two large-scale forestry sector modeling systems that have been employed to analyze the U.S. forest resource situation. The TNAA system of static, spatial equilibrium models has been applied to make 50-year projections of the U.S. forest sector for more...

  12. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination but, unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  13. Investigation of Prediction Accuracy, Sensitivity, and Parameter Stability of Large-Scale Propagation Path Loss Models for 5G Wireless Communications

    DEFF Research Database (Denmark)

    Sun, Shu; Rappaport, Theodore S.; Thomas, Timothy

    2016-01-01

    This paper compares three candidate large-scale propagation path loss models for use over the entire microwave and millimeter-wave (mmWave) radio spectrum: the alpha–beta–gamma (ABG) model, the close-in (CI) free-space reference distance model, and the CI model with a frequency-weighted path loss...... the accuracy and sensitivity of these models using measured data from 30 propagation measurement data sets from 2 to 73 GHz over distances ranging from 4 to 1238 m. A series of sensitivity analyses of the three models shows that the physically based two-parameter CI model and three-parameter CIF model offer...
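
    For reference, the two closed-form models compared in the paper can be written out directly (shadow-fading terms omitted); the path loss exponent and the alpha/beta/gamma values used below are example numbers, not the fitted parameters reported from the 30 measurement data sets.

```python
# Close-in (CI) free-space reference distance model and alpha-beta-gamma (ABG)
# model, with the lognormal shadowing terms omitted. Parameter values are
# examples only, not the fitted values from the measurement campaigns.
import math

def fspl_1m_db(freq_ghz):
    """Free-space path loss at the 1 m reference distance."""
    return 20.0 * math.log10(4.0 * math.pi * freq_ghz * 1e9 / 3e8)

def pl_ci_db(freq_ghz, dist_m, n=2.0):
    """CI model: FSPL(f, 1 m) plus 10*n*log10(d/1 m)."""
    return fspl_1m_db(freq_ghz) + 10.0 * n * math.log10(dist_m)

def pl_abg_db(freq_ghz, dist_m, alpha=3.4, beta=19.2, gamma=2.3):
    """ABG model: distance term, offset, and frequency term (d in m, f in GHz)."""
    return 10.0 * alpha * math.log10(dist_m) + beta + 10.0 * gamma * math.log10(freq_ghz)

print(f"CI  @ 28 GHz, 100 m: {pl_ci_db(28.0, 100.0, n=3.0):.1f} dB")
print(f"ABG @ 28 GHz, 100 m: {pl_abg_db(28.0, 100.0):.1f} dB")
```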

  14. Beyond single syllables: large-scale modeling of reading aloud with the Connectionist Dual Process (CDP++) model.

    Science.gov (United States)

    Perry, Conrad; Ziegler, Johannes C; Zorzi, Marco

    2010-09-01

    Most words in English have more than one syllable, yet the most influential computational models of reading aloud are restricted to processing monosyllabic words. Here, we present CDP++, a new version of the Connectionist Dual Process model (Perry, Ziegler, & Zorzi, 2007). CDP++ is able to simulate the reading aloud of mono- and disyllabic words and nonwords, and learns to assign stress in exactly the same way as it learns to associate graphemes with phonemes. CDP++ is able to simulate the monosyllabic benchmark effects its predecessor could, and therefore shows full backwards compatibility. CDP++ also accounts for a number of novel effects specific to disyllabic words, including the effects of stress regularity and syllable number. In terms of database performance, CDP++ accounts for over 49% of the reaction time variance on items selected from the English Lexicon Project, a very large database of several thousand words. With its lexicon of over 32,000 words, CDP++ is therefore a notable example of the successful scaling-up of a connectionist model to a size that more realistically approximates the human lexical system. Copyright © 2010 Elsevier Inc. All rights reserved.

  15. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCMs), cloud-resolving models (CRMs) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and the SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has a relatively large sensitivity to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study from March 3, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite-observed clouds and the intuitive structure of a mid-latitude cyclone. We also evaluate Q1 and Q2 in analysis/reanalysis products, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from the large-scale forcing data and observations specified, we compare SCM results and observations and find that models have large biases in cloud properties which could not be fully explained by the uncertainty from the large-scale forcing ...

  16. Data-Driven Diffusion Of Innovations: Successes And Challenges In 3 Large-Scale Innovative Delivery Models

    Science.gov (United States)

    Dorr, David A.; Cohen, Deborah J.; Adler-Milstein, Julia

    2018-01-01

    Failed diffusion of innovations may be linked to an inability to use and apply data, information, and knowledge to change perceptions of current practice and motivate change. Using qualitative and quantitative data from three large-scale health care delivery innovations—accountable care organizations, advanced primary care practice, and EvidenceNOW—we assessed where data-driven innovation is occurring and where challenges lie. We found that implementation of some technological components of innovation (for example, electronic health records) has occurred among health care organizations, but core functions needed to use data to drive innovation are lacking. Deficits include the inability to extract and aggregate data from the records; gaps in sharing data; and challenges in adopting advanced data functions, particularly those related to timely reporting of performance data. The unexpectedly high costs and burden incurred during implementation of the innovations have limited organizations’ ability to address these and other deficits. Solutions that could help speed progress in data-driven innovation include facilitating peer-to-peer technical assistance, providing tailored feedback reports to providers from data aggregators, and using practice facilitators skilled in using data technology for quality improvement to help practices transform. Policy efforts that promote these solutions may enable more rapid uptake of and successful participation in innovative delivery system reforms. PMID:29401031

  17. 3D modelling of VLF radio wave propagation in terrestrial waveguide allowing for localized large-scale ionosphere perturbation

    Science.gov (United States)

    Soloviev, O.

    2003-03-01

    The problem of radio wave propagation allowing for a 3D localized lower-ionosphere irregularity arises from the need for theoretical interpretation of VLF remote sensing data. Various processes in the Earth's crust and in space (earthquakes, magnetic storms, sporadic E-layers, lightning-induced electron precipitation, rocket launches, artificial ionosphere heating, nuclear explosions, etc.) may cause ionospheric disturbances of different power and size. This paper presents a further development of the numerical-analytical method for solving the 3D problem. We consider the vector problem of the field of a VLF vertical electric dipole in a plane Earth-ionosphere waveguide with a localized anisotropic ionosphere irregularity. The possibility of lowering (or elevating) a local region of the upper waveguide wall is taken into account. The field components on the boundary surfaces obey the Leontovich impedance conditions. The problem is reduced to a system of 2D integral equations taking into account the depolarization of the field scattered by the irregularity. Using asymptotic (kr >> 1) integration along the direction perpendicular to the propagation path, we transform this system to a system of 1D integral equations. The system is solved in the diagonal approximation, combining direct inversion of the Volterra integral operator and subsequent iterations. The proposed method is useful for the study of both small-scale and large-scale irregularities. We obtained estimates of the TE field components that originate entirely from field scattering by a 3D irregularity.

  18. Large-Scale Mass Spectrometry Imaging Investigation of Consequences of Cortical Spreading Depression in a Transgenic Mouse Model of Migraine

    Science.gov (United States)

    Carreira, Ricardo J.; Shyti, Reinald; Balluff, Benjamin; Abdelmoula, Walid M.; van Heiningen, Sandra H.; van Zeijl, Rene J.; Dijkstra, Jouke; Ferrari, Michel D.; Tolner, Else A.; McDonnell, Liam A.; van den Maagdenberg, Arn M. J. M.

    2015-06-01

    Cortical spreading depression (CSD) is the electrophysiological correlate of migraine aura. Transgenic mice carrying the R192Q missense mutation in the Cacna1a gene, which in patients causes familial hemiplegic migraine type 1 (FHM1), exhibit increased propensity to CSD. Herein, mass spectrometry imaging (MSI) was applied for the first time to an animal cohort of transgenic and wild type mice to study the biomolecular changes following CSD in the brain. Ninety-six coronal brain sections from 32 mice were analyzed by MALDI-MSI. All MSI datasets were registered to the Allen Brain Atlas reference atlas of the mouse brain so that the molecular signatures of distinct brain regions could be compared. A number of metabolites and peptides showed substantial changes in the brain associated with CSD. Among those, different mass spectral features showed significant ( t-test, P migraine pathophysiology. The results also demonstrate the utility of aligning MSI datasets to a common reference atlas for large-scale MSI investigations.

  19. Applying the Halo Model to Large Scale Structure Measurements of the Luminous Red Galaxies: SDSS DR7 Preliminary Results

    International Nuclear Information System (INIS)

    Reid, Beth A.

    2009-01-01

    The non-trivial relationship between observations of galaxy positions in redshift space and the underlying matter field complicates our ability to determine the linear theory power spectrum and extract cosmological information from galaxy surveys. The Sloan Digital Sky Survey (SDSS) Luminous Red Galaxy (LRG) catalog has the potential to place powerful constraints on cosmological parameters. LRGs are bright, highly biased tracers of large-scale structure. However, because they are highly biased, the non-linear contribution of satellite galaxies to the galaxy power spectrum is large and Fingers-of-God are significant. We propose a new approach to recovering the matter field from galaxy observations. Our approach is to use halos rather than galaxies to trace the underlying mass distribution. We identify Fingers-of-God (FOGs) and replace each FOG with a single halo object. This removes the nonlinear contribution of satellite galaxies, the one-halo term. We test our method on a large set of high-fidelity mock SDSS LRG catalogs and present consistency checks between the mock and LRG DR7 reconstructed halo density fields. We present preliminary cosmological constraints from the LRG DR7 reconstructed halo density field power spectrum. Finally, we summarize the potential gains in cosmological parameter constraints using our approach and the largest remaining sources of systematic errors.
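
    The reconstruction idea, collapsing each identified Finger-of-God into a single halo object, can be caricatured with a simple grouping rule; the clustering criterion and linking distance below are placeholders for the actual FOG-identification algorithm, and the galaxy positions are synthetic.

```python
# Toy version of the reconstruction step: collapse each close group of
# galaxies (a candidate Finger-of-God) into one "halo" at the group centroid.
# Grouping rule and linking length are simplistic placeholders.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
# toy redshift-space positions (Mpc/h): two tight groups elongated along z, plus field galaxies
galaxies = np.vstack([
    rng.normal([100, 100, 100], [0.3, 0.3, 3.0], size=(5, 3)),
    rng.normal([150, 120, 80],  [0.3, 0.3, 3.0], size=(4, 3)),
    rng.uniform(0, 200, size=(6, 3)),
])

clusters = fcluster(linkage(galaxies, method="single"), t=8.0, criterion="distance")
halos = np.array([galaxies[clusters == c].mean(axis=0) for c in np.unique(clusters)])
print(f"{len(galaxies)} galaxies -> {len(halos)} halo objects")
```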

  20. Sparse network modeling and metscape-based visualization methods for the analysis of large-scale metabolomics data.

    Science.gov (United States)

    Basu, Sumanta; Duren, William; Evans, Charles R; Burant, Charles F; Michailidis, George; Karnovsky, Alla

    2017-05-15

    Recent technological advances in mass spectrometry, the development of richer mass spectral libraries and data processing tools have enabled large-scale metabolic profiling. Biological interpretation of metabolomics studies heavily relies on knowledge-based tools that contain information about metabolic pathways. Incomplete coverage of different areas of metabolism and lack of information about non-canonical connections between metabolites limit the scope of applications of such tools. Furthermore, the presence of a large number of unknown features, which cannot be readily identified but nonetheless can represent bona fide compounds, also considerably complicates biological interpretation of the data. Leveraging recent developments in the statistical analysis of high-dimensional data, we developed a new Debiased Sparse Partial Correlation algorithm (DSPC) for estimating partial correlation networks and implemented it as a Java-based CorrelationCalculator program. We also introduce a new version of our previously developed tool Metscape that enables building and visualization of correlation networks. We demonstrate the utility of these tools by constructing biologically relevant networks and in aiding identification of unknown compounds. Availability: http://metscape.med.umich.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
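
    The general construction behind such partial-correlation networks is to estimate a sparse precision (inverse covariance) matrix and convert it to partial correlations, drawing an edge wherever an off-diagonal entry is non-zero. The sketch below uses graphical lasso as a stand-in for that step; it is not the DSPC algorithm implemented in CorrelationCalculator, and the data are synthetic.

```python
# Partial-correlation network from a sparse precision matrix, using graphical
# lasso as a stand-in for the debiased sparse partial correlation (DSPC) step.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_metabolites = 200, 6
data = rng.normal(size=(n_samples, n_metabolites))
data[:, 1] += 0.8 * data[:, 0]                      # induce one real association

model = GraphicalLasso(alpha=0.1).fit(data)
precision = model.precision_
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)          # rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)
np.fill_diagonal(partial_corr, 1.0)

edges = [(i, j) for i in range(n_metabolites) for j in range(i + 1, n_metabolites)
         if abs(partial_corr[i, j]) > 1e-6]
print("edges in the partial-correlation network:", edges)
```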

  1. Development of a Shipboard Remote Control and Telemetry Experimental System for Large-Scale Model's Motions and Loads Measurement in Realistic Sea Waves.

    Science.gov (United States)

    Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe

    2017-10-29

    Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in a laboratory tank environment, where the wave conditions differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motion and load measurement is first reviewed and reported. Then, a novel large-scale model measurement technique is developed based on the laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, an advanced remote control and telemetry experimental system was developed in-house to allow for the implementation of large-scale model seakeeping measurements at sea. The experimental system includes a series of sensors, e.g., a Global Position System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested by field experiments in coastal seas, which indicates that the proposed large-scale model testing scheme is capable and feasible. Meaningful data including ocean environment parameters, ship navigation state, motions and loads were obtained through the sea trial campaign.

  2. 76 FR 66964 - Certain Inkjet Ink Cartridges With Printheads and Components Thereof; Notice of the Commission's...

    Science.gov (United States)

    2011-10-28

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-723] Certain Inkjet Ink Cartridges With... exclusion order prohibiting importation of infringing inkjet ink cartridges with printheads and components... inkjet ink cartridges with printheads and components thereof by reason of infringement of various claims...

  3. A continental-scale hydrology and water quality model for Europe: Calibration and uncertainty of a high-resolution large-scale SWAT model

    Science.gov (United States)

    Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.

    2015-05-01

    A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, have come under severe degradation, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economy of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability, calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.

  4. An industrial perspective on bioreactor scale-down: what we can learn from combined large-scale bioprocess and model fluid studies.

    Science.gov (United States)

    Noorman, Henk

    2011-08-01

    For industrial bioreactor design, operation, control and optimization, the scale-down approach is often advocated to efficiently generate data on a small scale, and effectively apply suggested improvements to the industrial scale. In all cases it is important to ensure that the scale-down conditions are representative of the real large-scale bioprocess. Progress is hampered by limited detailed and local information from large-scale bioprocesses. Complementary to real fermentation studies, physical aspects of model fluids such as air-water in large bioreactors provide useful information with limited effort and cost. Still, in industrial practice, investments of time, capital and resources often prohibit systematic work, although, in the end, savings obtained in this way are trivial compared to the expenses that result from real process disturbances, batch failures, and non-flyers with loss of business opportunity. Here we try to highlight what can be learned from real large-scale bioprocesses in combination with model fluid studies, and to provide suitable computation tools to overcome data restrictions. Focus is on a specific well-documented case for a 30-m³ bioreactor. Areas for further research from an industrial perspective are also indicated. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing resolution of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments.

  6. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach are demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  7. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  8. Computation of large scale currents in the Arabian Sea during winter using a semi-diagnostic model

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    A 3-dimensional, semi-diagnostic model with 331 levels in the vertical has been used for the computation of climatic circulation in the western tropical Indian Ocean. The model is driven with seasonal mean data on wind stress, temperature...

  9. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    Science.gov (United States)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large scale monitoring of water resources. Besides these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts are therefore increasing significantly. A novel thematic science question to be investigated is therefore whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational efforts, this model enables early warnings for large areas. Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model

  10. Mathematical modelling and optimization of a large-scale combined cooling, heat, and power system that incorporates unit changeover and time-of-use electricity price

    International Nuclear Information System (INIS)

    Zhu, Qiannan; Luo, Xianglong; Zhang, Bingjian; Chen, Ying

    2017-01-01

    Highlights: • We propose a novel superstructure for the design and optimization of LSCCHP. • A multi-objective multi-period MINLP model is formulated. • The unit start-up cost and time-of-use electricity prices are involved. • Unit size discretization strategy is proposed to linearize the original MINLP model. • A case study is elaborated to demonstrate the effectiveness of the proposed method. - Abstract: Building energy systems, particularly large public ones, are major energy consumers and pollutant emission contributors. In this study, a superstructure of large-scale combined cooling, heat, and power system is constructed. The off-design unit, economic cost, and CO2 emission models are also formulated. Moreover, a multi-objective mixed integer nonlinear programming model is formulated for the simultaneous system synthesis, technology selection, unit sizing, and operation optimization of large-scale combined cooling, heat, and power system. Time-of-use electricity price and unit changeover cost are incorporated into the problem model. The economic objective is to minimize the total annual cost, which comprises the operation and investment costs of large-scale combined cooling, heat, and power system. The environmental objective is to minimize the annual global CO2 emission of large-scale combined cooling, heat, and power system. The augmented ε–constraint method is applied to achieve the Pareto frontier of the design configuration, thereby reflecting the set of solutions that represent optimal trade-offs between the economic and environmental objectives. Sensitivity analysis is conducted to reflect the impact of natural gas price on the combined cooling, heat, and power system. The synthesis and design of combined cooling, heat, and power system for an airport in China is studied to test the proposed synthesis and design methodology. The Pareto curve of multi-objective optimization shows that the total annual cost varies from 102.53 to 94.59 M
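
    The epsilon-constraint idea behind such a Pareto frontier can be illustrated on a toy two-source dispatch problem: minimize cost while sweeping an upper bound on CO2 emissions. The sketch below uses the plain (not augmented) epsilon-constraint method and made-up coefficients; it is not the paper's MINLP model.

```python
import numpy as np
from scipy.optimize import linprog

demand = 100.0                    # MWh of energy to supply
cost = np.array([55.0, 80.0])     # $/MWh for [on-site CCHP unit, grid electricity]
co2 = np.array([0.40, 0.70])      # tCO2/MWh for the same two sources

pareto = []
for eps in np.linspace(co2.min() * demand, co2.max() * demand, 11):
    res = linprog(
        c=cost,                               # minimise total cost
        A_ub=[co2], b_ub=[eps],               # emission constraint: co2 . x <= eps
        A_eq=[[1.0, 1.0]], b_eq=[demand],     # meet the demand exactly
        bounds=[(0, None), (0, None)],
        method="highs",
    )
    if res.success:
        pareto.append((float(res.fun), float(co2 @ res.x)))

for total_cost, total_co2 in pareto:
    print(f"cost = {total_cost:8.1f} $, CO2 = {total_co2:6.1f} t")
```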

  11. The Large-scale Coronal Structure of the 2017 August 21 Great American Eclipse: An Assessment of Solar Surface Flux Transport Model Enabled Predictions and Observations

    Science.gov (United States)

    Nandy, Dibyendu; Bhowmik, Prantika; Yeates, Anthony R.; Panda, Suman; Tarafder, Rajashik; Dash, Soumyaranjan

    2018-01-01

    On 2017 August 21, a total solar eclipse swept across the contiguous United States, providing excellent opportunities for diagnostics of the Sun’s corona. The Sun’s coronal structure is notoriously difficult to observe except during solar eclipses; thus, theoretical models must be relied upon for inferring the underlying magnetic structure of the Sun’s outer atmosphere. These models are necessary for understanding the role of magnetic fields in the heating of the corona to a million degrees and the generation of severe space weather. Here we present a methodology for predicting the structure of the coronal field based on model forward runs of a solar surface flux transport model, whose predicted surface field is utilized to extrapolate future coronal magnetic field structures. This prescription was applied to the 2017 August 21 solar eclipse. A post-eclipse analysis shows good agreement between model simulated and observed coronal structures and their locations on the limb. We demonstrate that slow changes in the Sun’s surface magnetic field distribution driven by long-term flux emergence and its evolution governs large-scale coronal structures with a (plausibly cycle-phase dependent) dynamical memory timescale on the order of a few solar rotations, opening up the possibility for large-scale, global corona predictions at least a month in advance.

  12. Large-Scale Water Resources Management within the Framework of GLOWA-Danube - Part A: The Groundwater Model

    Science.gov (United States)

    Barthel, R.; Rojanschi, V.; Wolf, J.; Braun, J.

    2003-04-01

    The interdisciplinary research co-operation Glowa-Danube aims at the development of innovative techniques, scenarios and strategies to investigate the impacts of Global Change on the hydrological cycle within the catchment area of the Upper Danube Basin (Gauge Passau). Both the influence of natural changes in the ecosystem, such as climate change, and changes in human behavior, such as changes in land use or water consumption, are considered. A globally applicable decision support tool "DANUBIA" that comprises 15 individual disciplinary models will be developed. The models are connected with each other via customized interfaces that facilitate network-based parallel calculations. The strictly object-oriented DANUBIA architecture was developed using the graphical notation tool UML (Unified Modeling Language) and has been implemented in Java code. The Institute of Hydraulic Engineering of the Universitaet Stuttgart contributes two models to DANUBIA: A groundwater flow and transport model and a water supply model. The latter is dealt with in a second contribution to this conference. This paper focuses on the groundwater model. The catchment basin of the Upper Danube covers an area of approximately 77,000 km². The elevation difference from the highest peaks of the Alps to the lowest flatlands in the Danube valley is more than 3,000 m. In addition to the Alps, several lower mountain ranges such as the Black Forest, the Swabian and Franconian Alb and the Bavarian Forest are located respectively in the Northeast, North and Northwest of the basin. The climatic conditions, geomorphology, geology and land use show a wide range of different characteristics. The size and heterogeneity of the area make it extremely difficult to represent the natural conditions in a numerical model. Both data availability and accessibility add to the difficulties that one encounters in the approach to simulate groundwater flow and contaminant transport in this area. The groundwater flow model of

  13. Automatic Texture Mapping with an Omnidirectional Camera Mounted on a Vehicle Towards Large Scale 3D City Models

    Science.gov (United States)

    Deng, F.; Li, D.; Yan, L.; Fan, H.

    2012-07-01

    Today, high-resolution panoramic images of competitive quality are widely used for rendering in some commercial systems. However, potential applications such as mapping, augmented reality and modelling, which need accurate orientation information, are still poorly studied. Urban models can be quickly obtained from aerial images or LIDAR, however with limited quality or efficiency due to low-resolution textures and a manual texture-mapping workflow. We combine an Extended Kalman Filter (EKF) with the traditional Structure from Motion (SFM) method, without any prior information, based on a general camera model which can handle various kinds of omnidirectional and other single-perspective image sequences, even with unconnected or weakly connected frames. The orientation results are then applied to map the textures from panoramas onto the existing building models obtained from aerial photogrammetry. This largely improves the quality of the models and the efficiency of the modelling procedure.

  14. Comparing large-scale hydrological model predictions with observed streamflow in the Pacific Northwest: effects of climate and groundwater

    Science.gov (United States)

    Mohammad Safeeq; Guillaume S. Mauger; Gordon E. Grant; Ivan Arismendi; Alan F. Hamlet; Se-Yeun Lee

    2014-01-01

    Assessing uncertainties in hydrologic models can improve accuracy in predicting future streamflow. Here, simulated streamflows using the Variable Infiltration Capacity (VIC) model at coarse (1/16°) and fine (1/120°) spatial resolutions were evaluated against observed streamflows from 217 watersheds. In...

  15. A large scale GIS geodatabase of soil parameters supporting the modeling of conservation practice alternatives in the United States

    Science.gov (United States)

    Water quality modeling requires across-scale support of combined digital soil elements and simulation parameters. This paper presents the unprecedented development of a large spatial scale (1:250,000) ArcGIS geodatabase coverage designed as a functional repository of soil-parameters for modeling an...

  16. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  17. Modeling kinetics of a large-scale fed-batch CHO cell culture by Markov chain Monte Carlo method.

    Science.gov (United States)

    Xing, Zizhuo; Bishop, Nikki; Leister, Kirk; Li, Zheng Jian

    2010-01-01

    Markov chain Monte Carlo (MCMC) method was applied to model kinetics of a fed-batch Chinese hamster ovary cell culture process in 5,000-L bioreactors. The kinetic model consists of six differential equations, which describe dynamics of viable cell density and concentrations of glucose, glutamine, ammonia, lactate, and the antibody fusion protein B1 (B1). The kinetic model has 18 parameters, six of which were calculated from the cell culture data, whereas the other 12 were estimated from a training data set that comprised seven cell culture runs using an MCMC method. The model was confirmed in two validation data sets that represented a perturbation of the cell culture condition. The agreement between the predicted and measured values of both validation data sets may indicate high reliability of the model estimates. The kinetic model uniquely incorporated the ammonia removal and the exponential function of B1 protein concentration. The model indicated that ammonia and lactate play critical roles in cell growth and that low concentrations of glucose (0.17 mM) and glutamine (0.09 mM) in the cell culture medium may help reduce ammonia and lactate production. The model demonstrated that 83% of the glucose consumed was used for cell maintenance during the late phase of the cell cultures, whereas the maintenance coefficient for glutamine was negligible. Finally, the kinetic model suggests that it is critical for B1 production to sustain a high number of viable cells. The MCMC methodology may be a useful tool for modeling kinetics of a fed-batch mammalian cell culture process.
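
    A minimal Metropolis-Hastings sketch of the general idea, fitting the parameters of a single logistic growth equation to noisy cell-density data, is shown below. It stands in for the paper's six-equation kinetic model; the data, priors and step sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(7)
t_obs = np.linspace(0, 10, 15)                       # culture time [days]
true_mu, true_xmax = 0.6, 8.0                        # growth rate, carrying capacity

def growth(t, x, mu, xmax):
    return mu * x * (1 - x / xmax)                   # toy viable-cell-density ODE

x_true = solve_ivp(growth, (0, 10), [0.3], t_eval=t_obs, args=(true_mu, true_xmax)).y[0]
y_obs = x_true + rng.normal(0, 0.2, size=x_true.size)   # noisy observations

def log_post(theta, sigma=0.2):
    mu, xmax = theta
    if mu <= 0 or xmax <= 0:
        return -np.inf                               # flat prior on positive values
    x = solve_ivp(growth, (0, 10), [0.3], t_eval=t_obs, args=(mu, xmax)).y[0]
    return -0.5 * np.sum((y_obs - x) ** 2) / sigma ** 2

theta = np.array([1.0, 5.0])
lp = log_post(theta)
chain = []
for _ in range(3000):                                # Metropolis-Hastings sampling
    prop = theta + rng.normal(0, [0.05, 0.2])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

burned = np.array(chain[1000:])                      # discard burn-in
print("posterior means (mu, xmax):", burned.mean(axis=0))
```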

  18. Study of depression influencing factors with zero-inflated regression models in a large-scale population survey.

    Science.gov (United States)

    Xu, Tao; Zhu, Guangjin; Han, Shaomei

    2017-11-28

    The number of depression symptoms can be treated as count data in order to obtain complete and accurate findings in studies of depression. This study aims to compare the goodness of fit of four count-outcome models in a large survey sample to identify the optimum model for a risk factor study of the number of depression symptoms. 15 820 subjects, aged 10 to 80 years, who were not suffering from serious chronic diseases and had not run a high fever in the past 15 days, agreed to take part in this survey; 15 462 subjects completed all the survey scales. The number of depression symptoms was the sum of the 'positive' responses to seven depression questions. Four count-outcome models and a logistic model were constructed to identify the optimum model of the number of depression symptoms. The mean number of depression symptoms was 1.37±1.55. The over-dispersion test statistic O was 308.011. The alpha dispersion parameter was 0.475 (95% CI 0.443 to 0.508), which was significantly larger than 0. The Vuong test statistic Z was 6.782 and the corresponding P value was significant, indicating too many zero counts to be accounted for by a traditional negative binomial distribution. The zero-inflated negative binomial (ZINB) model had the largest log likelihood and the smallest AIC and BIC, suggesting the best goodness of fit. In addition, predictive probabilities for many counts in the ZINB model fitted the observed counts best. All fitting test statistics and the predictive probability curve produced the same finding: the ZINB model was the best model for fitting the number of depression symptoms, assessing both the presence or absence of depression and its severity. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
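
    The zero-inflated negative binomial fit and AIC comparison can be sketched as follows for an intercept-only model on simulated counts. The parameterisation and data are illustrative and not the survey's actual covariate model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

rng = np.random.default_rng(3)
n = 5000
is_zero = rng.uniform(size=n) < 0.35                   # structural zeros (no symptoms)
counts = np.where(is_zero, 0, rng.negative_binomial(2, 0.5, size=n))  # count outcome

def nb_pmf(k, mean, alpha):
    r = 1.0 / alpha                                    # dispersion -> NB "size"
    p = r / (r + mean)
    return nbinom.pmf(k, r, p)

def negloglik_zinb(params):
    logit_pi, log_mean, log_alpha = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    pmf = (1 - pi) * nb_pmf(counts, np.exp(log_mean), np.exp(log_alpha))
    pmf = np.where(counts == 0, pi + pmf, pmf)         # extra probability mass at zero
    return -np.sum(np.log(pmf + 1e-12))

def negloglik_nb(params):
    log_mean, log_alpha = params
    return -np.sum(np.log(nb_pmf(counts, np.exp(log_mean), np.exp(log_alpha)) + 1e-12))

fit_zinb = minimize(negloglik_zinb, x0=[0.0, 0.5, 0.0], method="Nelder-Mead")
fit_nb = minimize(negloglik_nb, x0=[0.5, 0.0], method="Nelder-Mead")

def aic(nll, k):
    return 2 * nll + 2 * k

print("AIC NB  :", round(aic(fit_nb.fun, 2), 1))
print("AIC ZINB:", round(aic(fit_zinb.fun, 3), 1))     # lower AIC -> better fit
```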

  19. The Oregon Model of Behavior Family Therapy: From Intervention Design to Promoting Large-Scale System Change.

    Science.gov (United States)

    Dishion, Thomas; Forgatch, Marion; Chamberlain, Patricia; Pelham, William E

    2016-11-01

    This paper reviews the evolution of the Oregon model of family behavior therapy over the past four decades. Inspired by basic research on family interaction and innovation in behavior change theory, a set of intervention strategies was developed that proved effective for reducing multiple forms of problem behavior in children (e.g., Patterson, Chamberlain, & Reid, 1982). Over the ensuing decades, the behavior family therapy principles were applied and adapted to promote children's adjustment, addressing family formation and adaptation (Family Check-Up model), family disruption and maladaptation (Parent Management Training-Oregon model), and family attenuation and dissolution (Treatment Foster Care-Oregon model). We provide a brief overview of each intervention model and summarize randomized trials of intervention effectiveness. We review evidence on the viability of effective implementation, as well as barriers and solutions to adopting these evidence-based practices. We conclude by proposing an integrated family support system for the three models applied to the goal of reducing the prevalence of severe problem behavior, addiction, and mental problems for children and families, as well as reducing the need for costly and largely ineffective residential placements. Copyright © 2016. Published by Elsevier Ltd.

  20. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    Science.gov (United States)

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  1. The mechanism of saccade motor pattern generation investigated by a large-scale spiking neuron model of the superior colliculus.

    Directory of Open Access Journals (Sweden)

    Jan Morén

    The subcortical saccade-generating system consists of the retina, superior colliculus, cerebellum and brainstem motoneuron areas. The superior colliculus is the site of sensory-motor convergence within this basic visuomotor loop preserved throughout the vertebrates. While the system has been extensively studied, there are still several outstanding questions regarding how and where the saccade eye movement profile is generated and the contribution of respective parts within this system. Here we construct a spiking neuron model of the whole intermediate layer of the superior colliculus based on the latest anatomy and physiology data. The model consists of conductance-based spiking neurons, comprising quasi-visual, burst, buildup, local inhibitory, and deep layer inhibitory populations. The visual input is provided by the superficial superior colliculus and the burst neurons send the output to the brainstem oculomotor nuclei. Gating input from the basal ganglia and an integral feedback from the reticular formation are also included. We implement the model in the NEST simulator and show that the activity profile of bursting neurons can be reproduced by a combination of NMDA-type and cholinergic excitatory synaptic inputs and integrative inhibitory feedback. The model shows that the spreading neural activity observed in vivo can keep track of the collicular output over time and reset the system at the end of a saccade through activation of deep layer inhibitory neurons. We identify the model parameters according to neural recording data and show that the resulting model recreates the saccade size-velocity curves known as the saccadic main sequence in behavioral studies. The present model is consistent with theories that the superior colliculus takes a principal role in generating the temporal profiles of saccadic eye movements, rather than just specifying the end points of eye movements.
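
    A pared-down caricature of the burst mechanism (slow NMDA-like excitation terminated by accumulating inhibitory feedback) is sketched below in plain NumPy. It is not the paper's NEST implementation, and all constants are arbitrary.

```python
import numpy as np

dt, T = 0.1, 300.0                         # time step and duration [ms]
E_L, E_ex, E_in = -70.0, 0.0, -85.0        # reversal potentials [mV]
g_L, C = 10.0, 200.0                       # leak conductance [nS], capacitance [pF]
V_th, V_reset = -50.0, -60.0               # spike threshold and reset [mV]
tau_nmda, tau_inh = 80.0, 10.0             # synaptic decay times [ms]

V, g_nmda, g_inh = E_L, 0.0, 0.0
spikes = []
for step in range(int(T / dt)):
    t = step * dt
    if 50.0 <= t < 60.0:                   # brief "visual" drive opens the NMDA conductance
        g_nmda += 2.0 * dt
    g_nmda *= np.exp(-dt / tau_nmda)       # slow NMDA decay sustains the burst
    g_inh *= np.exp(-dt / tau_inh)
    # conductance-based leaky integrate-and-fire dynamics
    dV = (-g_L * (V - E_L) - g_nmda * (V - E_ex) - g_inh * (V - E_in)) / C
    V += dt * dV
    if V >= V_th:
        spikes.append(t)
        V = V_reset
        g_inh += 0.8                       # spikes recruit inhibitory feedback that ends the burst

if spikes:
    print(f"{len(spikes)} spikes between {spikes[0]:.1f} and {spikes[-1]:.1f} ms")
else:
    print("no spikes")
```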

  2. Attributing the behavior of low-level clouds in large-scale models to subgrid-scale parameterizations

    Science.gov (United States)

    Neggers, R. A. J.

    2015-12-01

    This study explores ways of establishing the characteristic behavior of boundary layer schemes in representing subtropical marine low-level clouds in climate models. To this purpose, parameterization schemes are studied in both isolated and interactive mode with the larger-scale circulation. Results of the EUCLIPSE/GASS intercomparison study for Single-Column Models (SCM) on low-level cloud transitions are compared to General Circulation Model (GCM) results from the CFMIP-2 project at selected grid points in the subtropical eastern Pacific. Low cloud characteristics are plotted as a function of key state variables for which Large-Eddy Simulation results suggest a distinct and reasonably tight relation. These include the Cloud Top Entrainment Instability (CTEI) parameter and the total cloud cover. SCM and GCM results are thus compared and their resemblance is quantified using simple metrics. Good agreement is reported, to such a degree that SCM results are found to be uniquely representative of their GCM, and vice versa. This suggests that the system of parameterized fast boundary layer physics dominates the model state at any given time, even when interactive with the larger-scale flow. This behavior can loosely be interpreted as a unique "fingerprint" of a boundary layer scheme, recognizable in both SCM and GCM simulations. The result justifies and advocates the use of SCM simulation for improving weather and climate models, including the attribution of typical responses of low clouds to climate change in a GCM to specific parameterizations.

  3. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  4. Simultaneous inference for multilevel linear mixed models - with an application to a large-scale school meal study

    DEFF Research Database (Denmark)

    Ritz, Christian; Laursen, Rikke Pilmann; Damsgaard, Camilla Trab

    2017-01-01

    of a school meal programme. We propose a novel and versatile framework for simultaneous inference on parameters estimated from linear mixed models that were fitted separately for several outcomes from the same study, but did not necessarily contain the same fixed or random effects. By combining asymptotic...... sizes of practical relevance we studied simultaneous coverage through simulation, which showed that the approach achieved acceptable coverage probabilities even for small sample sizes (10 clusters) and for 2–16 outcomes. The approach also compared favourably with a joint modelling approach. We also...

  5. Intelligent Adjustment of Printhead Driving Waveform Parameters for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

    In practical applications of 3D electronic printing, a major challenge is to adjust the printhead for high print resolution and accuracy. However, an exhausting manual selection process inevitably wastes a lot of time. Therefore, in this paper, we propose a new intelligent adjustment method, which adopts the artificial bee colony algorithm to optimize the printhead driving waveform parameters and obtain the desired printhead state. Experimental results show that this method can quickly and accurately find a suitable combination of driving waveform parameters to meet the needs of applications.
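
    A sketch of how an artificial bee colony search over driving-waveform parameters might look is given below. The objective function is a stand-in for a measured droplet-quality score, and the parameter ranges are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
lo = np.array([10.0, 1.0, 2.0, 1.0])           # [voltage V, rise us, dwell us, fall us]
hi = np.array([40.0, 10.0, 20.0, 10.0])
target = np.array([24.0, 3.0, 7.5, 4.0])       # hypothetical "ideal" waveform

def cost(x):                                   # stand-in for a print-quality measurement
    return float(np.sum(((x - target) / (hi - lo)) ** 2))

n_food, limit, iters = 20, 15, 200
foods = rng.uniform(lo, hi, size=(n_food, 4))
costs = np.array([cost(f) for f in foods])
trials = np.zeros(n_food, dtype=int)

def neighbour(i):
    k, j = rng.integers(n_food), rng.integers(4)
    x = foods[i].copy()
    x[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    return np.clip(x, lo, hi)

for _ in range(iters):
    for phase in ("employed", "onlooker"):
        if phase == "employed":
            order = range(n_food)
        else:                                  # onlookers pick food sources by fitness
            fitness = 1.0 / (1.0 + costs)
            order = rng.choice(n_food, size=n_food, p=fitness / fitness.sum())
        for i in order:
            cand = neighbour(i)
            c = cost(cand)
            if c < costs[i]:
                foods[i], costs[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
    stale = trials > limit                     # scout phase: abandon exhausted sources
    foods[stale] = rng.uniform(lo, hi, size=(int(stale.sum()), 4))
    costs[stale] = [cost(f) for f in foods[stale]]
    trials[stale] = 0

best = foods[np.argmin(costs)]
print("best waveform [V, rise, dwell, fall]:", np.round(best, 2))
```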

  6. Large-scale external validation and comparison of prognostic models: an application to chronic obstructive pulmonary disease

    NARCIS (Netherlands)

    Guerra, Beniamino; Haile, Sarah R.; Lamprecht, Bernd; Ramírez, Ana S.; Martinez-Camblor, Pablo; Kaiser, Bernhard; Alfageme, Inmaculada; Almagro, Pere; Casanova, Ciro; Esteban-González, Cristóbal; Soler-Cataluña, Juan J.; de-Torres, Juan P.; Miravitlles, Marc; Celli, Bartolome R.; Marin, Jose M.; ter Riet, Gerben; Sobradillo, Patricia; Lange, Peter; Garcia-Aymerich, Judith; Antó, Josep M.; Turner, Alice M.; Han, MeiLan K.; Langhammer, Arnulf; Leivseth, Linda; Bakke, Per; Johannessen, Ane; Oga, Toru; Cosio, Borja; Ancochea-Bermúdez, Julio; Echazarreta, Andres; Roche, Nicolas; Burgel, Pierre-Régis; Sin, Don D.; Soriano, Joan B.; Puhan, Milo A.

    2018-01-01

    External validations and comparisons of prognostic models or scores are a prerequisite for their use in routine clinical care but are lacking in most medical fields including chronic obstructive pulmonary disease (COPD). Our aim was to externally validate and concurrently compare prognostic scores

  7. Misspecified Poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.
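
    As a sketch of the setting, the example below fits a Poisson rate regression with a person-time offset to simulated register-style data and compares the model-based standard error with a simple resampling bootstrap. This is illustrative only and is not the paper's semi-parametric bootstrap.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 5000
exposed = rng.integers(0, 2, n)                  # e.g. antipsychotic treatment (0/1)
age = rng.normal(50, 10, n)
persontime = rng.uniform(0.5, 5.0, n)            # years at risk
rate = np.exp(-4.0 + 0.4 * exposed + 0.02 * (age - 50))
events = rng.poisson(rate * persontime)          # e.g. incident diabetes

X = sm.add_constant(np.column_stack([exposed, age]))
fit = sm.GLM(events, X, family=sm.families.Poisson(),
             offset=np.log(persontime)).fit()
print("model-based SE (exposure):", round(float(fit.bse[1]), 4))

# Nonparametric bootstrap over individuals as a crude robustness check.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    b = sm.GLM(events[idx], X[idx], family=sm.families.Poisson(),
               offset=np.log(persontime[idx])).fit()
    boot.append(float(b.params[1]))
print("bootstrap SE   (exposure):", round(float(np.std(boot)), 4))
```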

  8. Delft-FEWS: A Decision Making Platform to Integrate Data, Models, and Algorithms for Large-Scale River Basin Water Management

    Science.gov (United States)

    Yang, T.; Welles, E.

    2017-12-01

    In this paper, we introduce a flood forecasting and decision making platform, named Delft-FEWS, which has been developed over the years at Delft Hydraulics and now at Deltares. The philosophy of Delft-FEWS is to provide water managers and operators with an open shell tool, which allows the integration of a variety of hydrological, hydraulic, river routing, and reservoir models with hydrometeorological forecast data. Delft-FEWS serves as a powerful tool for both basin-scale and national-scale water resources management. The essential novelty of Delft-FEWS is to change flood forecasting and water resources management from a single-model or agency-centric paradigm to an integrated framework, in which different models, data, algorithms and stakeholders are strongly linked together. The paper starts with the challenges in water resources management, and the concept and philosophy of Delft-FEWS. It then presents the details of data handling and the linkages of Delft-FEWS with different hydrological, hydraulic, and reservoir models. Last, several case studies and applications of Delft-FEWS are demonstrated, including the National Weather Service and the Bonneville Power Administration in the USA, and a national application for the water boards in the Netherlands.

  9. Large-scale modeled contemporary and future water temperature estimates for 10774 Midwestern U.S. Lakes

    Science.gov (United States)

    Winslow, Luke A.; Hansen, Gretchen J. A.; Read, Jordan S.; Notaro, Michael

    2017-04-01

    Climate change has already influenced lake temperatures globally, but understanding future change is challenging. The response of lakes to changing climate drivers is complex due to the nature of lake-atmosphere coupling, ice cover, and stratification. To better understand the diversity of lake responses to climate change and give managers insight on individual lakes, we modelled daily water temperature profiles for 10,774 lakes in Michigan, Minnesota, and Wisconsin for contemporary (1979-2015) and future (2020-2040 and 2080-2100) time periods with climate models based on the Representative Concentration Pathway 8.5, the worst-case emission scenario. In addition to lake-specific daily simulated temperatures, we derived commonly used, ecologically relevant annual metrics of thermal conditions for each lake. We include all supporting lake-specific model parameters, meteorological drivers, and archived code for the model and derived metric calculations. This unique dataset offers landscape-level insight into the impact of climate change on lakes.

  10. Large-scale modeled contemporary and future water temperature estimates for 10774 Midwestern U.S. Lakes.

    Science.gov (United States)

    Winslow, Luke A; Hansen, Gretchen J A; Read, Jordan S; Notaro, Michael

    2017-04-25

    Climate change has already influenced lake temperatures globally, but understanding future change is challenging. The response of lakes to changing climate drivers is complex due to the nature of lake-atmosphere coupling, ice cover, and stratification. To better understand the diversity of lake responses to climate change and give managers insight on individual lakes, we modelled daily water temperature profiles for 10,774 lakes in Michigan, Minnesota, and Wisconsin for contemporary (1979-2015) and future (2020-2040 and 2080-2100) time periods with climate models based on the Representative Concentration Pathway 8.5, the worst-case emission scenario. In addition to lake-specific daily simulated temperatures, we derived commonly used, ecologically relevant annual metrics of thermal conditions for each lake. We include all supporting lake-specific model parameters, meteorological drivers, and archived code for the model and derived metric calculations. This unique dataset offers landscape-level insight into the impact of climate change on lakes.

  11. New Insights About Large-Scale Delta Morphodynamics from a Coupled Model of Fluvial-Coastal Processes

    Science.gov (United States)

    Murray, A. B.; Ratliff, K. M.; Hutton, E.

    2017-12-01

    We use a newly developed delta model to explore the combined effects of sea-level rise (SLR) and variable wave influence on delta morphology, avulsion behavior, and autogenic sediment flux variability. Using the Community Surface Dynamics Modeling System framework and tools, we couple the River Avulsion and Floodplain Evolution Model (RAFEM) to the Coastline Evolution Model (CEM). RAFEM models the fluvial processes, including river profile evolution, floodplain deposition, and avulsions. CEM uses gradients in alongshore sediment transport to distribute the fluvial sediment along the coastline. A suite of recent experiments using the coupled model and the Dakota software toolkit leads to several new insights: 1) A preferential avulsion location (which scales with the backwater length) can arise for geometric reasons that are independent of the recently suggested importance of alternation between flood and inter-flood periods. 2) The angular distribution of waves, as well as the wave height, affects the avulsion timescale. Previous work suggested that the time between avulsions will increase with greater wave influence, and we find that this is true for an angular mix of waves that tends to smooth a fairly straight coastline (coastline diffusion), where river mouth progradation is slowed and avulsions are delayed. However, if the angular distribution of waves leads to locally smooth shorelines but large amplitude coastline features (anti-diffusive coastline evolution), then avulsion timescales are barely affected, even when wave influence is high. 3) Increasing SLR rates are expected to cause more frequent avulsions, as they do in laboratory deltas. Unexpectedly, we find that this is not the case for the river-dominated deltas in our coupled model, in which SLR-related transgression effectively decreases progradation, offsetting base-level-rise effects. This finding raises potentially important questions about the geometric differences between prototypical and

  12. Validation of the large-scale Lagrangian cirrus model CLaMS-Ice by in-situ measurements

    Science.gov (United States)

    Costa, Anja; Rolf, Christian; Grooß, Jens-Uwe; Afchine, Armin; Spelten, Nicole; Dreiling, Volker; Zöger, Martin; Krämer, Martina

    2015-04-01

    Cirrus clouds are an element of uncertainty in the climate system and have received increasing attention since the last IPCC reports. The interaction of varying freezing mechanisms, sedimentation rates, temperature and updraft velocity fluctuations and other factors that lead to the formation of those clouds is still not fully understood. During the ML-Cirrus campaign 2014 (Germany), the new cirrus cloud model CLaMS-Ice (see Rolf et al., EGU 2015) has been used for flight planning to direct the research aircraft HALO into interesting cirrus cloud regions. Now, after the campaign, we use our in-situ aircraft measurements to validate and improve this model - with the long-term goal to enable it to simulate cirrus cloud cover globally, with reasonable computing times and sufficient accuracy. CLaMS-Ice consists of a two-moment bulk model established by Spichtinger and Gierens (2009a, 2009b), which simulates cirrus clouds along trajectories that the Lagrangian model CLaMS (McKenna et al., 2002 and Konopka et al. 2007) derived from ECMWF data. The model output covers temperature, pressure, relative humidity, ice water content (IWC), and ice crystal numbers (Nice). These parameters were measured on board HALO by the following instruments: temperature and pressure by BAHAMAS, total and gas phase water by the hygrometers FISH and SHARC (see Meyer et al 2014, submitted to ACP), and Nice as well as ice crystal size distributions by the cloud spectrometer NIXE-CAPS (see also Krämer et al., EGU 2015). Comparisons of the model results with the measurements show that cirrus clouds can be successfully simulated by CLaMS-Ice. However, there are sections in which the model's relative humidity and Nice deviate considerably from the measured values. This can be traced back to e.g. the initialization of total water from ECMWF data. The simulations are therefore reinitialized with the total water content measured by FISH. Other possible sources of uncertainties are investigated, as

  13. Assessing the impact of large-scale computing on the size and complexity of first-principles electromagnetic models

    International Nuclear Information System (INIS)

    Miller, E.K.

    1990-01-01

    There is a growing need to determine the electromagnetic performance of increasingly complex systems at ever higher frequencies. The ideal approach would be some appropriate combination of measurement, analysis, and computation so that system design and assessment can be achieved to a needed degree of accuracy at some acceptable cost. Both measurement and computation benefit from the continuing growth in computer power that, since the early 1950s, has increased by a factor of more than a million in speed and storage. For example, a CRAY2 has an effective throughput (not the clock rate) of about 10^11 floating-point operations (FLOPs) per hour compared with the approximate 10^5 provided by the UNIVAC-1. The purpose of this discussion is to illustrate the computational complexity of modeling large (in wavelengths) electromagnetic problems. In particular the author makes the point that simply relying on faster computers for increasing the size and complexity of problems that can be modeled is less effective than might be anticipated from this raw increase in computer throughput. He suggests that rather than depending on faster computers alone, various analytical and numerical alternatives need development for reducing the overall FLOP count required to acquire the information desired. One approach is to decrease the operation count of the basic model computation itself, by reducing the order of the frequency dependence of the various numerical operations or their multiplying coefficients. Another is to decrease the number of model evaluations that are needed, an example being the number of frequency samples required to define a wideband response, by using an auxiliary model of the expected behavior. 11 refs., 5 figs., 2 tabs
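
    A back-of-the-envelope calculation makes the point concrete. Assuming (for illustration, not from the text) a surface integral-equation model whose unknowns grow as f^2 for a mesh sampled at a fixed fraction of a wavelength, and whose dense direct solve costs about N^3 operations, the total cost grows as f^6, so a million-fold throughput gain buys only about a ten-fold increase in frequency:

```python
# Illustrative frequency-scaling arithmetic under the stated assumptions.
speedup = 1e6                  # UNIVAC-1 -> CRAY2 class throughput gain cited above
cost_exponent = 6              # dense-solve cost ~ f^6 when N ~ f^2 and solve ~ N^3
freq_gain = speedup ** (1 / cost_exponent)
print(f"A {speedup:.0e}x faster machine reaches only ~{freq_gain:.0f}x higher frequency.")

# Lowering the order of the frequency dependence pays off far more:
for p in (6, 4, 3):
    print(f"cost ~ f^{p}: frequency gain = {speedup ** (1 / p):6.1f}x")
```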

  14. The Impact of Model Configuration and Large-Scale, Upper-Level Forcing on CRM-Simulated Convective Systems

    Science.gov (United States)

    Tao, W.-K.; Zeng, X.; Shie, C.-L.; Starr, D.; Simpson, J.

    2004-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D, see a brief review by Tao 2003). Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research, at NOAA GFDL, at the U. K. Met. Office, at Colorado State University and at NASA Goddard Space Flight Center (Tao 2003). At Goddard, a 3D Goddard Cumulus Ensemble (GCE) model was used to simulate periods during TOGA COARE (December 19-27, 1992), GATE (September 1-7, 1974), SCSMEX (June 2-11, 1998), ARM (June 26-30, 1997) and KWAJEX (August 7-13, August 18-21, and August 29-September 12, 1999) using a 512 km domain (with 2-kilometer resolution). The results indicate that surface precipitation and latent heating profiles are similar between the 2D and 3D GCE model simulations. However, there are differences in radiation, surface fluxes and precipitation characteristics. The 2D GCE model was used to perform a long-term integration on ARM/GCSS case 4 (22 days at the ARM southern Great Plains site in March 2000). Preliminary results showed a large temperature bias in the upper troposphere that had not been seen in previous tropical cases. The major objectives of this paper are: (1) to determine the sensitivities to model configuration (i.e., 2D in west-east, south-north, or 3D), (2) to identify the differences and similarities in the organization and entrainment rates of convection between 2D- and 3D-simulated ARM cloud systems, and (3) to assess the impact of upper tropospheric forcing on the tropical and ARM case 4 cases.

  15. Development and application of large-scale hydrologic and aquatic carbon models to understand riverine CO2 evasion in Amazonia

    Science.gov (United States)

    Howard, E. A.; Coe, M. T.; Foley, J. A.; Costa, M. H.

    2004-12-01

    Many researchers are investigating the topic of CO2 efflux to the atmosphere from waters of the Amazon basin at several scales. We are developing a physically based modeling system to simulate this flux throughout the whole basin as a function of time-transient climate, vegetation, and hydrology. This modeling system includes an ecosystem land surface model (IBIS; Foley et al. 1996, Kucharik et al. 2000), a hydrological routing model (HYDRA; Coe 2000, Coe et al. 2002), and a new aquatic carbon processing module that we are incorporating into HYDRA (Howard et al. in prep). HYDRA has been recently modified to better represent river discharge and flood extent and height throughout the Amazon Basin. These modifications include: 1) using empirically derived equations representing stream width and height at flood initiation (Costa et al. 2002) to provide more accurate estimates of the initiation and cessation of flood conditions; and 2) using spatially explicit river sinuosity data (Costa et al. 2002) and a stream velocity function based on the Manning equation to provide more realistic representation of stream flow timing and magnitude. HYDRA has been calibrated and validated with observations of river discharge, water height, and flooded area at numerous locations in the mainstem and headwaters of the basin. Results of this validation show better agreement with observations than the previous version of HYDRA but also indicate the need for improved land surface topography and precipitation datasets. The aquatic carbon processing module prototype is currently implemented as an aspatial STELLA® model, decoupled from HYDRA, that simulates individual grid cells (at ~9 km resolution). We drive the model with IBIS-derived hydrological inputs from the land, and with empirically derived estimates of C inputs (from CAMREX and LBA sources). To allow for seasonal fluctuations in the aquatic-terrestrial transition zone, for each timestep we simulate the volume of
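
    The Manning-equation velocity used conceptually in HYDRA's routing can be sketched as follows; the channel geometry, roughness and slope values are hypothetical, and the real model derives width and height from the empirical relations of Costa et al. (2002).

```python
def manning_velocity(depth_m, width_m, slope, n_roughness=0.035):
    """Mean velocity [m/s] for a wide rectangular channel: v = R^(2/3) * S^(1/2) / n."""
    area = depth_m * width_m
    wetted_perimeter = width_m + 2.0 * depth_m
    hydraulic_radius = area / wetted_perimeter
    return hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / n_roughness

# Example: a 200 m wide, 8 m deep reach with a 5 cm/km energy slope.
v = manning_velocity(depth_m=8.0, width_m=200.0, slope=5e-5)
print(f"velocity = {v:.2f} m/s, discharge = {v * 8.0 * 200.0:.0f} m3/s")
```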

  16. Mussel farming as a large-scale bioengineering tool: a numerical modelling case study in Rødsand lagoon, Denmark

    DEFF Research Database (Denmark)

    Forsberg, Pernille Louise; Ernstsen, Verner Brandbyge; Lumborg, Ulrik

    spill of sediment, which could increase the longshore sediment influx to Rødsand lagoon. Mussels can reduce the SSC in marine environments (Schröder et al., 2014), which is why the implementation of a mussel farm has been considered as a management option. In the present study we developed a module to include mussels as a bioengineering measure in a numerical sediment transport model and investigated how the implementation of an exterior mussel farm affects the sediment dynamics within Rødsand lagoon. On the basis of 2D modelling (MIKE21 by DHI) and field measurements, the flow and sediment dynamics to and from ... Reference: ... and Krost P. (2014). The impact of a mussel farm on water transparency in the Kiel Fjord. Ocean & Coastal Management, 101:42-52.

  17. Emotional modelling and classification of a large-scale collection of scene images in a cluster environment.

    Science.gov (United States)

    Cao, Jianfang; Li, Yanfei; Tian, Yun

    2018-01-01

    The development of network technology and the popularization of image capturing devices have led to a rapid increase in the number of digital images available, and it is becoming increasingly difficult to identify a desired image from among the massive number of possible images. Images usually contain rich semantic information, and people usually understand images at a high semantic level. Therefore, achieving the ability to use advanced technology to identify the emotional semantics contained in images to enable emotional semantic image classification remains an urgent issue in various industries. To this end, this study proposes an improved OCC emotion model that integrates personality and mood factors for emotional modelling to describe the emotional semantic information contained in an image. The proposed classification system integrates the k-Nearest Neighbour (KNN) algorithm with the Support Vector Machine (SVM) algorithm. The MapReduce parallel programming model was used to adapt the KNN-SVM algorithm for parallel implementation in the Hadoop cluster environment, thereby achieving emotional semantic understanding for the classification of a massive collection of images. For training and testing, 70,000 scene images were randomly selected from the SUN Database. The experimental results indicate that users with different personalities show overall consistency in their emotional understanding of the same image. For a training sample size of 50,000, the classification accuracies for different emotional categories targeted at users with different personalities were approximately 95%, and the training time was only 1/5 of that required for the corresponding algorithm with a single-node architecture. Furthermore, the speedup of the system also showed a linearly increasing tendency. Thus, the experiments achieved a good classification effect and can lay a foundation for classification in terms of additional types of emotional image semantics, thereby demonstrating
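
    One common way to combine KNN and SVM, training a small SVM on each query's neighbourhood, is sketched below on synthetic data. It is a single-node illustration, not the paper's MapReduce/Hadoop implementation, and the exact KNN-SVM integration used by the authors may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=50, n_informative=20,
                           n_classes=4, n_clusters_per_class=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

nn = NearestNeighbors(n_neighbors=60).fit(X_tr)

def predict(x):
    """Train a small SVM on the query's neighbourhood, then classify the query."""
    idx = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
    classes = np.unique(y_tr[idx])
    if classes.size == 1:                 # neighbourhood is pure: no SVM needed
        return classes[0]
    return SVC(kernel="rbf", C=1.0).fit(X_tr[idx], y_tr[idx]).predict(x.reshape(1, -1))[0]

pred = np.array([predict(x) for x in X_te])
print("hybrid KNN-SVM accuracy:", round(float((pred == y_te).mean()), 3))
```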

  18. Emotional modelling and classification of a large-scale collection of scene images in a cluster environment.

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    The development of network technology and the popularization of image capturing devices have led to a rapid increase in the number of digital images available, and it is becoming increasingly difficult to identify a desired image from among the massive number of possible images. Images usually contain rich semantic information, and people usually understand images at a high semantic level. Therefore, achieving the ability to use advanced technology to identify the emotional semantics contained in images to enable emotional semantic image classification remains an urgent issue in various industries. To this end, this study proposes an improved OCC emotion model that integrates personality and mood factors for emotional modelling to describe the emotional semantic information contained in an image. The proposed classification system integrates the k-Nearest Neighbour (KNN) algorithm with the Support Vector Machine (SVM) algorithm. The MapReduce parallel programming model was used to adapt the KNN-SVM algorithm for parallel implementation in the Hadoop cluster environment, thereby achieving emotional semantic understanding for the classification of a massive collection of images. For training and testing, 70,000 scene images were randomly selected from the SUN Database. The experimental results indicate that users with different personalities show overall consistency in their emotional understanding of the same image. For a training sample size of 50,000, the classification accuracies for different emotional categories targeted at users with different personalities were approximately 95%, and the training time was only 1/5 of that required for the corresponding algorithm with a single-node architecture. Furthermore, the speedup of the system also showed a linearly increasing tendency. Thus, the experiments achieved a good classification effect and can lay a foundation for classification in terms of additional types of emotional image semantics

  19. Soil Moisture Data Assimilation in a Hydrological Model: A Case Study in Belgium Using Large-Scale Satellite Data

    Directory of Open Access Journals (Sweden)

    Pierre Baguis

    2017-08-01

    Full Text Available In the present study, we focus on the assimilation of satellite observations for Surface Soil Moisture (SSM) in a hydrological model. The satellite data are produced in the framework of the EUMETSAT project H-SAF and are based on measurements with the Advanced radar Scatterometer (ASCAT), embarked on the Meteorological Operational satellites (MetOp). The product generated with these measurements has a horizontal resolution of 25 km and represents the upper few centimeters of soil. Our approach is based on the Ensemble Kalman Filter technique (EnKF), where observation and model uncertainties are taken into account, implemented in a conceptual hydrological model. The analysis is carried out in the Demer catchment of the Scheldt River Basin in Belgium, for the period from June 2013 to May 2016. In this context, two methodological advances are being proposed. First, the generation of stochastic terms, necessary for the EnKF, of bounded variables like SSM is addressed with the aid of specially-designed probability distributions, so that the bounds are never exceeded. Second, bias due to the assimilation procedure itself is removed using a post-processing technique. Subsequently, the impact of SSM assimilation on the simulated streamflow is estimated using a series of statistical measures based on the ensemble average. The differences from the control simulation are then assessed using a two-dimensional bootstrap sampling on the ensemble generated by the assimilation procedure. Our analysis shows that data assimilation combined with bias correction can improve the streamflow estimations or, at a minimum, produce results statistically indistinguishable from the control run of the hydrological model.
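
    The sketch below illustrates a single EnKF analysis step for a bounded variable such as surface soil moisture in [0, 1]. The paper designs special probability distributions for the stochastic terms; as a simple stand-in, the model-error perturbations here are applied in logit space so the ensemble members never leave their bounds. All numbers and the identity observation operator are illustrative assumptions.

```python
# Sketch of a scalar EnKF update for bounded surface soil moisture (assumed setup).
import numpy as np

rng = np.random.default_rng(42)

def logit(x):
    return np.log(x / (1.0 - x))

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forecast ensemble of surface soil moisture (fraction of saturation).
n_ens = 32
ssm_forecast = np.clip(rng.normal(0.35, 0.05, n_ens), 0.01, 0.99)

# Perturb in logit space, then map back, so members stay strictly in (0, 1).
ssm_forecast = expit(logit(ssm_forecast) + rng.normal(0.0, 0.1, n_ens))

# Satellite SSM observation and its error variance (illustrative values).
y_obs, r_var = 0.42, 0.02 ** 2
y_pert = y_obs + rng.normal(0.0, np.sqrt(r_var), n_ens)   # perturbed observations

# Scalar EnKF update: K = P_f (P_f + R)^-1 with an identity observation operator.
p_f = np.var(ssm_forecast, ddof=1)
gain = p_f / (p_f + r_var)
ssm_analysis = ssm_forecast + gain * (y_pert - ssm_forecast)

print("forecast mean %.3f -> analysis mean %.3f" %
      (ssm_forecast.mean(), ssm_analysis.mean()))
```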

  20. Large-scale modeled contemporary and future water temperature estimates for 10774 Midwestern U.S. Lakes

    OpenAIRE

    Winslow, Luke A.; Hansen, Gretchen J.A.; Read, Jordan S; Notaro, Michael

    2017-01-01

    Climate change has already influenced lake temperatures globally, but understanding future change is challenging. The response of lakes to changing climate drivers is complex due to the nature of lake-atmosphere coupling, ice cover, and stratification. To better understand the diversity of lake responses to climate change and give managers insight on individual lakes, we modelled daily water temperature profiles for 10,774 lakes in Michigan, Minnesota, and Wisconsin for contemporary (1979–201...

  1. 3-D time-dependent numerical model of flow patterns within a large-scale Czochralski system

    Science.gov (United States)

    Nam, Phil-Ouk; O, Sang-Kun; Yi, Kyung-Woo

    2008-04-01

    Silicon single crystals grown through the Czochralski (Cz) method have increased in size to 300 mm, resulting in the use of larger crucibles. The objective of this study is to investigate the continuous Cz method in a large crucible (800 mm), which is performed by inserting a polycrystalline silicon rod into the melt. The numerical model is based on a time-dependent and three-dimensional standard k-ε turbulence model using the analytical software package CFD-ACE+, version 2007. Wood's metal melt, which has a low melting point (Tm = 70 °C), was used as the modeling fluid. Crystal rotation was applied in the clockwise direction with rotation rates varying from 0 to 15 rpm, while the crucible was rotated counter-clockwise with rotation rates between 0 and 3 rpm. The results show that asymmetrical phenomena of fluid flow arise as a result of crystal and crucible rotation, and that these phenomena move with the passage of time. Near the crystal, the flow moves towards the crucible at the pole of the asymmetrical phenomena. Away from the poles, a vortex begins to form, which is strongly pronounced in the region between the poles.

  2. USER FRIENDLY OPEN GIS TOOL FOR LARGE SCALE DATA ASSIMILATION – A CASE STUDY OF HYDROLOGICAL MODELLING

    Directory of Open Access Journals (Sweden)

    P. K. Gupta

    2012-08-01

    Full Text Available Open source software (OSS) coding has tremendous advantages over proprietary software. These are primarily fuelled by high-level programming languages (Java, C++, Python, etc.) and open source geospatial libraries (GDAL/OGR, GEOS, GeoTools, etc.). Quantum GIS (QGIS) is a popular open source GIS package, which is licensed under the GNU GPL and is written in C++. It allows users to perform specialised tasks by creating plugins in C++ and Python. This research article emphasises exploiting this capability of QGIS to build and implement plugins across multiple platforms using the easy-to-learn Python programming language. In the present study, a tool has been developed to assimilate large spatio-temporal datasets such as national-level gridded rainfall, temperature, topographic (digital elevation model, slope, aspect), landuse/landcover and multi-layer soil data for input into hydrological models. At present this tool has been developed for the Indian sub-continent. An attempt is also made to use popular scientific and numerical libraries to create custom applications for digital inclusion. In hydrological modelling, calibration and validation are important steps that are repetitively carried out for the same study region. As such, the developed tool is user friendly and can be used efficiently for these repetitive processes, reducing the time required for data management and handling. Moreover, it was found that the developed tool can easily assimilate large datasets in an organised manner.
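
    The sketch below shows the kind of raster handling such a QGIS/Python tool relies on: opening a gridded dataset with GDAL, reading it into a NumPy array, and sampling it at station coordinates via the geotransform. The file name and coordinates are hypothetical; only the GDAL/NumPy calls are standard, and the actual plugin's workflow is not reproduced here.

```python
# Sketch of gridded-data access with GDAL (hypothetical file and gauge locations).
import numpy as np
from osgeo import gdal

ds = gdal.Open("gridded_rainfall_2012.tif")      # hypothetical national rainfall grid
band = ds.GetRasterBand(1)
grid = band.ReadAsArray()                        # 2-D NumPy array of rainfall values
x0, dx, _, y0, _, dy = ds.GetGeoTransform()      # origin and pixel size of the grid

def sample(lon, lat):
    """Nearest-neighbour lookup of the grid value at a lon/lat point."""
    col = int((lon - x0) / dx)
    row = int((lat - y0) / dy)
    return grid[row, col]

stations = [(77.2, 28.6), (72.8, 19.1)]          # hypothetical gauge locations
print([float(sample(lon, lat)) for lon, lat in stations])
```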

  3. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales

    KAUST Repository

    Yuan, Yuan

    2017-12-28

    Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.
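
    The paper's inference uses a log-Gaussian Cox process with an SPDE spatial field and INLA (typically fitted with R-INLA); that machinery is not reproduced here. The sketch below only illustrates the data-generating idea the model formalizes: animals follow a spatial Poisson process and detections are a thinned version of it, with a half-normal detection probability in perpendicular distance from the trackline. All densities and parameters are illustrative assumptions.

```python
# Sketch of thinned point-process data generation for line-transect distance sampling.
import numpy as np

rng = np.random.default_rng(1)

# Strip transect: 100 km long, half-width 5 km, animal density 0.05 per km^2.
length_km, half_width_km, density = 100.0, 5.0, 0.05
area = length_km * 2 * half_width_km
n_animals = rng.poisson(density * area)

x = rng.uniform(0.0, length_km, n_animals)                   # along-track position
d = rng.uniform(-half_width_km, half_width_km, n_animals)    # perpendicular distance

# Half-normal detection function g(d) = exp(-d^2 / (2 sigma^2)).
sigma = 2.0
p_detect = np.exp(-d ** 2 / (2.0 * sigma ** 2))
detected = rng.uniform(size=n_animals) < p_detect            # the thinning step

print(f"{n_animals} animals present, {detected.sum()} detected")
```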

  4. Soil carbon management in large-scale Earth system modelling: implications for crop yields and nitrogen leaching

    Directory of Open Access Journals (Sweden)

    S. Olin

    2015-11-01

    levels, assessments of how these different services will vary in space and time, especially in response to cropland management, are scarce. We explore cropland management alternatives and the effect these can have on future C and N pools and fluxes using the land-use-enabled dynamic vegetation model LPJ-GUESS (Lund–Potsdam–Jena General Ecosystem Simulator). Simulated crop production, cropland carbon storage, carbon sequestration and nitrogen leaching from croplands are evaluated and discussed. Compared to the version of LPJ-GUESS that does not include land-use dynamics, estimates of soil carbon stocks and nitrogen leaching from terrestrial to aquatic ecosystems were improved. Our model experiments allow us to investigate trade-offs between these ecosystem services that can be provided from agricultural fields. These trade-offs are evaluated for current land use and climate and further explored for future conditions within the two future climate change scenarios, RCP (Representative Concentration Pathway) 2.6 and 8.5. Our results show that the potential for carbon sequestration due to typical cropland management practices such as no-till management and cover crops proposed in previous studies is not realised, globally or over larger climatic regions. Our results highlight important considerations to be made when modelling C–N interactions in agricultural ecosystems under future environmental change and the effects these have on terrestrial biogeochemical cycles.

  5. Constructing a large-scale 3D Geologic Model for Analysis of the Non-Proliferation Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Wagoner, J; Myers, S

    2008-04-09

    We have constructed a regional 3D geologic model of the southern Great Basin, in support of a seismic wave propagation investigation of the 1993 Nonproliferation Experiment (NPE) at the Nevada Test Site (NTS). The model is centered on the NPE and spans longitude -119.5° to -112.6° and latitude 34.5° to 39.8°; the depth ranges from the topographic surface to 150 km below sea level. The model includes the southern half of Nevada, as well as parts of eastern California, western Utah, and a portion of northwestern Arizona. The upper crust is constrained by both geologic and geophysical studies, while the lower crust and upper mantle are constrained by geophysical studies. The mapped upper crustal geologic units are Quaternary basin fill, Tertiary deposits, pre-Tertiary deposits, intrusive rocks of all ages, and calderas. The lower crust and upper mantle are parameterized with 5 layers, including the Moho. Detailed geologic data, including surface maps, borehole data, and geophysical surveys, were used to define the geology at the NTS. Digital geologic outcrop data were available for both Nevada and Arizona, whereas geologic maps for California and Utah were scanned and hand-digitized. Published gravity data (2 km spacing) were used to determine the thickness of the Cenozoic deposits and thus estimate the depth of the basins. The free surface is based on a 10 m lateral resolution DEM at the NTS and a 90 m lateral resolution DEM elsewhere. Variations in crustal thickness are based on receiver function analysis and a framework compilation of reflection/refraction studies. We used Earthvision (Dynamic Graphics, Inc.) to integrate the geologic and geophysical information into a model of x,y,z,p nodes, where p is a unique integer index value representing the geologic unit. For seismic studies, the geologic units are mapped to specific seismic velocities. The gross geophysical structure of the crust and upper mantle is taken from regional surface

  6. A Study of Parameters of the Counterpropagating Leader and its Influence on the Lightning Protection of Objects Using Large-Scale Laboratory Modeling

    Science.gov (United States)

    Syssoev, V. S.; Kostinskiy, A. Yu.; Makalskiy, L. M.; Rakov, A. V.; Andreev, M. G.; Bulatov, M. U.; Sukharevsky, D. I.; Naumova, M. U.

    2014-04-01

    This work presents the results of experiments on initiating upward and descending leaders during the development of a long spark, carried out to study the lightning protection of objects with the help of large-scale models. The influence of counterpropagating leaders on the process of lightning strikes to ground-based and insulated objects is discussed. In the first case, the upward negative leader is initiated by the positive downward leader, which propagates from the high-voltage electrode of the "rod-rod"-type Marx generator (the rod is located on the plane and is 3 m high) in a gap with a length of 9-12 m. The positive-voltage pulse, with a duration of 7500 μs, had an amplitude of up to 3 MV. In the second case, initiation of the positive upward leader was performed in the electric field created by a cloud of negatively charged aerosol, which simulates a charged thunderstorm cell. In this case, all the phases characteristic of ascending lightning initiated by tall ground-based objects, and of triggered lightning in experiments with an actual thunderstorm cloud, were observed in the forming spark discharge with a length of 1.5-2.0 m. The main parameters of the counterpropagating leader initiated by the objects during the large-scale model experiments with a long spark are reported.

  7. Memory as the "whole brain work": a large-scale model based on "oscillations in super-synergy".

    Science.gov (United States)

    Başar, Erol

    2005-01-01

    According to recent trends, memory depends on several brain structures working in concert across many levels of neural organization; "memory is a constant work-in-progress." The proposition of a brain theory based on super-synergy in neural populations is most pertinent for the understanding of this constant work in progress. This report introduces a new model of memory based on the processes of EEG oscillations and brain dynamics. This model is shaped by the following conceptual and experimental steps: 1. The machineries of super-synergy in the whole brain are responsible for the formation of sensory-cognitive percepts. 2. The expression "dynamic memory" is used for memory processes that evoke relevant changes in alpha, gamma, theta and delta activities. The concerted action of multiple distributed oscillatory processes provides a major key for understanding distributed memory. It also encompasses phyletic memory and reflexes. 3. The evolving memory, which incorporates reciprocal actions or reverberations in the APLR alliance and during working memory processes, is especially emphasized. 4. A new model related to a "hierarchy of memories as a continuum" is introduced. 5. The notions of "longer activated memory" and "persistent memory" are proposed instead of long-term memory. 6. The new analysis for recognizing faces emphasizes the importance of EEG oscillations in neurophysiology and Gestalt analysis. 7. The proposed basic framework, called "Memory in the Whole Brain Work," emphasizes that memory and all brain functions are inseparable and act as a "whole" in the whole brain. 8. According to recent publications, the role of genetic factors is fundamental in living system settings and oscillations, and accordingly in memory. 9. A link from the "whole brain" to the "whole body," incorporating the vegetative and neurological systems, is proposed, with EEG oscillations and ultraslow oscillations serving as control parameters.

  8. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    Science.gov (United States)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESMs) average over considerable sub-grid heterogeneity in land surface properties, and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimations in ET imply greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties, and also neglect lateral transfer of water across the heterogeneous landscapes at global scale. We use a Budyko framework to express ET as a function of P and PET to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer could affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at global scale shows that the effects of sub-grid heterogeneity will be most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use the Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate them into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for the existing free-drainage lower boundary condition in GLEAM and to quantify whether, and by how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at global scale.
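
    A small numerical illustration of the averaging bias the study quantifies is given below: evaluating a Budyko-type ET curve at grid-mean P and PET (what a coarse ESM effectively does) gives more ET than averaging the curve over heterogeneous sub-grid cells, especially when P and PET are anti-correlated across the landscape. The classic Budyko (1974) curve is used here as a stand-in; the paper's specific closure relations are not reproduced, and all values (in mm/yr) are purely illustrative.

```python
# Sketch of the sub-grid averaging bias using the classic Budyko curve (assumed stand-in).
import numpy as np

def budyko_et(p, pet):
    """Budyko (1974): E = [PET * P * tanh(P/PET) * (1 - exp(-PET/P))]**0.5."""
    return np.sqrt(pet * p * np.tanh(p / pet) * (1.0 - np.exp(-pet / p)))

# Two sub-grid cells: a wet, cool mountain cell and a dry, warm lowland cell.
p_sub   = np.array([2000.0, 400.0])    # precipitation [mm/yr]
pet_sub = np.array([500.0, 1400.0])    # potential ET  [mm/yr]

et_of_means = budyko_et(p_sub.mean(), pet_sub.mean())   # coarse-grid estimate
mean_of_ets = budyko_et(p_sub, pet_sub).mean()          # true sub-grid average

print(f"ET from grid-mean forcing: {et_of_means:.0f} mm/yr")
print(f"Mean of sub-grid ET:       {mean_of_ets:.0f} mm/yr  (lower)")
```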

  9. Modelling of the large scale redox front evolution in an open pit uranium mine in Pocos de Caldas, Brazil

    International Nuclear Information System (INIS)

    Romero, L.; Moreno, L.; Neretnieks, I.

    1991-01-01

    In an open pit uranium mine at Pocos de Caldas in Brazil, the upper portions of the rock have been oxidized by infiltrating oxidizing groundwater. The redox front is very uneven and fingering is in evidence to depths ranging down to several hundred meters. The redox fingers are found in fractures and fracture zones. An attempt has been made to model the development of such redox fingerings along flow channels and to relate the structure to independent observations of flow channels in other crystalline rocks. 5 figs., 1 tab., 8 refs

  10. Modeling the large-scale effects of surface moisture heterogeneity on wetland carbon fluxes in the West Siberian Lowland

    Directory of Open Access Journals (Sweden)

    T. J. Bohn

    2013-10-01

    Full Text Available We used a process-based model to examine the role of spatial heterogeneity of surface and sub-surface water on the carbon budget of the wetlands of the West Siberian Lowland over the period 1948–2010. We found that, while surface heterogeneity (fractional saturated area) had little overall effect on estimates of the region's carbon fluxes, sub-surface heterogeneity (spatial variations in water table depth) played an important role in both the overall magnitude and spatial distribution of estimates of the region's carbon fluxes. In particular, to reproduce the spatial pattern of CH4 emissions recorded by intensive in situ observations across the domain, in which very little CH4 is emitted north of 60° N, it was necessary to (a) account for CH4 emissions from unsaturated wetlands and (b) use spatially varying methane model parameters that reduced estimated CH4 emissions in the northern (permafrost) half of the domain (and/or account for lower CH4 emissions under inundated conditions). Our results suggest that previous estimates of the response of these wetlands to thawing permafrost may have overestimated future increases in methane emissions in the permafrost zone.

  11. Combined biogeophysical and biogeochemical effects of large-scale forest cover changes in the MPI earth system model

    Directory of Open Access Journals (Sweden)

    S. Bathiany

    2010-05-01

    Full Text Available Afforestation and reforestation have become popular instruments of climate mitigation policy, as forests are known to store large quantities of carbon. However, they also modify the fluxes of energy, water and momentum at the land surface. Previous studies have shown that these biogeophysical effects can counteract the carbon drawdown and, in boreal latitudes, even overcompensate it due to large albedo differences between forest canopy and snow. This study investigates the role forest cover plays for global climate by conducting deforestation and afforestation experiments with the earth system model of the Max Planck Institute for Meteorology (MPI-ESM). Complete deforestation of the tropics (18.75° S–15° N) exerts a global warming of 0.4 °C due to an increase in CO2 concentration by initially 60 ppm and a decrease in evapotranspiration in the deforested areas. In the northern latitudes (45° N–90° N), complete deforestation exerts a global cooling of 0.25 °C after 100 years, while afforestation leads to an equally large warming, despite the counteracting changes in CO2 concentration. Earlier model studies are qualitatively confirmed by these findings. As the response of temperature as well as terrestrial carbon pools is not of equal sign at every land cell, considering forests as cooling in the tropics and warming in high latitudes seems to be true only for the spatial mean, but not on a local scale.

  12. Modeling of transport processes through large-scale discrete fracture networks using conforming meshes and open-source software

    Science.gov (United States)

    Ngo, Tri Dat; Fourno, André; Noetinger, Benoit

    2017-11-01

    Most industrial and field studies of transport processes in Discrete Fracture Networks (DFNs) involve strong simplifying assumptions, especially at the meshing stage. High-accuracy simulations are therefore required for validating these simplified models and their domain of validity. The present paper proposes an efficient workflow based on open-source software to obtain transport simulations. High-quality computational meshes for DFNs are first generated using the conforming meshing approach FraC. Then, a tracer transport model implemented in the open-source code DuMux is used for simulating tracer transport driven by the advection-dispersion equation. We adopt the box method, a vertex-centered finite volume scheme for spatial discretization, which ensures concentration continuity and mass conservation at intersections between fractures. Numerical results on simple networks for validation purposes and on complex realistic DFNs are presented. An a-posteriori convergence study of the discretization method shows an order of convergence O(h) for tracer concentration with h the mesh size.
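
    The workflow in the paper meshes the fracture network with FraC and solves the advection-dispersion equation with DuMux's vertex-centred box scheme; that software stack is not reproduced here. The sketch below solves the same governing equation, dc/dt + u dc/dx = D d²c/dx², on a single 1-D fracture with an explicit upwind/central finite-volume scheme, only to make the transport model concrete. Grid, velocity, and dispersion values are illustrative assumptions.

```python
# Sketch of 1-D advection-dispersion tracer transport (not the DuMux box scheme).
import numpy as np

nx, length = 200, 10.0            # cells and fracture length [m]
dx = length / nx
u, disp = 1e-3, 1e-5              # advection velocity [m/s], dispersion [m^2/s]
dt = 0.4 * min(dx / u, dx**2 / (2 * disp))   # CFL- and diffusion-limited step

c = np.zeros(nx)
c[0] = 1.0                        # tracer injected at the inlet cell

for _ in range(int(3 * 3600 / dt)):               # simulate three hours
    adv = -u * (c - np.roll(c, 1)) / dx           # first-order upwind advection
    dif = disp * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c += dt * (adv + dif)
    c[0], c[-1] = 1.0, c[-2]      # fixed inlet concentration, outflow boundary

print("tracer concentration at the outlet: %.3f" % c[-1])
```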

  13. A Nonnegative Latent Factor Model for Large-Scale Sparse Matrices in Recommender Systems via Alternating Direction Method.

    Science.gov (United States)

    Luo, Xin; Zhou, MengChu; Li, Shuai; You, Zhuhong; Xia, Yunni; Zhu, Qingsheng

    2016-03-01

    Nonnegative matrix factorization (NMF)-based models possess fine representativeness of a target matrix, which is critically important in collaborative filtering (CF)-based recommender systems. However, current NMF-based CF recommenders suffer from the problem of high computational and storage complexity, as well as slow convergence rate, which prevents them from industrial usage in context of big data. To address these issues, this paper proposes an alternating direction method (ADM)-based nonnegative latent factor (ANLF) model. The main idea is to implement the ADM-based optimization with regard to each single feature, to obtain high convergence rate as well as low complexity. Both computational and storage costs of ANLF are linear with the size of given data in the target matrix, which ensures high efficiency when dealing with extremely sparse matrices usually seen in CF problems. As demonstrated by the experiments on large, real data sets, ANLF also ensures fast convergence and high prediction accuracy, as well as the maintenance of nonnegativity constraints. Moreover, it is simple and easy to implement for real applications of learning systems.
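
    The ANLF model optimizes each latent feature with an alternating direction method; that specific solver is not reproduced here. The sketch below shows the plain nonnegative matrix factorization it builds on, using the standard Lee-Seung multiplicative updates on a small dense rating matrix (whereas the paper targets extremely sparse matrices). Sizes and data are illustrative assumptions.

```python
# Sketch of basic NMF with multiplicative updates (not the paper's ADM-based ANLF).
import numpy as np

rng = np.random.default_rng(5)
ratings = rng.integers(1, 6, size=(20, 15)).astype(float)   # users x items
k = 4                                                        # latent factors

W = rng.random((20, k)) + 0.1
H = rng.random((k, 15)) + 0.1
eps = 1e-9

for _ in range(500):
    H *= (W.T @ ratings) / (W.T @ W @ H + eps)   # update keeps H nonnegative
    W *= (ratings @ H.T) / (W @ H @ H.T + eps)   # update keeps W nonnegative

rmse = np.sqrt(np.mean((ratings - W @ H) ** 2))
print(f"reconstruction RMSE after 500 updates: {rmse:.3f}")
```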

  14. Evaluating alternative systems of peer review: a large-scale agent-based modelling approach to scientific publication.

    Science.gov (United States)

    Kovanis, Michail; Trinquart, Ludovic; Ravaud, Philippe; Porcher, Raphaël

    2017-01-01

    The debate on whether the peer-review system is in crisis has been heated recently. A variety of alternative systems have been proposed to improve the system and make it sustainable. However, we lack sufficient evidence and data related to these issues. Here we used a previously developed agent-based model of the scientific publication and peer-review system, calibrated with empirical data, to compare the efficiency of five alternative peer-review systems with the conventional system. We modelled two systems of immediate publication, with and without online reviews (crowdsourcing), a system with only one round of reviews and revisions allowed (re-review opt-out) and two review-sharing systems in which rejected manuscripts are resubmitted along with their past reviews to any other journal (portable) or to only those of the same publisher but of lower impact factor (cascade). The review-sharing systems outperformed or matched the performance of the conventional one in all peer-review efficiency, reviewer effort and scientific dissemination metrics we used. These systems especially showed a large decrease in the total time of the peer-review process and the total time devoted by reviewers to complete all reports in a year. The two systems with immediate publication released more scientific information than the conventional one but provided almost no other benefit. Re-review opt-out decreased the time reviewers devoted to peer review but performed worse than the conventional system at screening out papers that should not be published and in the relative increase in intrinsic quality of papers due to peer review. Sensitivity analyses showed findings consistent with those from our main simulations. We recommend prioritizing a system of review-sharing to create a sustainable scientific publication and peer-review system.

  15. Evolution and application of a pseudo-multi-zone model for the prediction of NOx emissions from large-scale diesel engines at various operating conditions

    International Nuclear Information System (INIS)

    Savva, Nicholas S.; Hountalas, Dimitrios T.

    2014-01-01

    Highlights: • Development of a simplified simulation model for NOx formation during combustion. • Application of the proposed model to large-scale two- and four-stroke diesel engines. • Experimental data from stationary and ship main and auxiliary engines were used. • The model captures the trend of NOx as engine power and fuel injection timing vary. • The model is recommended for research and practical use in the maritime and power industries. - Abstract: Emissions regulations for heavy-duty diesel units used in maritime and power generation applications have become very strict in recent years. Hence, the industry is required to limit specific gaseous and particulate emissions (NOx, SOx, COx, PM and HC) depending on the regulations. Among numerous methods, simulation models are extensively used to support the development of techniques used for the control of emitted pollutants. This is very important for large-scale engines due to the extremely high cost of the experimental investigation resulting from the size of the engines and the test equipment involved. Beyond this, simulation models can also be used to support NOx monitoring, since on-board verification techniques are to become mandatory for the marine industry in the near future. Last but not least, simulation models can also be used for model-based control applications to support the operation of both in-cylinder and after-treatment techniques. Currently, the major controlled pollutant for both marine and stationary applications is NOx. For this reason, in the present work, the authors focus on the development and application of a simplified NOx model with special emphasis on its ability to predict the effect of operating conditions on NOx for both two- and four-stroke diesel engines. To accomplish this, an existing, well-validated simplified NOx model has been modified to enhance its physical background and applied to 16 different large-scale diesel engines utilizing 18 different sets of

  16. Ranking transmission projects in large scale systems using an AC power flow model; Priorizacao de obras em sistemas de grande porte usando um modelo AC da rede

    Energy Technology Data Exchange (ETDEWEB)

    Melo, A.C.G. [Centro de Pesquisas de Energia Eletrica (CEPEL), Rio de Janeiro, RJ (Brazil); Fontoura Filho, R.N. [ELETROBRAS, Rio de Janeiro, RJ (Brazil); Peres, L.A.P. Pecorelli [FURNAS, Rio de Janeiro, RJ (Brazil); Morozowski Filho, M. [Santa Catarina Univ., Florianopolis, SC (Brazil)

    1994-12-31

    Initially, this paper summarizes the approach developed by the Brazilian Planning Criteria Working Group (GTCP/ELETROBRAS) for identifying which subset of transmission investments should be postponed to meet a pre-established budget constraint with the least possible impact on system performance. Next, this paper presents the main features of the computational model PRIO, which allows the application of the ranking process to large-scale power systems (2,000 buses and 3,000 circuits), with as many as 100 projects to be ranked. In this model, the adequacy analysis of each system state is carried out through an AC power flow coupled to a remedial actions model based on successive linear programming. Case studies with the IEEE-RTS system and a configuration of the Brazilian Southeastern system are presented and discussed. (author) 7 refs., 6 figs., 5 tabs.

  17. A stochastic thermostat algorithm for coarse-grained thermomechanical modeling of large-scale soft matters: Theory and application to microfilaments

    Energy Technology Data Exchange (ETDEWEB)

    Li, Tong; Gu, YuanTong, E-mail: yuantong.gu@qut.edu.au

    2014-04-15

    As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics methods would underestimate the thermodynamic behaviors of soft matter (e.g., microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.

  18. A stochastic thermostat algorithm for coarse-grained thermomechanical modeling of large-scale soft matters: Theory and application to microfilaments

    Science.gov (United States)

    Li, Tong; Gu, YuanTong

    2014-04-01

    As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics methods would underestimate the thermodynamic behaviors of soft matter (e.g., microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.
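
    The paper develops a purpose-built stochastic thermostat for highly coarse-grained microfilament models; its exact update rule is not given in the abstract. For orientation, the sketch below shows the standard Langevin thermostat that such schemes generalize: a deterministic force, a friction term, and a random kick whose variance is tied to the target temperature by the fluctuation-dissipation relation. Units and parameters are illustrative assumptions.

```python
# Sketch of a standard Langevin thermostat (not the paper's new algorithm).
import numpy as np

rng = np.random.default_rng(7)

kB, T = 1.380649e-23, 300.0        # Boltzmann constant [J/K], target temperature [K]
gamma, m, dt = 1e12, 1e-21, 1e-15  # friction [1/s], bead mass [kg], time step [s]
n_beads, n_steps = 1000, 20000

v = np.zeros(n_beads)              # 1-D bead velocities, starting from rest
sigma = np.sqrt(2.0 * gamma * kB * T / m * dt)   # noise strength per step

for _ in range(n_steps):
    force = 0.0                    # conservative forces omitted in this sketch
    v += (force / m - gamma * v) * dt + sigma * rng.normal(size=n_beads)

# The kinetic temperature should fluctuate around the 300 K target.
print("kinetic temperature: %.0f K" % (m * np.mean(v**2) / kB))
```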

  19. Simulated convective systems using a cloud resolving model: Impact of large-scale temperature and moisture forcing using observations and GEOS-3 reanalysis

    Science.gov (United States)

    Shie, C.-L.; Tao, W.-K.; Hou, A.; Lin, X.

    2006-01-01

    The GCE (Goddard Cumulus Ensemble) model, which has been developed and improved at NASA Goddard Space Flight Center over the past two decades, is considered one of the finest, state-of-the-art CRMs (Cloud Resolving Models) in the research community. As the chosen CRM for a NASA Interdisciplinary Science (IDS) Project, GCE has recently been successfully upgraded into an MPI (Message Passing Interface) version, with which great improvements have been achieved in computational efficiency, scalability, and portability. By using the large-scale temperature and moisture advective forcing, as well as the temperature, water vapor and wind fields obtained from TRMM (Tropical Rainfall Measuring Mission) field experiments such as SCSMEX (South China Sea Monsoon Experiment) and KWAJEX (Kwajalein Experiment), our recent 2-D and 3-D GCE simulations were able to capture detailed convective systems typical of the targeted (simulated) regions. The GEOS-3 [Goddard EOS (Earth Observing System) Version-3] reanalysis data have also been proposed and successfully implemented for use in the GCE long-term simulations (i.e., aiming at producing a massive simulated cloud dataset -- a Cloud Library) to compensate for the scarcity of real field experimental data in both time and space (location). Preliminary 2-D or 3-D pilot results using GEOS-3 data have generally shown good qualitative agreement (yet some quantitative differences) with the respective numerical results using the SCSMEX observations. The first objective of this paper is to ensure the GEOS-3 data quality by comparing the model results obtained from several pairs of simulations using the real observations and GEOS-3 reanalysis data. The different large-scale advective forcing obtained from these two kinds of sources (i.e., sounding observations and GEOS-3 reanalysis) has been considered a major critical factor in producing the various model results. The second objective of this paper is therefore to

  20. Spatial validation of large scale land surface models against monthly land surface temperature patterns using innovative performance metrics.

    Science.gov (United States)

    Koch, Julian; Siemann, Amanda; Stisen, Simon; Sheffield, Justin

    2016-04-01

    Land surface models (LSMs) are a key tool to enhance process understanding and to provide predictions of the terrestrial hydrosphere and its atmospheric coupling. Distributed LSMs predict hydrological states and fluxes, such as land surface temperature (LST) or actual evapotranspiration (aET), at each grid cell. LST observations are widely available through satellite remote sensing platforms that enable comprehensive spatial validations of LSMs. In spite of the availability of LST data, most validation studies rely on simple cell-to-cell comparisons and thus do not exploit true spatial pattern information. This study features two innovative spatial performance metrics, namely EOF and connectivity analysis, to validate LST patterns predicted by three LSMs (Mosaic, Noah, VIC) over the contiguous USA. The LST validation dataset is derived from global High-Resolution Infrared Radiometric Sounder (HIRS) retrievals for a 30-year period. The metrics are bias insensitive, which is an important feature in order to truly validate spatial patterns. The EOF analysis evaluates the spatial variability and pattern seasonality, and attests better performance to VIC in the warm months and to Mosaic and Noah in the cold months. Further, more than 75% of the LST variability can be captured by a single pattern that is strongly driven by air temperature. The connectivity analysis assesses the homogeneity and smoothness of patterns. The LSMs are most reliable at predicting cold LST patterns in the warm months and vice versa. Lastly, the coupling between aET and LST is investigated at flux tower sites and compared against LSMs to explain the identified LST shortcomings.
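
    The sketch below shows the standard EOF decomposition that underlies such an EOF-based performance metric: monthly anomaly maps are stacked into a (time x space) matrix and factored with an SVD, after which the leading spatial pattern and its explained variance can be compared between simulated and observed LST. Synthetic random fields stand in for the HIRS retrievals and LSM output; the study's exact metric is not reproduced.

```python
# Sketch of an EOF decomposition via SVD on synthetic anomaly fields.
import numpy as np

rng = np.random.default_rng(3)
n_months, n_cells = 360, 500                    # 30 years of monthly maps

# Synthetic anomalies: one dominant seasonal pattern plus noise (stand-in for LST).
pattern = rng.normal(size=n_cells)
amplitude = np.sin(2 * np.pi * np.arange(n_months) / 12.0)
anom = np.outer(amplitude, pattern) + 0.5 * rng.normal(size=(n_months, n_cells))

anom -= anom.mean(axis=0)                        # remove the temporal mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)

explained = s**2 / np.sum(s**2)
eof1, pc1 = vt[0], u[:, 0] * s[0]                # leading pattern and its time series
print("variance explained by EOF-1: %.0f%%" % (100 * explained[0]))
```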

  1. Large-scale collection and annotation of full-length enriched cDNAs from a model halophyte, Thellungiella halophila

    Directory of Open Access Journals (Sweden)

    Seki Motoaki

    2008-11-01

    Full Text Available Abstract Background Thellungiella halophila (also known as Thellungiella salsuginea) is a model halophyte with a small plant size, short life cycle, and small genome. It easily undergoes genetic transformation by the floral dipping method used with its close relative, Arabidopsis thaliana. Thellungiella genes exhibit high sequence identity (approximately 90% at the cDNA level) with Arabidopsis genes. Furthermore, Thellungiella not only shows tolerance to extreme salinity stress, but also to chilling, freezing, and ozone stress, supporting the use of Thellungiella as a good genomic resource in studies of abiotic stress tolerance. Results We constructed a full-length enriched Thellungiella (Shan Dong ecotype) cDNA library from various tissues and whole plants subjected to environmental stresses, including high salinity, chilling, freezing, and abscisic acid treatment. We randomly selected about 20,000 clones and sequenced them from both ends to obtain a total of 35,171 sequences. CAP3 software was used to assemble the sequences and cluster them into 9569 nonredundant cDNA groups. We named these cDNAs "RTFL" (RIKEN Thellungiella Full-Length cDNAs). Information on functional domains and Gene Ontology (GO) terms for the RTFL cDNAs was obtained using InterPro. The 8289 genes assigned to InterPro IDs were classified according to the GO terms using Plant GO Slim. Categorical comparison between the whole Arabidopsis genome and Thellungiella genes showing low identity to Arabidopsis genes revealed that the population of Thellungiella transport genes is approximately 1.5 times the size of the corresponding Arabidopsis genes. This suggests that these genes regulate a unique ion transportation system in Thellungiella. Conclusion As the number of Thellungiella halophila (Thellungiella salsuginea) expressed sequence tags (ESTs) was 9388 in July 2008, the number of ESTs has increased to approximately four times the original value as a result of this effort. Our

  2. Large-scale external validation and comparison of prognostic models: an application to chronic obstructive pulmonary disease.

    Science.gov (United States)

    Guerra, Beniamino; Haile, Sarah R; Lamprecht, Bernd; Ramírez, Ana S; Martinez-Camblor, Pablo; Kaiser, Bernhard; Alfageme, Inmaculada; Almagro, Pere; Casanova, Ciro; Esteban-González, Cristóbal; Soler-Cataluña, Juan J; de-Torres, Juan P; Miravitlles, Marc; Celli, Bartolome R; Marin, Jose M; Ter Riet, Gerben; Sobradillo, Patricia; Lange, Peter; Garcia-Aymerich, Judith; Antó, Josep M; Turner, Alice M; Han, Meilan K; Langhammer, Arnulf; Leivseth, Linda; Bakke, Per; Johannessen, Ane; Oga, Toru; Cosio, Borja; Ancochea-Bermúdez, Julio; Echazarreta, Andres; Roche, Nicolas; Burgel, Pierre-Régis; Sin, Don D; Soriano, Joan B; Puhan, Milo A

    2018-03-02

    External validations and comparisons of prognostic models or scores are a prerequisite for their use in routine clinical care but are lacking in most medical fields including chronic obstructive pulmonary disease (COPD). Our aim was to externally validate and concurrently compare prognostic scores for 3-year all-cause mortality in mostly multimorbid patients with COPD. We relied on 24 cohort studies of the COPD Cohorts Collaborative International Assessment consortium, corresponding to primary, secondary, and tertiary care in Europe, the Americas, and Japan. These studies include globally 15,762 patients with COPD (1871 deaths and 42,203 person years of follow-up). We used network meta-analysis adapted to multiple score comparison (MSC), following a frequentist two-stage approach; thus, we were able to compare all scores in a single analytical framework accounting for correlations among scores within cohorts. We assessed transitivity, heterogeneity, and inconsistency and provided a performance ranking of the prognostic scores. Depending on data availability, between two and nine prognostic scores could be calculated for each cohort. The BODE score (body mass index, airflow obstruction, dyspnea, and exercise capacity) had a median area under the curve (AUC) of 0.679 [1st quartile-3rd quartile = 0.655-0.733] across cohorts. The ADO score (age, dyspnea, and airflow obstruction) showed the best performance for predicting mortality (difference AUC(ADO) - AUC(BODE) = 0.015 [95% confidence interval (CI) = -0.002 to 0.032]; p = 0.08), followed by the updated BODE (AUC(updated BODE) - AUC(BODE) = 0.008 [95% CI = -0.005 to +0.022]; p = 0.23). The assumption of transitivity was not violated. Heterogeneity across direct comparisons was small, and we did not identify any local or global inconsistency. Our analyses showed best discriminatory performance for the ADO and updated BODE scores in patients with COPD. A limitation to be addressed in future studies is the extension of MSC

  3. Large-Scale Control of the Probability Distribution Function of Precipitation over the Continental US in Observations and Models, in the Current and Future Climate

    Science.gov (United States)

    Straus, D. M.

    2016-12-01

    The goals of this research are to: (a) identify features of the probability distribution function (pdf) of pentad precipitation over the continental US (CONUS) that are controlled by the configuration of the large-scale fields, including both tails of the pdf, hence droughts and floods, and the overall shape of the pdf, e.g. skewness and kurtosis; (b) estimate the changes in the properties of the pdf controlled by the large-scale circulation in a future climate. We first describe the significant dependence of the observed precipitation pdf on circulation regimes over CONUS. The regime states, and the number of regimes, are obtained by a method that assures a high degree of significance, and a high degree of pattern correlation between the states in a regime and its average. The regime-conditioned pdfs yield information on time scales from intra-seasonal to inter-annual. We then apply this method to atmospheric simulations run with the EC-Earth version 3 model for historical sea-surface temperatures (SST) and future (RCP8.5 CMIP5 scenario) estimates of SST, at resolutions T255 and T799, to understand what dynamically controlled changes in the precipitation pdf can be expected in a future climate.

  4. Using memory-efficient algorithm for large-scale time-domain modeling of surface plasmon polaritons propagation in organic light emitting diodes

    Science.gov (United States)

    Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2017-10-01

    We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.
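
    The paper's contribution is the memory locality of the DiamondTorre/LRnLA traversal on GPUs, which is not reproduced here. The sketch below is only the textbook 1-D FDTD (Yee) update that such traversals reorder, shown to make the underlying scheme concrete; grid size, source, and units are illustrative (normalized fields, Courant number of 1).

```python
# Sketch of a bare-bones 1-D FDTD (Yee) update loop in normalized units.
import numpy as np

nx, n_steps = 400, 600
ez = np.zeros(nx)          # electric field
hy = np.zeros(nx - 1)      # magnetic field on the staggered (Yee) grid

for t in range(n_steps):
    hy += ez[1:] - ez[:-1]                 # update H from the curl of E
    ez[1:-1] += hy[1:] - hy[:-1]           # update E from the curl of H
    ez[nx // 4] += np.exp(-((t - 60) / 20.0) ** 2)   # soft Gaussian source

print("peak |Ez| after %d steps: %.3f" % (n_steps, np.abs(ez).max()))
```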

  5. An Evaluation of the Large-Scale Implementation of Ocean Thermal Energy Conversion (OTEC) Using an Ocean General Circulation Model with Low-Complexity Atmospheric Feedback Effects

    Directory of Open Access Journals (Sweden)

    Yanli Jia

    2018-01-01

    Full Text Available Previous investigations of the large-scale deployment of Ocean Thermal Energy Conversion (OTEC) systems are extended by allowing some atmospheric feedback in an ocean general circulation model. A modified ocean-atmosphere thermal boundary condition is used where relaxation corresponds to atmospheric longwave radiation to space, and an additional term expresses horizontal atmospheric transport. This produces lower steady-state OTEC power maxima (8 to 10.2 TW instead of 14.1 TW for global OTEC scenarios, and 7.2 to 9.3 TW instead of 11.9 TW for OTEC implementation within 100 km of coastlines). When power production peaks, power intensity remains practically unchanged, at 0.2 TW per Sverdrup of OTEC deep cold seawater, suggesting a similar degradation of the OTEC thermal resource. Large-scale environmental effects include surface cooling in low latitudes and warming elsewhere, with a net heat intake within the water column. These changes develop rapidly from the propagation of Kelvin and Rossby waves, and ocean current advection. Two deep circulation cells are generated in the Atlantic and Indo-Pacific basins. The Atlantic Meridional Overturning Circulation (AMOC) is reinforced while an AMOC-like feature appears in the North Pacific, with deep convective winter events at high latitudes. Transport between the Indo-Pacific and the Southern Ocean is strengthened, with impacts on the Atlantic via the Antarctic Circumpolar Current (ACC).

  6. Using a Large-scale Neural Model of Cortical Object Processing to Investigate the Neural Substrate for Managing Multiple Items in Short-term Memory.

    Science.gov (United States)

    Liu, Qin; Ulloa, Antonio; Horwitz, Barry

    2017-11-01

    Many cognitive and computational models have been proposed to help understand working memory. In this article, we present a simulation study of cortical processing of visual objects during several working memory tasks using an extended version of a previously constructed large-scale neural model [Tagamets, M. A., & Horwitz, B. Integrating electrophysiological and anatomical experimental data to create a large-scale model that simulates a delayed match-to-sample human brain imaging study. Cerebral Cortex, 8, 310-320, 1998]. The original model consisted of arrays of Wilson-Cowan-type neuronal populations representing primary and secondary visual cortices, inferotemporal (IT) cortex, and pFC. We added a module representing the entorhinal cortex, which functions as a gating module. We successfully implemented multiple working memory tasks using the same model and produced neuronal patterns in visual cortex, IT cortex, and pFC that match experimental findings. These working memory tasks can include distractor stimuli or can require that multiple items be retained in mind during a delay period (Sternberg's task). Besides electrophysiology data and behavioral data, we also generated fMRI BOLD time series from our simulation. Our results support the involvement of IT cortex in working memory maintenance and suggest the cortical architecture underlying the neural mechanisms mediating particular working memory tasks. Furthermore, we noticed that, during simulations of memorizing a list of objects, the first and last items in the sequence were recalled best, which may implicate the neural mechanism behind this important psychological effect (i.e., the primacy and recency effect).
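
    The sketch below shows only the generic form of a single Wilson-Cowan excitatory-inhibitory unit of the kind such cortical arrays are built from: dE/dt = (-E + S(wEE·E - wEI·I + P))/tau and dI/dt = (-I + S(wIE·E - wII·I))/tau, with a sigmoid S. The coupling weights, time constant, and stimulus are illustrative assumptions, not the parameters of the Tagamets-Horwitz model.

```python
# Sketch of a single Wilson-Cowan E-I unit integrated with forward Euler.
import numpy as np

def sigmoid(x, gain=4.0, thresh=0.5):
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

wEE, wEI, wIE, wII = 1.5, 1.0, 1.2, 0.3     # illustrative coupling weights
tau, dt = 10.0, 0.1                         # time constant and step [ms]

E, I = 0.1, 0.1
trace = []
for step in range(3000):
    P = 0.6 if 500 <= step < 1500 else 0.0  # external stimulus pulse
    dE = (-E + sigmoid(wEE * E - wEI * I + P)) / tau
    dI = (-I + sigmoid(wIE * E - wII * I)) / tau
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)

print("excitatory activity: rest %.2f, during stimulus %.2f" %
      (trace[400], max(trace[500:1500])))
```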

  7. Why we need a large-scale open metadata initiative in health informatics - a vision paper on open data models for clinical phenotypes.

    Science.gov (United States)

    Dugas, Martin

    2013-01-01

    Clinical phenotypes are very complex and not well described. For instance, more than 100,000 biomedical concepts are needed to describe clinical properties of patients. At present, information systems dealing with clinical phenotype data are based on secret, heterogeneous and incompatible data models. This is the root cause for the well-known grand challenge of semantic interoperability in healthcare: data exchange and analysis of medical information systems have major limitations. This problem slows down medical progress and wastes the time of health care professionals. A large-scale open metadata initiative can foster exchange, discussion and consensus regarding data models for clinical phenotypes. This would be an important contribution to improve information systems in healthcare and to solve the grand challenge of semantic interoperability.

  8. Large-scale monitoring of shorebird populations using count data and N-mixture models: Black Oystercatcher (Haematopus bachmani) surveys by land and sea

    Science.gov (United States)

    Lyons, James E.; Andrew, Royle J.; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.

    2012-01-01

    Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
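
    The sketch below shows the structure of the N-mixture likelihood the analysis is built on: at each site the true abundance N is Poisson(lambda) and each repeated count is Binomial(N, p), so integrating N out gives a likelihood in (lambda, p). The paper fits this model in a Bayesian framework with survey covariates; here a crude grid search over a simple illustrative dataset stands in, and the counts, truncation bound, and grids are assumptions, not the authors' data or code.

```python
# Sketch of an N-mixture likelihood (Poisson abundance, binomial detection).
import numpy as np
from scipy.stats import poisson, binom

# Repeated counts (rows = sites, columns = visits); illustrative numbers only.
counts = np.array([[3, 5, 4],
                   [0, 1, 1],
                   [6, 4, 7],
                   [2, 2, 3]])
N_MAX = 60          # truncation point for the sum over true abundance N


def site_marginal(y_site, lam, p):
    """P(y_site | lambda, p) with N ~ Poisson(lambda) summed out."""
    n = np.arange(N_MAX + 1)
    prior = poisson.pmf(n, lam)                                   # P(N = n)
    detect = np.prod([binom.pmf(y, n, p) for y in y_site], axis=0)
    return np.sum(prior * detect)


def log_likelihood(lam, p):
    return sum(np.log(site_marginal(y, lam, p)) for y in counts)


# Crude grid search for the maximum-likelihood estimates.
lams, ps = np.linspace(1, 15, 60), np.linspace(0.1, 0.95, 60)
ll = np.array([[log_likelihood(l, p) for p in ps] for l in lams])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print(f"grid MLE: lambda ~ {lams[i]:.1f}, detection p ~ {ps[j]:.2f}")
```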

  9. Modeling large-scale adoption of intercropping as a sustainable agricultural practice for food security and air pollution mitigation around the globe

    Science.gov (United States)

    Fung, K. M.; Tai, A. P. K.; Yong, T.; Liu, X.

    2017-12-01

    The fast-growing world population will impose a severe pressure on our current global food production system. Meanwhile, boosting crop yield by increasing fertilizer use comes with a cascade of environmental problems including air pollution. In China, agricultural activities contribute 95% of total ammonia emissions. Such emissions contribute to 20% of the fine particulate matter (PM2.5) formed in the downwind regions, which imposes severe health risks on citizens. Field studies of soybean intercropping have demonstrated its potential to enhance crop yield, lower fertilizer use, and thus reduce ammonia emissions by taking advantage of legume nitrogen fixation and enabling mutualistic crop-crop interactions between legumes and non-legume crops. In our work, we revise the process-based biogeochemical model, DeNitrification-DeComposition (DNDC), to capture the belowground interactions of intercropped crops and show that with intercropping, only 58% of fertilizer is required to yield the same maize production as its monoculture counterpart, corresponding to a reduction in ammonia emissions of 43% over China. Using the GEOS-Chem global 3-D chemical transport model, we estimate that such ammonia reduction can lessen downwind inorganic PM2.5 by up to 2.1% (equivalent to 1.3 μg m-3), which saves up to US$1.5 billion in air pollution-related health costs in China each year. With the more enhanced crop growth and land management algorithms in the Community Land Model (CLM), we also implement into CLM the new parametrization of the belowground interactions to simulate large-scale adoption of intercropping around the globe and study its beneficial effects on food production, fertilizer usage and ammonia reduction. This study can serve as a scientific basis for policy makers and intergovernmental organizations to consider promoting large-scale intercropping to maintain a sustainable global food supply to secure both future crop production and air quality.

  10. The health system and population health implications of large-scale diabetes screening in India: a microsimulation model of alternative approaches.

    Directory of Open Access Journals (Sweden)

    Sanjay Basu

    2015-05-01

    Full Text Available Like a growing number of rapidly developing countries, India has begun to develop a system for large-scale community-based screening for diabetes. We sought to identify the implications of using alternative screening instruments to detect people with undiagnosed type 2 diabetes among diverse populations across India. We developed and validated a microsimulation model that incorporated data from 58 studies from across the country into a nationally representative sample of Indians aged 25-65 y old. We estimated the diagnostic and health system implications of three major survey-based screening instruments and random glucometer-based screening. Of the 567 million Indians eligible for screening, depending on which of four screening approaches is utilized, between 158 and 306 million would be expected to screen as "high risk" for type 2 diabetes, and be referred for confirmatory testing. Between 26 million and 37 million of these people would be expected to meet international diagnostic criteria for diabetes, but between 126 million and 273 million would be "false positives." The ratio of false positives to true positives varied from 3.9 (when using random glucose screening) to 8.2 (when using a survey-based screening instrument) in our model. The cost per case found would be expected to be from US$5.28 (when using random glucose screening) to US$17.06 (when using a survey-based screening instrument), representing a total cost of between US$169 and US$567 million. The major limitation of our analysis is its dependence on published cohort studies that are unlikely fully to capture the poorest and most rural areas of the country. Because these areas are thought to have the lowest diabetes prevalence, this may result in overestimation of the efficacy and health benefits of screening. Large-scale community-based screening is anticipated to produce a large number of false-positive results, particularly if using currently available survey-based screening
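
    A small worked example of the arithmetic behind the reported figures is given below, using round numbers taken approximately from the abstract's ranges; these are reconstructions for illustration only, not the microsimulation model's exact outputs.

```python
# Sketch of the screening arithmetic using approximate figures from the abstract.
eligible = 567e6                      # Indians aged 25-65 eligible for screening

# scenario: (screened "high risk", true positives, total cost in US$) -- approximate
scenarios = {
    "random glucometer screening": (158e6, 32e6, 169e6),
    "survey-based instrument":     (306e6, 33e6, 567e6),
}

for name, (positives, true_pos, cost) in scenarios.items():
    false_pos = positives - true_pos
    print(f"{name}: {positives / eligible:.0%} screen positive, "
          f"{false_pos / true_pos:.1f} false per true positive, "
          f"US${cost / true_pos:.2f} per case found")
```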

  11. An aggregate model of grid-connected, large-scale, offshore wind farm for power stability investigations-importance of windmill mechanical system

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Knudsen, H.

    2002-01-01

    Because the shaft system gives a soft coupling between the rotating wind turbine and the induction generator, the large-scale wind farm cannot always be reduced to a one-machine equivalent, and the use of multi-machine equivalents will be necessary to reach accurate investigation results... This will be the case with irregular wind distribution over the wind farm area. The torsion mode of the shaft systems of large wind turbines is commonly in the range of 1-2 Hz and close to typical values of the electric power grid eigenfrequencies, which is why there is a risk of oscillation between the wind turbines... and the entire network. All these phenomena are different compared to previous experience with modelling of conventional power plants with synchronous generators and stiff shaft systems...

  12. The impact of changes in parameterizations of surface drag and vertical diffusion on the large-scale circulation in the Community Atmosphere Model (CAM5)

    Science.gov (United States)

    Lindvall, Jenny; Svensson, Gunilla; Caballero, Rodrigo

    2017-06-01

    Simulations with the Community Atmosphere Model version 5 (CAM5) are used to analyze the sensitivity of the large-scale circulation to changes in parameterizations of orographic surface drag and vertical diffusion. Many GCMs and NWP models use enhanced turbulent mixing in stable conditions to improve simulations, while CAM5 cuts off all turbulence at high stabilities and instead employs a strong orographic surface stress parameterization, known as turbulent mountain stress (TMS). TMS completely dominates the surface stress over land and reduces the near-surface wind speeds compared to simulations without TMS. It is found that TMS is generally beneficial for the large-scale circulation as it improves zonal wind speeds, Arctic sea level pressure and zonal anomalies of the 500-hPa stream function, compared to ERA-Interim. It also alleviates atmospheric blocking frequency biases in the Northern Hemisphere. Using a scheme that instead allows for a modest increase of turbulent diffusion at higher stabilities only in the planetary boundary layer (PBL) appears, in some respects, to have a similar, although much smaller, beneficial effect to TMS. Enhanced mixing throughout the atmospheric column, however, degrades the CAM5 simulation. Evaluating the simulations in comparison with detailed measurements at two locations reveals that TMS is detrimental for the PBL at the flat grassland ARM Southern Great Plains site, giving too strong wind turning and too deep PBLs. At the Sodankylä forest site, the effect of TMS is smaller due to the larger local vegetation roughness. At both sites, all simulations substantially overestimate the boundary layer ageostrophic flow.
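
    To make the contrast between a stability cutoff and enhanced stable mixing concrete, the sketch below compares two generic stability functions for turbulent diffusion: a sharp cutoff at a critical Richardson number and a "long-tail" form that retains mixing at high stability. Both forms are textbook-style illustrations, not the actual CAM5 or TMS code.

```python
import numpy as np

def f_cutoff(Ri, Ri_c=0.25):
    """Sharp cutoff: turbulent mixing vanishes above a critical Richardson number."""
    return np.where(Ri < Ri_c, (1.0 - Ri / Ri_c) ** 2, 0.0)

def f_long_tail(Ri):
    """Generic 'long-tail' form: mixing decays with stability but never shuts off."""
    return 1.0 / (1.0 + 10.0 * Ri)

for Ri in np.linspace(0.0, 1.0, 6):
    print(f"Ri = {Ri:4.2f}   cutoff = {float(f_cutoff(Ri)):5.3f}   long-tail = {float(f_long_tail(Ri)):5.3f}")
```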

  13. A pioneering healthcare model applying large-scale production concepts: Principles and performance after more than 11,000 transplants at Hospital do Rim

    Directory of Open Access Journals (Sweden)

    José Medina Pestana

    The kidney transplant program at Hospital do Rim (hrim) is a unique healthcare model that applies the same principles of repetition of processes used in industrial production. This model, devised by Frederick Taylor, is founded on principles of scientific management that involve planning, rational execution of work, and distribution of responsibilities. The expected result is increased efficiency, improvement of results and optimization of resources. This model, almost completely subsidized by the Unified Health System (SUS, in the Portuguese acronym), has been used at the hrim in more than 11,000 transplants over the last 18 years. The hrim model consists of eight interconnected modules: organ procurement organization, preparation for the transplant, admission for transplant, surgical procedure, post-operative period, outpatient clinic, support units, and coordination and quality control. The flow of medical activities enables organized and systematic care of all patients. The improvement of the activities in each module is constant, with full monitoring of various administrative, health care, and performance indicators. The continuous improvement in clinical results confirms the efficiency of the program. Between 1998 and 2015, an increase was noted in graft survival (77.4 vs. 90.4%, p<0.001) and patient survival (90.5 vs. 95.1%, p=0.001). The high productivity, efficiency, and progressive improvement of the results obtained with this model suggest that it could be applied to other therapeutic areas that require large-scale care, preserving the humanistic characteristic of providing health care activity.

  14. Coupled three-layer model for turbulent flow over large-scale roughness: On the hydrodynamics of boulder-bed streams

    Science.gov (United States)

    Pan, Wen-hao; Liu, Shi-he; Huang, Li

    2018-02-01

    This study developed a three-layer velocity model for turbulent flow over large-scale roughness. Through theoretical analysis, the model couples surface and subsurface flow. Flume experiments with a flat cobble bed were conducted to examine the theoretical model. Results show that both the turbulent flow field and the total flow characteristics differ considerably from those of low-gradient flow over microscale roughness. The velocity profile in a shallow stream converges to the logarithmic law away from the bed, while inflecting over the roughness layer towards the non-zero subsurface flow. The velocity fluctuations close to a cobble bed differ from those over a sand bed and show no sufficiently large peak velocity. The total flow energy loss deviates significantly from the 1/7 power-law equation when the relative flow depth is shallow. Both the coupled model and the experiments indicate a non-negligible subsurface flow that accounts for a considerable proportion of the total flow. By including the subsurface flow, the coupled model is able to predict a wider range of velocity profiles and total flow energy loss coefficients than existing equations.
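
    For reference, the logarithmic law that the measured profiles converge to away from the bed can be written in its generic rough-bed form (standard symbols, not values fitted in the cited study):

    \[
    \frac{u(z)}{u_*} = \frac{1}{\kappa}\,\ln\!\left(\frac{z - d}{z_0}\right),
    \]

    where \(u_*\) is the shear velocity, \(\kappa \approx 0.41\) the von Kármán constant, \(d\) a zero-plane displacement and \(z_0\) the roughness length. Over large-scale roughness such as cobbles or boulders the profile departs from this form near the bed and connects to the non-zero subsurface flow, which is the behaviour the three-layer model is designed to capture.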

  15. A pioneering healthcare model applying large-scale production concepts: Principles and performance after more than 11,000 transplants at Hospital do Rim.

    Science.gov (United States)

    Pestana, José Medina

    2016-10-01

    The kidney transplant program at Hospital do Rim (hrim) is a unique healthcare model that applies the same principles of repetition of processes used in industrial production. This model, devised by Frederick Taylor, is founded on principles of scientific management that involve planning, rational execution of work, and distribution of responsibilities. The expected result is increased efficiency, improvement of results and optimization of resources. This model, almost completely subsidized by the Unified Health System (SUS, in the Portuguese acronym), has been used at the hrim in more than 11,000 transplants over the last 18 years. The hrim model consists of eight interconnected modules: organ procurement organization, preparation for the transplant, admission for transplant, surgical procedure, post-operative period, outpatient clinic, support units, and coordination and quality control. The flow of medical activities enables organized and systematic care of all patients. The improvement of the activities in each module is constant, with full monitoring of various administrative, health care, and performance indicators. The continuous improvement in clinical results confirms the efficiency of the program. Between 1998 and 2015, an increase was noted in graft survival (77.4 vs. 90.4%, p<0.001) and patient survival (90.5 vs. 95.1%, p=0.001). The high productivity, efficiency, and progressive improvement of the results obtained with this model suggest that it could be applied to other therapeutic areas that require large-scale care, preserving the humanistic characteristic of providing health care activity.

  16. A measurement strategy and an error-compensation model for the on-machine laser measurement of large-scale free-form surfaces

    International Nuclear Information System (INIS)

    Li, Bin; Li, Feng; Liu, Hongqi; Cai, Hui; Mao, Xinyong; Peng, Fangyu

    2014-01-01

    This study presents a novel measurement strategy and an error-compensation model for the measurement of large-scale free-form surfaces in on-machine laser measurement systems. To improve the measurement accuracy, the effects of the scan depth, surface roughness, incident angle and azimuth angle on the measurement results were investigated experimentally, and a practical measurement strategy considering the position and orientation of the sensor is presented. Also, a semi-quantitative model based on geometrical optics is proposed to compensate for the measurement error associated with the incident angle. The normal vector of the measurement point is determined using a cross-curve method from the acquired surface data. Then, the azimuth angle and incident angle are calculated to inform the measurement strategy and error-compensation model, respectively. The measurement strategy and error-compensation model are verified through the measurement of a large propeller blade on a heavy machine tool in a factory environment. The results demonstrate that the strategy and the model are effective in increasing the measurement accuracy. (paper)
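
    Since the error-compensation model takes the incident angle as its input, a minimal sketch of how that angle follows from the locally estimated surface normal and the laser beam direction is given below; the vectors are hypothetical examples, and the normal-estimation (cross-curve) step itself is not reproduced here.

```python
import numpy as np

def incident_angle_deg(surface_normal, beam_direction):
    """Angle between the incoming laser beam and the local surface normal.

    Both arguments are 3-vectors; the beam direction points from the sensor
    towards the surface. Values used below are illustrative only.
    """
    n = surface_normal / np.linalg.norm(surface_normal)
    b = beam_direction / np.linalg.norm(beam_direction)
    cos_theta = np.clip(abs(np.dot(n, b)), 0.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Hypothetical case: surface tilted 30 degrees about the y-axis, beam fired along -z
tilt = np.radians(30.0)
normal = np.array([np.sin(tilt), 0.0, np.cos(tilt)])
print(incident_angle_deg(normal, np.array([0.0, 0.0, -1.0])))  # ~30.0 degrees
```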

  17. Observed and CMIP5 modeled influence of large-scale circulation on summer precipitation and drought in the South-Central United States

    Science.gov (United States)

    Ryu, Jung-Hee; Hayhoe, Katharine

    2017-12-01

    Annual precipitation in the largely agricultural South-Central United States is characterized by a primary wet season in May and June, a mid-summer dry period in July and August, and a second precipitation peak in September and October. Of the 22 CMIP5 global climate models with sufficient output available, 16 are able to reproduce this bimodal distribution (we refer to these as "BM" models), while 6 have trouble simulating the mid-summer dry period, instead producing an extended wet season ("EW" models). In BM models, the timing and amplitude of the mid-summer westward extension of the North Atlantic Subtropical High (NASH) are realistic, while the magnitude of the Great Plains Lower Level Jet (GPLLJ) tends to be overestimated, particularly in July. In EW models, temporal variations and geophysical locations of the NASH and GPLLJ appear reasonable compared to reanalysis but their magnitudes are too weak to suppress mid-summer precipitation. During warm-season droughts, however, both groups of models reproduce the observed tendency towards a stronger NASH that remains over the region through September, and an intensification and northward extension of the GPLLJ. Similarly, future simulations from both model groups under a +1 to +3 °C transient increase in global mean temperature show decreases in summer precipitation concurrent with an enhanced NASH and an intensified GPLLJ, though models differ regarding the months in which these decreases are projected to occur: early summer in the BM models, and late summer in the EW models. Overall, these results suggest that projected future decreases in summer precipitation over the South-Central region appear to be closely related to anomalous patterns of large-scale circulation already observed and modeled during historical dry years, patterns that are consistently reproduced by CMIP5 models.

  18. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    Science.gov (United States)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of a large amount of data (of multi-terabyte order) related to the output of several climate model simulations, as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large-scale distributed testbed across the EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study relates to: (i) multi-model data analysis inter-comparison challenges; (ii) addressed on CMIP5 data; and (iii) which are made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final

  19. Large-scale microfluidics providing high-resolution and high-throughput screening of Caenorhabditis elegans poly-glutamine aggregation model

    Science.gov (United States)

    Mondal, Sudip; Hegarty, Evan; Martin, Chris; Gökçe, Sertan Kutal; Ghorashian, Navid; Ben-Yakar, Adela

    2016-10-01

    Next generation drug screening could benefit greatly from in vivo studies, using small animal models such as Caenorhabditis elegans for hit identification and lead optimization. Current in vivo assays can operate either at low throughput with high resolution or with low resolution at high throughput. To enable both high-throughput and high-resolution imaging of C. elegans, we developed an automated microfluidic platform. This platform can image 15 z-stacks of ~4,000 C. elegans from 96 different populations using a large-scale chip with a micron resolution in 16 min. Using this platform, we screened ~100,000 animals of the poly-glutamine aggregation model on 25 chips. We tested the efficacy of ~1,000 FDA-approved drugs in improving the aggregation phenotype of the model and identified four confirmed hits. This robust platform now enables high-content screening of various C. elegans disease models at the speed and cost of in vitro cell-based assays.

  20. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, show how such ...

  1. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  2. Large-scale atmospheric circulation biases and changes in global climate model simulations and their importance for climate change in Central Europe

    Directory of Open Access Journals (Sweden)

    A. P. van Ulden

    2006-01-01

    The quality of global sea level pressure patterns has been assessed for simulations by 23 coupled climate models. Most models showed high pattern correlations. With respect to the explained spatial variance, many models showed serious large-scale deficiencies, especially at mid-latitudes. Five models performed well at all latitudes and for each month of the year. Three models had reasonable skill. We selected the five models with the best pressure patterns for a more detailed assessment of their simulations of the climate in Central Europe. We analysed observations and simulations of monthly mean geostrophic flow indices and of monthly mean temperature and precipitation. We used three geostrophic flow indices: the west component and south component of the geostrophic wind at the surface and the geostrophic vorticity. We found that circulation biases were important, and affected precipitation in particular. Apart from these circulation biases, the models showed other biases in temperature and precipitation, which were for some models larger than the circulation-induced biases. For the 21st century, the five models simulated quite different changes in circulation, precipitation and temperature. Precipitation changes appear to be primarily caused by circulation changes. Since the models show widely different circulation changes, especially in late summer, precipitation changes vary widely between the models as well. Some models simulate severe drying in late summer, while one model simulates significant precipitation increases in late summer. With respect to mean temperature, the circulation changes were important, but not dominant. However, changes in the distribution of monthly mean temperatures do show large indirect influences of circulation changes. Especially in late summer, two models simulate very strong warming of warm months, which can be attributed to severe summer drying in the simulations by these models. The models differ also
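
    The geostrophic flow indices referred to above follow from the sea level pressure field through the standard geostrophic relations. The sketch below shows that construction by finite differences on a small grid; the pressure values, grid spacing and Coriolis parameter are made-up numbers for illustration, not data from the study.

```python
import numpy as np

# Geostrophic wind: u_g = -(1/(rho*f)) * dp/dy,  v_g = (1/(rho*f)) * dp/dx
rho = 1.25       # air density [kg m^-3]
f = 1.1e-4       # Coriolis parameter at roughly 50 degrees N [s^-1]
dx = dy = 2.5e5  # grid spacing [m]; rows run south->north, columns west->east

# Hypothetical sea level pressure field [Pa]
p = np.array([[101400.0, 101500.0, 101600.0],
              [101100.0, 101200.0, 101300.0],
              [100800.0, 100900.0, 101000.0]])

dpdy, dpdx = np.gradient(p, dy, dx)        # gradients along rows (y) and columns (x)
u_g = -dpdy / (rho * f)                    # west-east (zonal) component
v_g = dpdx / (rho * f)                     # south-north (meridional) component
zeta_g = (np.gradient(v_g, dx, axis=1)     # geostrophic vorticity
          - np.gradient(u_g, dy, axis=0))
print(u_g[1, 1], v_g[1, 1], zeta_g[1, 1])
```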

  3. Large-scale drivers of malaria and priority areas for prevention and control in the Brazilian Amazon region using a novel multi-pathogen geospatial model.

    Science.gov (United States)

    Valle, Denis; Lima, Joanna M Tucker

    2014-11-20

    Most of the malaria burden in the Americas is concentrated in the Brazilian Amazon, but a detailed spatial characterization of malaria risk has yet to be undertaken. Utilizing 2004-2008 malaria incidence data collected from six Brazilian Amazon states, large-scale spatial patterns of malaria risk were characterized with a novel Bayesian multi-pathogen geospatial model. Data included 2.4 million malaria cases spread across 3.6 million sq km. Remotely sensed variables (deforestation rate, forest cover, rainfall, dry season length, and proximity to large water bodies), socio-economic variables (rural population size, income, literacy rate, mortality rate for children under age five, and migration patterns), and GIS variables (proximity to roads, hydro-electric dams and gold mining operations) were incorporated as covariates. Borrowing information across pathogens allowed for better spatial predictions of malaria caused by Plasmodium falciparum, as evidenced by a ten-fold cross-validation. Malaria incidence for both Plasmodium vivax and P. falciparum tended to be higher in areas with greater forest cover. Proximity to gold mining operations was another important risk factor, corroborated by a positive association between migration rates and malaria incidence. Finally, areas with a longer dry season and areas with higher average rural income tended to have higher malaria risk. Risk maps reveal striking spatial heterogeneity in malaria risk across the region, yet these mean disease risk surface maps can be misleading if uncertainty is ignored. By combining mean spatial predictions with their associated uncertainty, several sites were consistently classified as hotspots, suggesting their importance as priority areas for malaria prevention and control. This article provides several contributions. From a methodological perspective, the benefits of jointly modelling multiple pathogens for spatial predictions were illustrated. In addition, maps of mean disease risk were

  4. Genetic diversity and ecological niche modelling of wild barley: refugia, large-scale post-LGM range expansion and limited mid-future climate threats?

    Science.gov (United States)

    Russell, Joanne; van Zonneveld, Maarten; Dawson, Ian K; Booth, Allan; Waugh, Robbie; Steffenson, Brian

    2014-01-01

    Describing genetic diversity in wild barley (Hordeum vulgare ssp. spontaneum) in geographic and environmental space in the context of current, past and potential future climates is important for conservation and for breeding the domesticated crop (Hordeum vulgare ssp. vulgare). Spatial genetic diversity in wild barley was revealed by both nuclear- (2,505 SNP, 24 nSSR) and chloroplast-derived (5 cpSSR) markers in 256 widely-sampled geo-referenced accessions. Results were compared with MaxEnt-modelled geographic distributions under current, past (Last Glacial Maximum, LGM) and mid-term future (anthropogenic scenario A2, the 2080s) climates. Comparisons suggest large-scale post-LGM range expansion in Central Asia and relatively small, but statistically significant, reductions in range-wide genetic diversity under future climate. Our analyses support the utility of ecological niche modelling for locating genetic diversity hotspots and determine priority geographic areas for wild barley conservation under anthropogenic climate change. Similar research on other cereal crop progenitors could play an important role in tailoring conservation and crop improvement strategies to support future human food security.

  5. Genetic diversity and ecological niche modelling of wild barley: refugia, large-scale post-LGM range expansion and limited mid-future climate threats?

    Directory of Open Access Journals (Sweden)

    Joanne Russell

    Describing genetic diversity in wild barley (Hordeum vulgare ssp. spontaneum) in geographic and environmental space in the context of current, past and potential future climates is important for conservation and for breeding the domesticated crop (Hordeum vulgare ssp. vulgare). Spatial genetic diversity in wild barley was revealed by both nuclear- (2,505 SNP, 24 nSSR) and chloroplast-derived (5 cpSSR) markers in 256 widely-sampled geo-referenced accessions. Results were compared with MaxEnt-modelled geographic distributions under current, past (Last Glacial Maximum, LGM) and mid-term future (anthropogenic scenario A2, the 2080s) climates. Comparisons suggest large-scale post-LGM range expansion in Central Asia and relatively small, but statistically significant, reductions in range-wide genetic diversity under future climate. Our analyses support the utility of ecological niche modelling for locating genetic diversity hotspots and determine priority geographic areas for wild barley conservation under anthropogenic climate change. Similar research on other cereal crop progenitors could play an important role in tailoring conservation and crop improvement strategies to support future human food security.

  6. Study of Zero-Inflated Regression Models in a Large-Scale Population Survey of Sub-Health Status and Its Influencing Factors.

    Science.gov (United States)

    Xu, Tao; Zhu, Guang-Jin; Han, Shao-Mei

    2017-12-30

    Objective: Sub-health status has progressively gained more attention from both medical professionals and the public. Treating the number of sub-health symptoms as count data rather than dichotomous data helps to completely and accurately analyze findings in the sub-healthy population. This study aims to compare the goodness of fit of count outcome models in order to identify the optimum model for sub-health studies. Methods: The sample derived from a large-scale population survey on physiological and psychological constants conducted from 2007 to 2011 in 4 provinces and 2 autonomous regions in China. We constructed four count outcome models using SAS: a Poisson model, a negative binomial (NB) model, a zero-inflated Poisson (ZIP) model and a zero-inflated negative binomial (ZINB) model. The number of sub-health symptoms was used as the main outcome measure. The alpha dispersion parameter and the O test were used to identify over-dispersed data, and the Vuong test was used to evaluate excessive zero counts. The goodness of fit of the regression models was assessed using predictive probability curves and likelihood ratio test statistics. Results: Of all 78,307 respondents, 38.53% reported no sub-health symptoms. The mean number of sub-health symptoms was 2.98, and the standard deviation was 3.72. The O statistic in the over-dispersion test was 720.995; alpha was 0.618 (95% CI: 0.600-0.636) when comparing the ZINB and ZIP models; the Vuong test statistic Z was 45.487. These results indicated over-dispersion of the data and excessive zero counts in this sub-health study. The ZINB model had the largest log likelihood (-167,519), the smallest Akaike information criterion (335,112) and the smallest Bayesian information criterion (335,455), indicating the best goodness of fit. The predictive probabilities for most counts in the ZINB model fitted the observed counts best. The logit part of the ZINB model showed that age, sex, occupation, smoking, alcohol drinking, ethnicity and obesity
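
    For readers unfamiliar with the zero-inflated negative binomial form that wins the comparison above, the sketch below writes out its probability mass and evaluates a joint log-likelihood with scipy. The parameter values are illustrative stand-ins chosen near the summary statistics quoted above, not the fitted survey estimates, and the covariate (logit and count regression) parts of the full model are omitted.

```python
import numpy as np
from scipy import stats

def zinb_logpmf(y, pi, mu, alpha):
    """Zero-inflated negative binomial log-probability.

    pi    : probability of a structural zero (zero-inflation part)
    mu    : mean of the negative binomial count part
    alpha : dispersion parameter, so that variance = mu + alpha * mu**2
    """
    size = 1.0 / alpha          # scipy's n parameter
    prob = size / (size + mu)   # scipy's p parameter
    nb = stats.nbinom(size, prob)
    return np.where(
        y == 0,
        np.log(pi + (1.0 - pi) * nb.pmf(0)),  # zeros: structural or sampled
        np.log(1.0 - pi) + nb.logpmf(y),      # positive counts: NB part only
    )

counts = np.array([0, 0, 1, 3, 7])            # hypothetical symptom counts
print(zinb_logpmf(counts, pi=0.30, mu=2.98, alpha=0.62).sum())
```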

  7. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    ... in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and Zaha Hadid's Ordrupgård with its black, curved, smooth concrete surfaces. Furthermore, one can point to initiatives such as "Synlig beton" (visible concrete), which can be seen on the website www.synligbeton.dk, and Spæncom's aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project "Lasting large scale glazed concrete formwork," which I am working on at DTU, Department of Architectural Engineering, will be able to complement these. It is a project where I ... try to develop new aesthetic potentials for the concrete, at large scales that have not been seen before in the ceramic area. It is expected to result in new types of large-scale and very thin glazed concrete façades in building. If such are introduced in an architectural context as exposed surfaces ...

  8. Medial prefrontal and anterior insular connectivity in early schizophrenia and major depressive disorder: a resting functional MRI evaluation of large-scale brain network models

    Directory of Open Access Journals (Sweden)

    Jacob ePenner

    2016-03-01

    Anomalies in the medial prefrontal cortex, anterior insulae and the large-scale brain networks associated with them have been proposed to underlie the pathophysiology of schizophrenia and major depressive disorder (MDD). In this study, we examined the connectivity of the medial prefrontal cortices and anterior insulae in 24 healthy controls, 24 patients with schizophrenia, and 24 patients with MDD early in illness with seed-based, 3 Tesla resting state functional MRI analysis using Statistical Probability Mapping. As hypothesized, reduced connectivity was found between the medial prefrontal cortex and the dorsal anterior cingulate cortex and other nodes associated with directed effort in patients with schizophrenia compared to controls, while patients with MDD had reduced connectivity between the medial prefrontal cortex and ventral prefrontal emotional encoding regions compared to controls. Reduced connectivity was found between the anterior insulae and the medial prefrontal cortex in schizophrenia compared to controls but, contrary to some models, emotion processing regions failed to demonstrate increased connectivity with the medial prefrontal cortex in MDD compared to controls. Although not statistically significant after correction for multiple comparisons, patients with schizophrenia tended to demonstrate decreased connectivity between basal ganglia-thalamocortical regions and the medial prefrontal cortex compared to patients with MDD, which might be expected as these regions effect action. Results were interpreted to support anomalies in nodes associated with directed effort in schizophrenia and in nodes associated with the emotional encoding network in MDD compared to healthy controls.

  9. Hydrological projections under climate change in the near future by RegCM4 in Southern Africa using a large-scale hydrological model

    Science.gov (United States)

    Li, Lu; Diallo, Ismaïla; Xu, Chong-Yu; Stordal, Frode

    2015-09-01

    This study aims to provide model estimates of changes in hydrological elements, such as EvapoTranspiration (ET) and runoff, in Southern Africa in the near future until 2029. The climate change scenarios are projected by a high-resolution Regional Climate Model (RCM), RegCM4, which is the latest version of this model developed by the Abdus Salam International Centre for Theoretical Physics (ICTP). The hydrological projections are performed using a large-scale hydrological model (WASMOD-D), which has been tested and customized for this region prior to this study. The results reveal that (1) the projected temperature shows an increasing tendency over Southern Africa in the near future, especially eastward of 25°E, while the precipitation changes vary between months and sub-regions; (2) an increase in runoff (and ET) was found in the eastern part of Southern Africa, i.e. southern Mozambique and Malawi, while a decrease was estimated across the driest region in a wide area encompassing the Kalahari Desert, Namibia, the southwest of South Africa and Angola; (3) the strongest climate change signals are found over humid tropical areas, i.e. northern Angola, Malawi and the southern Democratic Republic of the Congo; and (4) large spatial and temporal variability of climate change signals is found in the near future over Southern Africa. This study presents the main results of work-package 2 (WP2) of the 'Socioeconomic Consequences of Climate Change in Sub-equatorial Africa (SoCoCA)' project, which is funded by the Research Council of Norway.

  10. 76 FR 51055 - In the Matter of Certain Inkjet Ink Cartridges with Printheads and Components Thereof; Notice of...

    Science.gov (United States)

    2011-08-17

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-723] In the Matter of Certain Inkjet Ink... importation of certain inkjet ink cartridges with printheads and components thereof by reason of infringement... importation of the accused inkjet ink cartridges with printheads and components thereof. Regarding...

  11. 75 FR 36442 - In the Matter of Certain Inkjet Ink Cartridges With Printheads and Components Thereof; Notice of...

    Science.gov (United States)

    2010-06-25

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-723] In the Matter of Certain Inkjet Ink... States after importation of certain inkjet ink cartridges with printheads and components thereof by... importation of certain inkjet ink cartridges with printheads or components thereof that infringe one or more...

  12. 75 FR 36677 - In the Matter of Certain Inkjet Ink Cartridges With Printheads and Components Thereof; Notice of...

    Science.gov (United States)

    2010-06-28

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-711] In the Matter of Certain Inkjet Ink Cartridges With Printheads and Components Thereof; Notice of a Commission Determination Not To Review an... importation of certain inkjet ink cartridges with printheads and components thereof by reason of infringement...

  13. Assimilating in situ and radar altimetry data into a large-scale hydrologic-hydrodynamic model for streamflow forecast in the Amazon

    Directory of Open Access Journals (Sweden)

    R. C. D. Paiva

    2013-07-01

    In this work, we introduce and evaluate a data assimilation framework for gauged and radar altimetry-based discharge and water levels applied to a large-scale hydrologic-hydrodynamic model for streamflow forecasts over the Amazon River basin. We used the process-based hydrological model MGB-IPH coupled with a river hydrodynamic module using a storage model for floodplains. The Ensemble Kalman Filter technique was used to assimilate information from hundreds of gauging and altimetry stations based on ENVISAT satellite data. Model state variable errors were generated by corrupting the precipitation forcing, assuming log-normally distributed, temporally and spatially correlated errors. The EnKF performed well when assimilating in situ discharge, improving model estimates at the assimilation sites (change in root-mean-squared error Δrms = −49%) and also transferring information to ungauged river reaches (Δrms = −16%). Altimetry data assimilation improves results, in terms of water levels (Δrms = −44%) and discharges (Δrms = −15%), to a minor degree, mostly close to altimetry sites and on a daily basis, even though radar altimetry data have a low temporal resolution. Sensitivity tests highlighted the importance of the magnitude of the precipitation errors and of their spatial correlation, while temporal correlation proved dispensable. The deterioration of model performance at some unmonitored reaches indicates the need for proper characterisation of model errors and spatial localisation techniques for hydrological applications. Finally, we evaluated streamflow forecasts for the Amazon basin based on initial conditions produced by the data assimilation scheme and using the ensemble streamflow prediction approach, where the model is forced by past meteorological forcings. The resulting forecasts agreed well with the observations and maintained meaningful skill at large rivers even for long lead times, e.g. >90 days at the Solim
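
    As a hedged illustration of the assimilation step used above, the sketch below implements a generic Ensemble Kalman Filter analysis with perturbed observations and a linear observation operator. The state, ensemble size and error statistics are synthetic placeholders, not MGB-IPH variables or the paper's configuration (which corrupts the precipitation forcing rather than the state directly).

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One EnKF analysis step with perturbed observations.

    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation error covariance
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                        # forecast error covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                       # analysis ensemble

rng = np.random.default_rng(0)
X = rng.normal(10.0, 2.0, size=(3, 50))              # synthetic 3-state, 50-member ensemble
H = np.array([[1.0, 0.0, 0.0]])                      # observe the first state variable only
Xa = enkf_analysis(X, y=np.array([12.0]), H=H, R=np.array([[0.5]]), rng=rng)
print(X[0].mean(), Xa[0].mean())                     # analysis mean is pulled towards y
```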

  14. The Vertical Structure of Relative Humidity and Ozone in the Tropical Upper Troposphere: Intercomparisons Among In Situ Observations, A-Train Measurements and Large-Scale Models

    Science.gov (United States)

    Selkirk, Henry B.; Manyin, Michael; Douglass, Anne R.; Oman, Luke; Pawson, Steven; Ott, Lesley; Benson, Craig; Stolarski, Richard

    2010-01-01

    In situ measurements in the tropics have shown that in regions of active convection, relative humidity with respect to ice in the upper troposphere is typically close to saturation on average, and supersaturations greater than 20% are not uncommon. Balloon soundings with the cryogenic frost point hygrometer (CFH) at Costa Rica during northern summer, for example, show this tendency to be strongest between 11 and 15.5 km (345-360 K potential temperature, or approximately 250-120 hPa). This is the altitude range of deep convective detrainment. Additionally, simultaneous ozonesonde measurements show that stratospheric air (O3 greater than 150 ppbv) can be found as low as approximately 14 km (350 K/150 hPa). In contrast, results from northern winter show a much drier upper troposphere and little penetration of stratospheric air below the tropopause at 17.5 km (approximately 383 K). We show that these results are consistent with in situ measurements from the Measurement of Ozone and water vapor by Airbus In-service airCraft (MOZAIC) program, which samples a wider, though still limited, range of tropical locations. To generalize to the tropics as a whole, we compare our in situ results to data from two A-Train satellite instruments, the Atmospheric Infrared Sounder (AIRS) and the Microwave Limb Sounder (MLS) on the Aqua and Aura satellites, respectively. Finally, we examine the vertical structure of water vapor, relative humidity and ozone in the NASA Goddard MERRA analysis, an assimilation dataset, and a new version of the GEOS CCM, a free-running chemistry-climate model. We demonstrate that conditional probability distributions of relative humidity and ozone are a sensitive diagnostic for assessing the representation of deep convection and upper troposphere/lower stratosphere mixing processes in large-scale analyses and climate models.

  15. Implementing an empirical scalar constitutive relation for ice with flow-induced polycrystalline anisotropy in large-scale ice sheet models

    Science.gov (United States)

    Graham, Felicity S.; Morlighem, Mathieu; Warner, Roland C.; Treverrow, Adam

    2018-03-01

    The microstructure of polycrystalline ice evolves under prolonged deformation, leading to anisotropic patterns of crystal orientations. The response of this material to applied stresses is not adequately described by the ice flow relation most commonly used in large-scale ice sheet models - the Glen flow relation. We present a preliminary assessment of the implementation in the Ice Sheet System Model (ISSM) of a computationally efficient, empirical, scalar, constitutive relation which addresses the influence of the dynamically steady-state flow-compatible induced anisotropic crystal orientation patterns that develop when ice is subjected to the same stress regime for a prolonged period - sometimes termed tertiary flow. We call this the ESTAR flow relation. The effect on ice flow dynamics is investigated by comparing idealised simulations using ESTAR and Glen flow relations, where we include in the latter an overall flow enhancement factor. For an idealised embayed ice shelf, the Glen flow relation overestimates velocities by up to 17 % when using an enhancement factor equivalent to the maximum value prescribed in the ESTAR relation. Importantly, no single Glen enhancement factor can accurately capture the spatial variations in flow across the ice shelf generated by the ESTAR flow relation. For flow line studies of idealised grounded flow over varying topography or variable basal friction - both scenarios dominated at depth by bed-parallel shear - the differences between simulated velocities using ESTAR and Glen flow relations depend on the value of the enhancement factor used to calibrate the Glen flow relation. These results demonstrate the importance of describing the deformation of anisotropic ice in a physically realistic manner, and have implications for simulations of ice sheet evolution used to reconstruct paleo-ice sheet extent and predict future ice sheet contributions to sea level.
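
    For context, the Glen flow relation referred to above is usually written, with a scalar enhancement factor \(E\), as

    \[
    \dot{\varepsilon}_{ij} \;=\; E\,A(T)\,\tau_e^{\,n-1}\,\tau_{ij}, \qquad n \approx 3,
    \]

    where \(\dot{\varepsilon}_{ij}\) is the strain-rate tensor, \(\tau_{ij}\) the deviatoric stress, \(\tau_e\) the effective (second-invariant) stress, and \(A(T)\) a temperature-dependent rate factor. This is the generic textbook form; broadly speaking, the ESTAR relation discussed above replaces the single constant \(E\) with a scalar function of the relative proportions of shear and compression in the local stress regime, and its exact formulation is given in the cited paper rather than here.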

  16. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro...... peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  17. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  18. Comprehensive Approaches to Multiphase Flows in Geophysics - Application to nonisothermal, nonhomogenous, unsteady, large-scale, turbulent dusty clouds I. Hydrodynamic and Thermodynamic RANS and LES Models

    Energy Technology Data Exchange (ETDEWEB)

    S. Dartevelle

    2005-09-05

    The objective of this manuscript is to fully derive a geophysical multiphase model able to "accommodate" different multiphase turbulence approaches; viz., the Reynolds Averaged Navier-Stokes (RANS), the Large Eddy Simulation (LES), or hybrid RANS-LES. This manuscript is the first part of a larger geophysical multiphase project--led by LANL--that aims to develop comprehensive modeling tools for large-scale, atmospheric, transient-buoyancy dusty jets and plumes (e.g., plinian clouds, nuclear "mushrooms", "supercell" forest fire plumes) and for boundary-dominated geophysical multiphase gravity currents (e.g., dusty surges, diluted pyroclastic flows, dusty gravity currents in street canyons). LES is a partially deterministic approach constructed on either a spatial- or a temporal-separation between the large and small scales of the flow, whereas RANS is an entirely probabilistic approach constructed on a statistical separation between an ensemble-averaged mean and higher-order statistical moments (the so-called "fluctuating parts"). Within this specific multiphase context, both turbulence approaches are built upon the same phasic binary-valued "function of presence". This function of presence formally describes the occurrence--or not--of any phase at a given position and time and, therefore, allows one to derive the same basic multiphase Navier-Stokes model for either the RANS or the LES frameworks. The only differences between these turbulence frameworks are the closures for the various "turbulence" terms involving the unknown variables from the fluctuating (RANS) or from the subgrid (LES) parts. Even though the hydrodynamic and thermodynamic models for RANS and LES have the same set of Partial Differential Equations, the physical interpretations of these PDEs cannot be the same, i.e., RANS models an averaged field, while LES simulates a

  19. Large-scale computer-aided design

    OpenAIRE

    Adeli, Hojjat

    1997-01-01

    The author and his associates have been working on creating novel design theories and computational models with two broad objectives: automation and optimization. This paper is a summary of the author's Keynote Lecture based on the research done by the author and his associates recently. Novel neurocomputing algorithms are presented for large-scale computer-aided design and optimization. This research demonstrates how a new level is achieved in design automation through the ingenious use and...

  20. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

    The compatibility of cosmologic principles and possible large scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle which is still compatible with reasonable inhomogeneities, is a full conformal symmetry in the 3-space defined by the cosmological velocity field, but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  1. The consistency problems of large scale structure

    International Nuclear Information System (INIS)

    Schramm, D.N.

    1986-01-01

    Studies of the early universe are reviewed, with emphasis on galaxy formation, dark matter and the generation of large scale structure. The paper was presented at the conference on "The early universe and its evolution", Erice, Italy, 1986. Dark matter, Big Bang nucleosynthesis, baryonic halos, flatness arguments, the cosmological constant, galaxy formation, neutrinos plus strings or explosions, and string models are all discussed. (U.K.)

  2. Coupled Large Scale Hydro-mechanical Modelling for cap-rock Failure Risk Assessment of CO2 Storage in Deep Saline Aquifers

    International Nuclear Information System (INIS)

    Rohmer, J.; Seyedi, D.M.

    2010-01-01

    This work presents a numerical strategy for large-scale hydro-mechanical simulations to assess the risk of damage in cap-rock formations during a CO2 injection process. The proposed methodology is based on the development of a sequential coupling between a multiphase fluid flow code (TOUGH2) and a hydro-mechanical calculation code (Code-Aster) that enables us to perform coupled hydro-mechanical simulation at a regional scale. The likelihood of different cap-rock damage mechanisms can then be evaluated based on the results of the coupled simulations. A scenario-based approach is proposed to take into account the effect of the uncertainty of model parameters on damage likelihood. The developed methodology is applied to the cap-rock failure analysis of the deep aquifer of the Dogger formation in the context of the Paris basin multilayered geological system as a demonstration example. The simulation is carried out at a regional scale (100 km) considering an industrial mass injection rate of CO2 of 10 Mt/y. The assessment of the stress state after 10 years of injection is conducted through the developed sequential coupling. Two failure mechanisms have been taken into account, namely tensile fracturing and shear slip reactivation of pre-existing fractures. To deal with the large uncertainties due to sparse data on the layer formations, a scenario-based strategy is undertaken. It consists of defining a first reference modelling scenario considering the mean values of the hydro-mechanical properties for each layer. A sensitivity analysis is then carried out and shows the importance of both the initial stress state and the reservoir hydraulic properties on the cap-rock failure tendency. On this basis, a second scenario denoted 'critical' is defined so that the most influential model parameters are taken in their worst configuration. None of these failure criteria is activated for the considered conditions. At a phenomenological level, this study points out three key
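
    The two failure mechanisms named above are typically screened with effective-stress criteria. The sketch below is a generic illustration of such a check (Coulomb shear slip and tensile fracturing), not the cited study's implementation; the stress and strength values are hypothetical.

```python
def caprock_failure_check(sigma_n, tau, p_fluid, cohesion=2.0e6, mu=0.6, tensile_strength=1.5e6):
    """Generic effective-stress screening of two cap-rock failure modes.

    sigma_n : total normal stress on the plane of interest [Pa]
    tau     : shear stress on that plane [Pa]
    p_fluid : pore fluid pressure, raised by CO2 injection [Pa]
    The strength parameters are hypothetical placeholders.
    """
    sigma_eff = sigma_n - p_fluid                          # effective normal stress
    shear_slip = tau > cohesion + mu * sigma_eff           # Coulomb slip criterion
    tensile = sigma_eff < -tensile_strength                # tensile fracturing criterion
    return shear_slip, tensile

# Hypothetical states at cap-rock depth before and after a pressure build-up
print(caprock_failure_check(sigma_n=30e6, tau=8e6, p_fluid=12e6))   # (False, False)
print(caprock_failure_check(sigma_n=30e6, tau=14e6, p_fluid=20e6))  # (True, False)
```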

  3. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT & T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  4. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  5. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  6. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  7. Large scale homing in honeybees.

    Directory of Open Access Journals (Sweden)

    Mario Pahl

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama.

  8. A Third-Order Item Response Theory Model for Modeling the Effects of Domains and Subdomains in Large-Scale Educational Assessment Surveys

    Science.gov (United States)

    Rijmen, Frank; Jeon, Minjeong; von Davier, Matthias; Rabe-Hesketh, Sophia

    2014-01-01

    Second-order item response theory models have been used for assessments consisting of several domains, such as content areas. We extend the second-order model to a third-order model for assessments that include subdomains nested in domains. Using a graphical model framework, it is shown how the model does not suffer from the curse of…
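
    A hedged sketch of the hierarchical structure such a third-order model implies (a generic formulation, not necessarily the authors' exact parameterization): item responses load on subdomain abilities, subdomain abilities load on domain abilities, and domain abilities load on a single overall ability,

    \[
    \Pr\!\left(X_{pi}=1 \mid \theta^{(3)}_{p,s(i)}\right)=\operatorname{logit}^{-1}\!\left(a_i\,\theta^{(3)}_{p,s(i)}-b_i\right),\qquad
    \theta^{(3)}_{p,s}=\lambda_s\,\theta^{(2)}_{p,d(s)}+\varepsilon_{p,s},\qquad
    \theta^{(2)}_{p,d}=\gamma_d\,\theta^{(1)}_{p}+\zeta_{p,d},
    \]

    where \(s(i)\) is the subdomain of item \(i\) and \(d(s)\) the domain containing subdomain \(s\).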

  9. Effects of upper-surface blowing and thrust vectoring on low-speed aerodynamic characteristics of a large-scale supersonic transport model

    Science.gov (United States)

    Coe, P. L., Jr.; Mclemore, H. C.; Shivers, J. P.

    1975-01-01

    Tests were conducted in the Langley full-scale tunnel to determine the low-speed aerodynamic characteristics of a large-scale arrow-wing supersonic transport configured with engines mounted above the wing for upper surface blowing, and with conventional lower surface engines with provisions for thrust vectoring. A limited number of tests were conducted for the upper surface engine configuration in the high-lift condition at a sideslip angle of β = 10° to evaluate lateral-directional characteristics, and with the right engine inoperative to evaluate the engine-out condition.

  10. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory, which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  11. Large-Scale Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made into extending this research to wireless communication networks in support of telemetry applications.
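
    The hybrid approach described above (simulated components exchanging messages with real subsystems) can be sketched as a shared component interface behind which simulated stand-ins and adapters to live subsystems are interchangeable. All names and structure below are hypothetical illustration, not the project's actual design.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Common interface so simulated and real components are interchangeable."""

    @abstractmethod
    def handle(self, message: dict) -> dict:
        ...

class SimulatedDataSource(Component):
    """Stand-in for a data source, returning canned records (hypothetical)."""

    def handle(self, message: dict) -> dict:
        return {"status": "ok", "records": [{"id": 1, "value": 42}], "query": message.get("query")}

class RealSubsystemAdapter(Component):
    """Wraps a live subsystem behind the same interface (here just a callable stub)."""

    def __init__(self, call_real_system):
        self.call_real_system = call_real_system

    def handle(self, message: dict) -> dict:
        return self.call_real_system(message)

def run_workflow(components, message):
    """Pass one message through a chain of components, simulated or real."""
    for c in components:
        message = c.handle(message)
    return message

# A toy 'real' subsystem and a mixed simulated/real workflow:
adapter = RealSubsystemAdapter(lambda m: {**m, "fused": True})
print(run_workflow([SimulatedDataSource(), adapter], {"query": "status"}))
```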

  12. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  13. Image-processing of time-averaged interface distributions representing CCFL characteristics in a large scale model of a PWR hot-leg pipe geometry

    International Nuclear Information System (INIS)

    Al Issa, Suleiman; Macián-Juan, Rafael

    2017-01-01

    Highlights:
    • CCFL characteristics are investigated in a large-scale PWR hot-leg pipe geometry.
    • Image processing of the air-water interface produced time-averaged interface distributions.
    • Time averages provide a comparative measure of CCFL characteristics across different studies.
    • CCFL correlations depend upon the investigated range of water delivery for Dh ≫ 50 mm.
    • 1D codes are incapable of investigating CCFL because they lack the interface distribution.

    Abstract: Countercurrent Flow Limitation (CCFL) was experimentally investigated in the 1/3.9-scale COLLIDER facility, with a pipe diameter of 190 mm, using air and water at atmospheric pressure. Previous investigations provided knowledge of the onset of CCFL mechanisms. In the current article, CCFL characteristics at the COLLIDER facility are measured and discussed along with time-averaged distributions of the air/water interface for a selected matrix of liquid/gas velocities. The article demonstrates the time-averaged interface as a useful method for identifying CCFL characteristics under quasi-stationary flow conditions: it eliminates the variations that appear in single images and shows essential comparative flow features such as the degree of restriction at the bend, the extent and intensity of the two-phase mixing zones, and the average water level within the horizontal part and the steam generator. This makes it possible to compare interface distributions obtained in different investigations. The distributions are also useful for CFD validation of CCFL, as the instantaneous, chaotic gas/liquid interface is impossible to reproduce in CFD simulations. The current study shows that the final CCFL characteristics curve (and the corresponding CCFL correlation) depends upon the covered measuring range of water delivery. It also shows that the hydraulic diameter should be sufficiently larger than 50 mm in order to obtain CCFL characteristics comparable to the 1:1-scale data (namely the UPTF data). Finally
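
    The time-averaged interface distributions described here amount to a pixel-wise average over a stack of binarized interface images. The snippet below is a minimal sketch of that idea; the frame data, threshold, and array sizes are assumed for illustration and are not taken from the study.

```python
import numpy as np

def time_averaged_interface(frames, threshold=0.5):
    """Pixel-wise time average of binarized gas/liquid interface images.

    frames: iterable of 2-D grayscale arrays (one per video frame), values in [0, 1].
    Returns a 2-D array in [0, 1] giving the fraction of time each pixel was
    classified as liquid, i.e. a quasi-stationary interface distribution.
    """
    stack = np.stack([np.asarray(f, dtype=float) > threshold for f in frames])
    return stack.mean(axis=0)

# Illustrative usage with synthetic frames (a real analysis would load camera images):
rng = np.random.default_rng(0)
frames = [rng.random((120, 480)) for _ in range(200)]
avg = time_averaged_interface(frames)
print(avg.shape, avg.min(), avg.max())
```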

  14. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...
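
    As a minimal illustration of how an iterative Krylov method can act as a regularizer (the iteration count plays the role of the regularization parameter), the sketch below implements plain CGLS on a synthetic ill-conditioned problem. It is a generic textbook example under stated assumptions, not code from the thesis or from MOORe Tools.

```python
import numpy as np

def cgls(A, b, n_iter):
    """Minimal CGLS (conjugate gradients on the normal equations).

    For ill-posed problems the iteration count itself acts as a regularization
    parameter: early iterations capture smooth solution components, later ones
    amplify noise (semi-convergence).
    """
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    norm_s_old = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = norm_s_old / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        norm_s_new = s @ s
        p = s + (norm_s_new / norm_s_old) * p
        norm_s_old = norm_s_new
    return x

# Illustrative use on a small synthetic ill-conditioned system:
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 80)) @ np.diag(np.logspace(0, -6, 80))
x_true = np.ones(80)
b = A @ x_true + 1e-4 * rng.standard_normal(100)
for k in (2, 10, 50):
    print(k, np.linalg.norm(cgls(A, b, k) - x_true))
```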

  15. Modeling of Carbon Tetrachloride Flow and Transport in the Subsurface of the 200 West Disposal Sites: Large-Scale Model Configuration and Prediction of Future Carbon Tetrachloride Distribution Beneath the 216-Z-9 Disposal Site

    Energy Technology Data Exchange (ETDEWEB)

    Oostrom, Mart; Thorne, Paul D.; Zhang, Z. F.; Last, George V.; Truex, Michael J.

    2008-12-17

    Three-dimensional simulations considered the migration of dense nonaqueous-phase liquid (DNAPL) consisting of CT and co-disposed organics in the subsurface as a function of the properties and distribution of subsurface sediments and of the properties and disposal history of the waste. Simulations of CT migration were conducted using the Water-Oil-Air mode of the Subsurface Transport Over Multiple Phases (STOMP) simulator. A large-scale model was configured to simulate CT and wastewater discharge from the major CT and wastewater disposal sites.

  16. Effects of upper-surface blowing and thrust vectoring on low speed aerodynamic characteristics of a large-scale supersonic transport model

    Science.gov (United States)

    Coe, P. L., Jr.; Mclemore, H. C.; Shivers, J. P.

    1976-01-01

    Tests were conducted in a full-scale tunnel to determine the low-speed aerodynamic characteristics of a large-scale arrow-wing supersonic transport configured with engines mounted above the wing for upper-surface blowing and conventional lower-surface engines having provisions for thrust vectoring. Tests were conducted over an angle-of-attack range of -10 deg to 34 deg and for Reynolds numbers (based on the mean aerodynamic chord) of 5.17 × 10^6 and 3.89 × 10^6. A limited number of tests were also conducted for the upper-surface engine configuration in the high-lift condition at an angle of sideslip of 10 deg in order to evaluate lateral-directional characteristics and with the right engine inoperative in order to evaluate the engine-out condition.

  17. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    Over the last two decades, modern cosmology has developed rapidly on the basis of observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryon acoustic oscillations (BAO). This observational evidence has led to broad consensus on the so-called LambdaCDM cosmological model and to tight constraints on the parameters that define it. At the same time, this progress relies on the cosmological principle: the assumption that the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest-scale angular anisotropy of the sky, quantified by a direction and an amplitude. We have measured a large dipolar modulation in the CMB, which originates mainly from the solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulation in the large-scale structure (LSS), as they require large sky coverage and a large number of well-identified objects. In this thesis, we explore the measurement of dipolar modulation in the number counts of LSS objects as a test of statistical isotropy. The thesis is based on two papers published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in the number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated the requirements for detecting the kinematic dipole in future surveys.
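
    As a sketch of how such a dipolar modulation of number counts is usually parameterized, and of the kinematic expectation from the observer's motion, the lines below use the classic Ellis & Baldwin form; the thesis' exact conventions may differ.

```latex
% Sketch of a dipole-modulated source count and the kinematic expectation in the
% classic Ellis & Baldwin form (conventions may differ from those used in the thesis).
% \bar{N} is the mean count per solid angle, A the dipole amplitude, \hat{d} its
% direction, x the slope of the integral counts N(>S) \propto S^{-x}, \alpha the
% mean spectral index, and v the observer's velocity.
\begin{align}
  \frac{dN}{d\Omega}(\hat{\mathbf{n}}) &= \bar{N}\,\bigl(1 + A\,\hat{\mathbf{d}}\cdot\hat{\mathbf{n}}\bigr),\\
  A_{\rm kin} &= \bigl[\,2 + x\,(1+\alpha)\,\bigr]\,\frac{v}{c}.
\end{align}
```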

  18. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
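
    The "vehicles as autonomous processes" idea can be sketched with a toy agent that owns its route and replans when it receives a traffic advisory. Everything below (class names, the greedy planner, the tiny network) is a hypothetical illustration, not the prototype's code.

```python
class Vehicle:
    """Toy autonomous vehicle agent: holds its own route and reacts to events."""

    def __init__(self, vid, origin, dest, graph):
        self.vid = vid
        self.dest = dest
        self.graph = graph           # adjacency dict: node -> {neighbor: travel_time}
        self.position = origin
        self.route = self.plan_route(origin)

    def plan_route(self, start):
        """Greedy route selection by current link times (a stand-in for optimal planning)."""
        route, node, visited = [], start, {start}
        while node != self.dest:
            options = {n: t for n, t in self.graph[node].items() if n not in visited}
            if not options:
                break
            node = min(options, key=options.get)
            visited.add(node)
            route.append(node)
        return route

    def on_traffic_event(self, link, new_time):
        """React to an advisory (e.g. congestion) by updating the link time and replanning."""
        a, b = link
        if b in self.graph.get(a, {}):
            self.graph[a][b] = new_time
            self.route = self.plan_route(self.position)

# Tiny illustrative network and one advisory broadcast by a notional TMC:
graph = {"A": {"B": 5, "C": 8}, "B": {"D": 4}, "C": {"D": 2}, "D": {}}
v = Vehicle(vid=1, origin="A", dest="D", graph=graph)
print("initial route:", v.route)
v.on_traffic_event(("A", "B"), new_time=20)   # congestion on link A->B
print("replanned route:", v.route)
```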

  19. Large scale collective modeling the final 'freeze out' stages of energetic heavy ion reactions and calculation of single particle measurables from these models

    International Nuclear Information System (INIS)

    Nyiri, Agnes

    2005-01-01

    The goal of this PhD project was to develop the already existing, but far from complete, Multi Module Model, focusing especially on the last module, which describes the final stages of a heavy ion collision and was still missing. The major original achievements summarized in this thesis concern the freeze out problem and the calculation of an important measurable, the anisotropic flow. Summary of results: Freeze out: The importance of freeze out models is that they allow the evaluation of observables, which can then be compared to experimental results. Therefore, it is crucial to find a realistic freeze out description, which has proved to be a non-trivial task. Recently, several kinetic freeze out models have been developed. Based on the earlier results, we have introduced new ideas and improved models, which may contribute to a more realistic description of the freeze out process. We have investigated the applicability of the Boltzmann Transport Equation (BTE) to describe dynamical freeze out. We have introduced the so-called Modified Boltzmann Transport Equation (MBTE), which has a form very similar to that of the BTE, but takes into account those characteristics of the FO process which the BTE cannot handle, e.g. the rapid change of the phase-space distribution function in the direction normal to the finite FO layer. We have shown that the main features of earlier ad hoc kinetic FO models can be obtained from the BTE and the MBTE. We have discussed the qualitative differences between the two approaches and presented some quantitative comparison as well. Since the introduced modification of the BTE makes it very difficult to solve the FO problem from first principles, it is important to work out simplified phenomenological models which can explain the basic features of the FO process. We have built and discussed such a model. Flow analysis: The other main subject of this thesis has been the collective flow in heavy ion collisions. Collective flow from ultra
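
    For reference, the (unmodified) relativistic Boltzmann transport equation that this discussion starts from has the standard form below; the Modified BTE adds terms handling the finite freeze out layer and is not reproduced here.

```latex
% Standard form of the relativistic Boltzmann transport equation (for context only).
% f(x,p) is the single-particle phase-space distribution and C[f] the collision integral.
\begin{equation}
  p^{\mu}\,\partial_{\mu} f(x,p) \;=\; C[f](x,p)
\end{equation}
```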

  20. Large scale collective modeling the final 'freeze out' stages of energetic heavy ion reactions and calculation of single particle measurables from these models

    Energy Technology Data Exchange (ETDEWEB)

    Nyiri, Agnes

    2005-07-01

    The goal of this PhD project was to develop the already existing, but far from complete, Multi Module Model, focusing especially on the last module, which describes the final stages of a heavy ion collision and was still missing. The major original achievements summarized in this thesis concern the freeze out problem and the calculation of an important measurable, the anisotropic flow. Summary of results: Freeze out: The importance of freeze out models is that they allow the evaluation of observables, which can then be compared to experimental results. Therefore, it is crucial to find a realistic freeze out description, which has proved to be a non-trivial task. Recently, several kinetic freeze out models have been developed. Based on the earlier results, we have introduced new ideas and improved models, which may contribute to a more realistic description of the freeze out process. We have investigated the applic