WorldWideScience

Sample records for model large-scale printhead

  1. Bubbles in inkjet printheads : analytical and numerical models

    NARCIS (Netherlands)

    Jeurissen, Roger Josef Maria

    2009-01-01

    The phenomenon of nozzle failure of an inkjet printhead due to entrainment of air bubbles was studied using analytical and numerical models. The studied inkjet printheads consist of many channels in which an acoustic field is generated to eject a droplet. When an air bubble is entrained, it disrupts

  2. Models of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S. (Physics Dept., Univ. of Durham (UK))

    1991-01-01

    The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales.

  3. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  5. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

    Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of documents) ... topics at par with a much larger case specific vocabulary.
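
    The abstract above is truncated, but the underlying workflow (fitting a topic model to a document-term matrix built over a restricted vocabulary) can be illustrated with a minimal, hypothetical sketch using scikit-learn; the toy corpus, vocabulary cap and topic count below are illustrative assumptions, not the authors' setup.

      # Minimal topic-model sketch (assumptions: scikit-learn installed; toy corpus).
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      docs = ["solar heating plant design", "district heating network model",
              "inkjet printhead droplet model", "air pollution transport model"]

      # Cap the vocabulary, since in large-scale applications both the number of
      # documents and the vocabulary size grow.
      X = CountVectorizer(max_features=1000).fit_transform(docs)

      lda = LatentDirichletAllocation(n_components=2, random_state=0)
      doc_topics = lda.fit_transform(X)   # per-document topic mixtures
      print(doc_topics.shape)             # (n_docs, n_topics)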

  6. Large-scale multimedia modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  7. Drop-on-Demand Inkjet Printhead Performance Enhancement by Dynamic Lumped Element Modeling for Printable Electronics Fabrication

    Directory of Open Access Journals (Sweden)

    Maowei He

    2014-01-01

    The major challenge in printable electronics fabrication is the print resolution and accuracy. In this paper, the dynamic lumped element model (DLEM) is proposed to directly simulate an inkjet-printed nanosilver droplet formation process and is used for predictively controlling jetting characteristics. The static lumped element model (LEM) previously developed by the authors is extended to a dynamic model with time-varying equivalent circuits to characterize the nonlinear behaviors of the piezoelectric printhead. The model is then used to investigate how the performance of the piezoelectric ceramic actuator influences the jetting characteristics of nanosilver ink. Finally, the proposed DLEM is applied to predict the printing quality using nanosilver ink. Experimental results show that, compared to other analytic models, the proposed DLEM has a simpler structure with sufficient simulation and prediction accuracy.
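
    Lumped element models of a piezoelectric printhead represent the ink channel as an equivalent circuit driven by the actuation waveform. The sketch below is a generic, fixed-parameter (i.e. static) lumped-element analogue with assumed component values and an assumed trapezoidal drive waveform; it is not the authors' DLEM, whose equivalent-circuit elements are time-varying.

      # Generic static lumped-element analogue of a driven ink channel (assumed values).
      import numpy as np
      from scipy.integrate import solve_ivp

      L, R, C = 1.0e-4, 20.0, 1.0e-8      # assumed acoustic inertance, resistance, compliance

      def drive(t):
          # Idealised trapezoidal drive waveform (assumption): 5 us rise, 20 us dwell, 5 us fall
          return np.interp(t, [0, 5e-6, 25e-6, 30e-6], [0.0, 20.0, 20.0, 0.0])

      def rhs(t, y):
          q, i = y                        # displacement-like and flow-like states
          return [i, (drive(t) - R * i - q / C) / L]

      sol = solve_ivp(rhs, (0.0, 60e-6), [0.0, 0.0], max_step=1e-7)
      print("peak flow (arbitrary units):", sol.y[1].max())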

  8. Advances in large-scale crop modeling

    Science.gov (United States)

    Scholze, Marko; Bondeau, Alberte; Ewert, Frank; Kucharik, Chris; Priess, Jörg; Smith, Pascalle

    Intensified human activity and a growing population have changed the climate and the land biosphere. One of the most widely recognized human perturbations is the emission of carbon dioxide (CO2) by fossil fuel burning and land-use change. As the terrestrial biosphere is an active player in the global carbon cycle, changes in land use feed back to the climate of the Earth through regulation of the content of atmospheric CO2, the most important greenhouse gas, and changing albedo (e.g., energy partitioning). Recently, the climate modeling community has started to develop more complex Earth system models that include marine and terrestrial biogeochemical processes in addition to the representation of atmospheric and oceanic circulation. However, most terrestrial biosphere models simulate only natural, or so-called potential, vegetation and do not account for managed ecosystems such as croplands and pastures, which make up nearly one-third of the Earth's land surface.

  9. Large Scale, High Resolution, Mantle Dynamics Modeling

    Science.gov (United States)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetical and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing the small-scale processes associated with localization phenomena requires a high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500,000 degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which gives a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
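
    The solver issue described above (ILU-preconditioned Krylov iterations whose effectiveness depends on fill-in and problem size) can be reproduced in miniature with SciPy; the sketch below uses a small 1-D model problem and serial SciPy routines, so it only illustrates the ILU + GMRES combination, not the parallel Schwarz/MUMPS/ML-AMG setup of the paper.

      # ILU-preconditioned GMRES on a sparse model problem (illustrative only).
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 10_000                                  # assumed small 1-D Poisson-type problem
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      ilu = spla.spilu(A, fill_factor=10)         # incomplete LU; fill-in trades cost vs. quality
      M = spla.LinearOperator((n, n), ilu.solve)  # ILU used as a preconditioner

      x, info = spla.gmres(A, b, M=M, restart=50)
      print("converged" if info == 0 else f"stopped, info={info}")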

  10. Analysis of DoD inkjet printhead performance for printable electronics fabrication using dynamic lumped element modeling and swarm intelligence based optimal prediction

    Institute of Scientific and Technical Information of China (English)

    何茂伟; 孙丽玲; 胡琨元; 朱云龙; 陈瀚宁

    2015-01-01

    The major challenge in printable electronics fabrication is to effectively and accurately control a drop-on-demand (DoD) inkjet printhead for high printing quality. In this work, an optimal prediction model, constructed with the lumped element modeling (LEM) and the artificial bee colony (ABC) algorithm, was proposed to efficiently predict the combination of waveform parameters for obtaining the desired droplet properties. For acquiring higher simulation accuracy, a modified dynamic lumped element model (DLEM) was proposed with time-varying equivalent circuits, which can characterize the nonlinear behaviors of the piezoelectric printhead. The proposed method was then applied to investigate the influences of various waveform parameters on the droplet volume and velocity of nano-silver ink, and to predict the printing quality using nano-silver ink. Experimental results show that, compared with a two-dimensional manual search, the proposed optimal prediction model performs efficiently and accurately in searching for the appropriate combination of waveform parameters for printable electronics fabrication.

  11. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    Small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects are present, as the small-scale model underpredicts the overtopping discharge.

  12. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  13. One-dimensional adhesion model for large scale structures

    Directory of Open Access Journals (Sweden)

    Kayyunnapara Thomas Joseph

    2010-05-01

    We discuss initial value problems and initial boundary value problems for some systems of partial differential equations appearing in the modelling of large scale structure formation in the universe. We restrict the initial data to be bounded measurable functions of locally bounded variation and use the Volpert product to justify the products which appear in the equations. For more general initial data in the class of generalized functions of Colombeau, we construct the solution in the sense of association.

  14. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA shows appealing performance in terms of spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  15. Statistical Modeling of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
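
    The three modeling techniques named above can be sketched in a few lines; the sketch below applies them to an assumed toy partition of data and uses SciPy/scikit-learn stand-ins (e.g. k-means on L2-normalised rows as a proxy for cosine-similarity clustering), so it illustrates the ideas rather than the AQSim implementation.

      # Toy stand-ins for the three AQSim model types (illustrative assumptions throughout).
      import numpy as np
      from scipy.stats import anderson
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import normalize

      rng = np.random.default_rng(0)
      partition = rng.normal(loc=5.0, scale=2.0, size=1000)   # one spatial partition of the data

      # 1) Univariate mean modeler: summarise the partition by its unbiased mean.
      mean_model = partition.mean()

      # 2) Univariate goodness-of-fit modeler: Anderson-Darling test against a normal.
      ad = anderson(partition, dist="norm")

      # 3) Multivariate clusterer: cosine similarity ~ k-means on L2-normalised rows.
      records = rng.normal(size=(500, 8))                      # multivariate records
      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(normalize(records))

      print(mean_model, ad.statistic, np.bincount(labels))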

  16. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  17. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Radiative heat transfer under oxy-fuel conditions is one of the fundamental modelling issues. This paper demonstrates the non-gray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied. The simulation results show that the gray and non-gray calculations of the same oxy-fuel WSGGM make distinctly different predictions in the wall radiative heat transfer, incident radiative flux, radiative source, gas temperature and species profiles. Relative to the non-gray implementation, the gray...

  18. Order reduction of large-scale linear oscillatory system models

    Energy Technology Data Exchange (ETDEWEB)

    Trudnowski, D.J. (Pacific Northwest Lab., Richland, WA (United States))

    1994-02-01

    Eigen analysis and signal analysis techniques of deriving representations of power system oscillatory dynamics result in very high-order linear models. In order to apply many modern control design methods, the models must be reduced to a more manageable order while preserving essential characteristics. Presented in this paper is a model reduction method well suited for large-scale power systems. The method searches for the optimal subset of the high-order model that best represents the system. An Akaike information criterion is used to define the optimal reduced model. The method is first presented, and then examples of applying it to Prony analysis and eigenanalysis models of power systems are given.
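
    The abstract states that an Akaike information criterion defines the optimal reduced model. A common least-squares form of the criterion (an assumption about the exact variant used in the paper) is

      \mathrm{AIC}(k) = n \ln\!\left(\frac{\mathrm{RSS}_k}{n}\right) + 2k,

    where n is the number of output samples, k is the number of retained modes in a candidate reduced model, and RSS_k is its residual sum of squares; the optimal subset of the high-order model is the one that minimises AIC(k).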

  19. Modelling large-scale halo bias using the bispectrum

    CERN Document Server

    Pollack, Jennifer E; Porciani, Cristiano

    2011-01-01

    We study the relation between the halo and matter density fields -- commonly termed bias -- in the LCDM framework. In particular, we examine the local model of biasing at quadratic order in matter density. This model is characterized by parameters b_1 and b_2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales and find that the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo power spectra and construct estimates for an effective large-scale bias. We measure the configuration dependence of the halo bispectra B_hhh and reduced bispectra Q_hhh for very large-scale k-space triangles. From this we constrain b_1 and b_2. Using the lowest-order perturbation theory, we find that for B_hhh the...
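
    For reference, the local quadratic biasing model referred to above relates the smoothed halo and matter overdensities through b_1 and b_2; in the usual convention it reads

      \delta_h(\mathbf{x}) = b_1\,\delta(\mathbf{x}) + \tfrac{1}{2}\,b_2\,\delta^2(\mathbf{x}) + \ldots

    (the factor 1/2 on the quadratic term is convention-dependent), and these are the parameters the power-spectrum and bispectrum estimators described in the abstract constrain.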

  20. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  1. Modeling The Large Scale Bias of Neutral Hydrogen

    CERN Document Server

    Marin, Felipe; Seo, Hee-Jong; Vallinotto, Alberto

    2009-01-01

    We present analytical estimates of the large scale bias of neutral Hydrogen (HI) based on the Halo Occupation Distribution formalism. We use a simple, non-parametric model which monotonically relates the total mass of a halo with its HI mass at zero redshift; for earlier times we assume limiting models for the HI density parameter evolution, consistent with the data presently available, as well as two main scenarios for the evolution of our HI mass - Halo mass relation. We find that both the linear and the first non-linear bias terms exhibit a remarkable evolution with redshift, regardless of the specific limiting model assumed for the HI evolution. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not sensibly affect the measurement of the HI Power Spectrum.

  2. Large-scale Modeling of Inundation in the Amazon Basin

    Science.gov (United States)

    Luo, X.; Li, H. Y.; Getirana, A.; Leung, L. R.; Tesfa, T. K.

    2015-12-01

    Flood events have impacts on the exchange of energy, water and trace gases between land and atmosphere, hence potentially affecting the climate. The Amazon River basin is the world's largest river basin. Seasonal floods occur in the Amazon Basin each year. Because the basin is characterized by flat gradients, backwater effects are evident in the river dynamics. This factor, together with large uncertainties in river hydraulic geometry, surface topography and other datasets, contributes to difficulties in simulating flooding processes over this basin. We have developed a large-scale inundation scheme in the framework of the Model for Scale Adaptive River Transport (MOSART) river routing model. Both the kinematic wave and the diffusion wave routing methods are implemented in the model. A new process-based algorithm is designed to represent river channel - floodplain interactions. Uncertainties in the input datasets are partly addressed through model calibration. We will present the comparison of simulated results against satellite and in situ observations and analysis to understand factors that influence inundation processes in the Amazon Basin.

  3. Modelling large-scale evacuation of music festivals

    Directory of Open Access Journals (Sweden)

    E. Ronchi

    2016-05-01

    This paper explores the use of multi-agent continuous evacuation modelling for representing large-scale evacuation scenarios at music festivals. A 65,000-person capacity music festival area was simulated using the model Pathfinder. Three evacuation scenarios were developed in order to explore the capabilities of evacuation modelling during such incidents, namely (1) a preventive evacuation of a section of the festival area containing approximately 15,000 people due to a fire breaking out on a ship, (2) an escalating scenario involving the total evacuation of the entire festival area (65,000 people) due to a bomb threat, and (3) a cascading scenario involving the total evacuation of the entire festival area (65,000 people) due to the threat of an explosion caused by a ship engine overheating. This study suggests that the analysis of the people-evacuation time curves produced by evacuation models, coupled with a visual analysis of the simulated evacuation scenarios, allows for the identification of the main factors affecting the evacuation process (e.g., delay times, overcrowding at exits in relation to exit widths, etc.) and potential measures that could improve safety.

  4. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economical and environmental performances are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed designing and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expectable variety of the thermal performance of such plants, a method is proposed where data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with a simulation tool, design studies are carried out, ranging from parameter analysis, over energy planning for a new settlement, to a proposal for the combination of plane solar collectors with high performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need for developing computer models for the more advanced solar collector designs and especially for the control operation of CSHPs. In the final chapter the CSHP technology is put into perspective with respect to other possible technologies to find the relevance of the application

  5. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale size data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and in answering a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.

  6. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present continental to global scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) compared to an observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  7. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach tera-bytes in size. The ability to examine data-sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements from the discovery process. Our work contrasts existing research which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.
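
    A minimal sketch of the two-step idea (compress via a thresholded wavelet decomposition, then answer range queries from the compressed representation) is given below using PyWavelets; the signal, wavelet, decomposition level and threshold are all illustrative assumptions, and the sketch is not the AQSIM code.

      # Thresholded wavelet compression plus an approximate range query (illustrative).
      import numpy as np
      import pywt

      field = np.sin(np.linspace(0, 20, 4096)) + 0.1 * np.random.default_rng(1).normal(size=4096)

      # Compress: keep only the largest wavelet coefficients.
      coeffs = pywt.wavedec(field, "db4", level=6)
      arr, slices = pywt.coeffs_to_array(coeffs)
      threshold = np.quantile(np.abs(arr), 0.95)      # keep roughly the top 5% of coefficients
      arr[np.abs(arr) < threshold] = 0.0

      # Answer a range query (mean over an index range) from the compressed model.
      approx = pywt.waverec(pywt.array_to_coeffs(arr, slices, output_format="wavedec"), "db4")
      print("approx mean over [1000:2000]:", approx[1000:2000].mean())
      print("exact  mean over [1000:2000]:", field[1000:2000].mean())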

  9. Do land parameters matter in large-scale hydrological modelling?

    Science.gov (United States)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2013-04-01

    Many of the most pressing issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, the current-generation hydrological and land surface models that are used for their estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions - or - if a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so-called "Constant Land Parameter Hypothesis (CLPH)" assumes that variables like runoff can be predicted without taking location-specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land

  10. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological

  11. Advances and visions in large-scale hydrological modelling: findings from the 11th workshop on large-scale hydrological modelling

    NARCIS (Netherlands)

    Döll, P.; Berkhoff, K.; Bormann, H.; Fohrer, N.; Gerten, D.; Hagemann, S.; Krol, Martinus S.

    2008-01-01

    Large-scale hydrological modelling has become increasingly wide-spread during the last decade. An annual workshop series on large-scale hydrological modelling has provided, since 1997, a forum to the German-speaking community for discussing recent developments and achievements in this research area.

  12. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    algorithm propagates the flows for each simulated event. The model incorporates a digital terrain model (DTM) at 10 m horizontal resolution, which is used to extract flood plain cross-sections such that a one-dimensional hydraulic model can be used to estimate the extent and elevation of flooding. In doing so the effect of flood defenses in mitigating floods is accounted for. Finally, a suite of vulnerability relationships has been developed to estimate flood losses for a portfolio of properties that are exposed to flood hazard. Historical experience indicates that for recent floods in Great Britain more than 50% of insurance claims occur outside the flood plain, and these are primarily a result of excess surface flow, hillside flooding, and flooding due to inadequate drainage. A sub-component of the model addresses this issue by considering several parameters that best explain the variability of claims off the flood plain. The challenges of modeling such a complex phenomenon at a large scale largely dictate the choice of modeling approaches that need to be adopted for each of these model components. While detailed numerically-based physical models exist and have been used for conducting flood hazard studies, they are generally restricted to small geographic regions. In a probabilistic risk estimation framework like our current model, a blend of deterministic and statistical techniques has to be employed such that each model component is independent, physically sound and is able to maintain the statistical properties of observed historical data. This is particularly important because of the highly non-linear behavior of the flooding process. With respect to vulnerability modeling, both on and off the flood plain, the challenges include the appropriate scaling of a damage relationship when applied to a portfolio of properties. This arises from the fact that the estimated hazard parameter used for damage assessment, namely maximum flood depth, has considerable uncertainty. The

  13. Advances and visions in large-scale hydrological modelling: findings from the 11th Workshop on Large-Scale Hydrological Modelling

    Directory of Open Access Journals (Sweden)

    P. Döll

    2008-10-01

    Large-scale hydrological modelling has become increasingly widespread during the last decade. An annual workshop series on large-scale hydrological modelling has provided, since 1997, a forum to the German-speaking community for discussing recent developments and achievements in this research area. In this paper we present the findings from the 2007 workshop, which focused on advances and visions in large-scale hydrological modelling. We identify the state of the art, difficulties and research perspectives with respect to the themes "sensitivity of model results", "integrated modelling" and "coupling of processes in hydrosphere, atmosphere and biosphere". Some achievements in large-scale hydrological modelling during the last ten years are presented together with a selection of remaining challenges for the future.

  14. A BAROTROPIC QUASI-GEOSTROPHIC MODEL WITH LARGE-SCALE TOPOGRAPHY, FRICTION AND HEATING

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Based on the barotropic equations including large-scale topography, friction and heating, a barotropic quasi-geostrophic model with large-scale topography, friction and heating is obtained by means of scale analysis and the small-parameter method. It is shown that this equation is a basic one for studying the influence of the Tibetan Plateau on the large-scale flow in the atmosphere. If the friction and heating effects of large-scale topography are neglected, this model degenerates to the general barotropic quasi-geostrophic one.
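
    A schematic form of such an equation (standard textbook notation, assumed here; the paper derives its own version by scale analysis and the small-parameter method) is

      \frac{\partial}{\partial t}\left(\nabla^{2}\psi - F\psi\right)
      + J\!\left(\psi,\ \nabla^{2}\psi + \beta y + f_{0}\,\frac{h_{B}}{H}\right)
      = -\mu \nabla^{2}\psi + Q,

    where ψ is the geostrophic streamfunction, J the Jacobian, h_B the large-scale topography, H the mean fluid depth, μ a friction (Ekman damping) coefficient and Q a heating term; dropping μ and Q recovers the general barotropic quasi-geostrophic equation mentioned at the end of the abstract.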

  15. Reduction of large-scale numerical ground water flow models

    NARCIS (Netherlands)

    Vermeulen, P.T.M.; Heemink, A.W.; Testroet, C.B.M.

    2002-01-01

    Numerical models are often used for simulating ground water flow. Written in state space form, the dimension of these models is of the order of the number of model cells and can be very high (> million). As a result, these models are computationally very demanding, especially if many different scena

  16. Conclusions of the NATO ARW on Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  17. Long-Run Properties of Large-Scale Macroeconometric Models

    OpenAIRE

    Kenneth F. Wallis; John D. Whitley

    1987-01-01

    We consider alternative approaches to the evaluation of the long-run properties of dynamic nonlinear macroeconometric models, namely dynamic simulation over an extended database, or the construction and direct solution of the steady-state version of the model. An application to a small model of the UK economy is presented. The model is found to be unstable, but a stable form can be produced by simple alterations to the structure.

  18. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  20. Large scale modelling of catastrophic floods in Italy

    Science.gov (United States)

    Azemar, Frédéric; Nicótina, Ludovico; Sassi, Maximiliano; Savina, Maurizio; Hilberts, Arno

    2017-04-01

    The RMS European Flood HD model® is a suite of country scale flood catastrophe models covering 13 countries throughout continental Europe and the UK. The models are developed with the goal of supporting risk assessment analyses for the insurance industry. Within this framework RMS is developing a hydrologic and inundation model for Italy. The model aims at reproducing the hydrologic and hydraulic properties across the domain through a modeling chain. A semi-distributed hydrologic model that allows capturing the spatial variability of the runoff formation processes is coupled with a one-dimensional river routing algorithm and a two-dimensional (depth averaged) inundation model. This model setup allows capturing the flood risk from both pluvial (overland flow) and fluvial flooding. Here we describe the calibration and validation methodologies for this modelling suite applied to the Italian river basins. The variability that characterizes the domain (in terms of meteorology, topography and hydrologic regimes) requires a modeling approach able to represent a broad range of meteo-hydrologic regimes. The calibration of the rainfall-runoff and river routing models is performed by means of a genetic algorithm that identifies the set of best performing parameters within the search space over the last 50 years. We first establish the quality of the calibration parameters on the full hydrologic balance and on individual discharge peaks by comparing extreme statistics to observations over the calibration period on several stations. The model is then used to analyze the major floods in the country; we discuss the different meteorological setup leading to the historical events and the physical mechanisms that induced these floods. We can thus assess the performance of RMS' hydrological model in view of the physical mechanisms leading to flood and highlight the main controls on flood risk modelling throughout the country. The model's ability to accurately simulate antecedent

  1. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper;

    2003-01-01

    Experimental modelling is an important tool for the study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high quality results.

  2. Modelling the spreading of large-scale wildland fires

    CERN Document Server

    Drissi, Mohamed

    2014-01-01

    The objective of the present study is twofold. First, the last developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning cells that strongly depends on local conditions of wind, topography, and vegetation. Radiation and convection from the flaming zone, and radiative heat loss to the ambient are considered in the preheating process of unburned cells. Second, the model is applied to an Australian grassland fire experiment as well as to a real fire that took place in Corsica in 2009. Predictions compare favorably to experiments in terms of rate of spread, area and shape of the burn. Finally, the sensitivity of the model outcomes (here the rate of spread) to six input parameters is studied using a two-level full factorial design.

  3. Running Large-Scale Air Pollution Models on Parallel Computers

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    2000-01-01

    Proceedings of the 23rd NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held 28 September - 2 October 1998, in Varna, Bulgaria.

  4. A Large Scale, High Resolution Agent-Based Insurgency Model

    Science.gov (United States)

    2013-09-30

    HSCB models can be employed for simulating mission scenarios, determining optimal strategies for disrupting terrorist networks, or training and...

  5. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...
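
    One standard way to support inference from a possibly misspecified Poisson working model is to combine a log person-time offset with robust ("sandwich") standard errors; the sketch below does this on simulated data with statsmodels and is only an illustration, not the registry analyses, aggregated-data estimators or semi-parametric bootstrap discussed in the paper.

      # Poisson working model with robust standard errors (simulated data, illustrative).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 5_000
      exposure = rng.binomial(1, 0.4, size=n)                  # binary exposure indicator
      offset = np.log(rng.uniform(0.5, 5.0, size=n))           # log person-time at risk
      events = rng.poisson(np.exp(-2.0 + 0.3 * exposure + offset))

      X = sm.add_constant(exposure)
      fit = sm.GLM(events, X, family=sm.families.Poisson(), offset=offset).fit(cov_type="HC0")
      print(fit.summary())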

  6. Modelling expected train passenger delays on large scale railway networks

    DEFF Research Database (Denmark)

    Landex, Alex; Nielsen, Otto Anker

    2006-01-01

    Forecasts of regularity for railway systems have traditionally – if at all – been computed for trains, not for passengers. Relatively recently it has become possible to model and evaluate the actual passenger delays by a passenger regularity model for the operation already carried out. First...

  7. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    on individual behaviour in the model specification, (ii) proposing a method to use disaggregate Revealed Preference (RP) data to estimate utility functions and provide evidence on the value of congestion and the value of reliability, (iii) providing a method to account for individual mis... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real... non-universal choice sets and (ii) flow distribution according to random utility maximisation theory. One model allows distinction between used and unused routes based on the distribution of the random error terms, while the other model allows this distinction by posing restrictions on the costs...

  8. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating a network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit accurately enough the real networks. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data, that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  9. Large-Scale Tests of the DGP Model

    CERN Document Server

    Song, Yong-Seon; Sawicki, Ignacy; Hu, Wayne

    2006-01-01

    The self-accelerating braneworld model (DGP) can be tested from measurements of the expansion history of the universe and the formation of structure. Current constraints on the expansion history from supernova luminosity distances, the CMB, and the Hubble constant exclude the simplest flat DGP model at about 3σ. The best-fit open DGP model is, however, only a marginally poorer fit to the data than flat LCDM. Its substantially different expansion history raises structure formation challenges for the model. A dark-energy model with the same expansion history would predict a highly significant discrepancy with the baryon oscillation measurement, due to the high Hubble constant required, and a large enhancement of CMB anisotropies at the lowest multipoles due to the ISW effect. For the DGP model to satisfy these constraints new gravitational phenomena would have to appear at the non-linear and cross-over scales respectively. A prediction of the DGP expansion history in a region where the phenomenology is well unde...
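
    For context, the expansion history of the flat, self-accelerating DGP branch tested above follows the modified Friedmann equation (quoted here in its standard form as background, not taken from the paper)

      H^{2} - \frac{H}{r_{c}} = \frac{8\pi G}{3}\,\rho,

    where r_c is the crossover scale; the extra H/r_c term is what produces late-time acceleration without dark energy and what the distance and growth-of-structure data constrain.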

  10. Simulation of large-scale rule-based models

    Energy Technology Data Exchange (ETDEWEB)

    Hlavacek, William S [Los Alamos National Laboratory]; Monine, Michael I [Los Alamos National Laboratory]; Colvin, Joshua [NON LANL]; Faeder, James [NON LANL]

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogenous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
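
    The essence of a null-event algorithm is that, at each fixed-length time step, a candidate reaction is drawn at random and accepted only with a probability proportional to its rate, so most draws change nothing (null events). The sketch below is a generic, hypothetical illustration of that acceptance step; it is not DYNSTOC and omits the BioNetGen rule-matching machinery that DYNSTOC automates.

      # Generic null-event step (hypothetical rule representation, not DYNSTOC's).
      import random

      def null_event_step(molecules, rules, p_max):
          """One fixed-length time step of a null-event simulation.

          `rules` is an assumed list of tuples (n_reactants, matches, fire, prob):
          matches() tests whether the sampled molecules fit the rule's patterns,
          fire() applies the rule's transformation in place.
          """
          n_react, matches, fire, prob = random.choice(rules)
          picked = random.sample(molecules, n_react)
          if matches(*picked) and random.random() < prob / p_max:
              fire(*picked)        # the reaction fires and the state is updated
          # otherwise: null event -- the state is unchanged, but time still advances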

  11. Sediment Yield Modeling in a Large Scale Drainage Basin

    Science.gov (United States)

    Ali, K.; de Boer, D. H.

    2009-05-01

    This paper presents the findings of spatially distributed sediment yield modeling in the upper Indus River basin. Spatial erosion rates calculated by using the Thornes model at 1-kilometre spatial resolution and monthly time scale indicate that 87 % of the annual gross erosion takes place in the three summer months. The model predicts a total annual erosion rate of 868 million tons, which is approximately 4.5 times the long-term observed annual sediment yield of the basin. Sediment delivery ratios (SDR) are hypothesized to be a function of the travel time of surface runoff from catchment cells to the nearest downstream channel. Model results indicate that higher delivery ratios (SDR > 0.6) are found in 18 % of the basin area, mostly located in the high-relief sub-basins and in the areas around the Nanga Parbat Massif. The sediment delivery ratio is lower than 0.2 in 70 % of the basin area, predominantly in the low-relief sub-basins like the Shyok on the Tibetan Plateau. The predicted annual basin sediment yield is 244 million tons, which compares reasonably well with the measured value of 192.5 million tons. The average annual specific sediment yield in the basin is predicted as 1110 tons per square kilometre. Model evaluation based on accuracy statistics shows very good to satisfactory performance ratings for predicted monthly basin sediment yields and for mean annual sediment yields of 17 sub-basins. This modeling framework mainly requires global datasets, and hence can be used to predict erosion and sediment yield in other ungauged drainage basins.
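
    The hypothesis that the sediment delivery ratio is a function of the travel time of surface runoff can be illustrated with a short sketch. The exponential form SDR = exp(-beta * travel_time) and the coefficient beta are assumptions made here for illustration, not the calibrated relation of the study.

        import math

        def sediment_yield(gross_erosion_t, travel_time_h, beta=0.5):
            """Cell sediment yield as gross erosion times a delivery ratio.

            The delivery ratio is modelled here as SDR = exp(-beta * travel_time),
            one common way of linking SDR to the travel time of surface runoff from
            a cell to the nearest channel (beta is a calibration parameter).
            """
            sdr = math.exp(-beta * travel_time_h)
            return gross_erosion_t * sdr, sdr

        if __name__ == "__main__":
            # illustrative numbers only (tons per cell, hours of travel time)
            for erosion, tt in [(1200.0, 0.5), (1200.0, 3.0)]:
                y, sdr = sediment_yield(erosion, tt)
                print(f"travel time {tt} h -> SDR {sdr:.2f}, yield {y:.0f} t")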

  12. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    with building a plantwide model-based optimization layer, which searches for optimal values regarding the pretreatment temperature, enzyme dosage in liquefaction, and yeast seed in fermentation such that profit is maximized [7]. When biomass is pretreated, by-products are also created that affect the downstream...... processes acting as inhibitors in enzymatic hydrolysis and fermentation. Therefore, the biorefinery is treated in an integrated manner capturing the trade-offs between the conversion steps. Sensitivity and uncertainty analysis is also performed in order to identify the modeling bottlenecks and which...

  13. Modelling large scale human activity in San Francisco

    Science.gov (United States)

    Gonzalez, Marta

    2010-03-01

    A diverse group of people with a wide variety of schedules, activities and travel needs composes our cities nowadays. This represents a big challenge for modeling travel behaviors in urban environments; those models are of crucial interest for a wide variety of applications such as traffic forecasting, spreading of viruses, or measuring human exposure to air pollutants. The traditional means to obtain knowledge about travel behavior is limited to surveys on travel journeys. The obtained information is based on questionnaires that are usually costly to implement and have intrinsic limitations in covering large numbers of individuals, along with some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally, we use a complementary data set given by smart subway fare cards, offering information about the exact time of each passenger entering or leaving a subway station and its coordinates. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns in each visited location in further detail. Integrating the two described data sets, we provide a dynamical model of human travels that incorporates different aspects observed empirically.

  14. Large scale semantic 3D modeling of the urban landscape

    NARCIS (Netherlands)

    I. Esteban Lopez

    2012-01-01

    Modeling and understanding large urban areas is becoming an important topic in a world where everything is being digitized. A semantic and accurate 3D representation of a city can be used in many applications such as event and security planning and management, assisted navigation, autonomous operatio

  15. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  16. Soil carbon management in large-scale Earth system modelling

    DEFF Research Database (Denmark)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.;

    2015-01-01

    Croplands are vital ecosystems for human well-being and provide important ecosystem services such as crop yields, retention of nitrogen and carbon storage. On large (regional to global)-scale levels, assessment of how these different services will vary in space and time, especially in response to...... modelling C–N interactions in agricultural ecosystems under future environmental change and the effects these have on terrestrial biogeochemical cycles....

  17. Multistability in Large Scale Models of Brain Activity.

    Directory of Open Access Journals (Sweden)

    Mathieu Golos

    2015-12-01

    Full Text Available Noise driven exploration of a brain network's dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network's capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain's dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system's attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a larger number of attractors than previously reported. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. Those different modes share similarity with the fMRI independent components observed in the "resting state" condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors.
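
    A minimal sketch of the attractor-sampling procedure described above, for a deterministic graded-response Hopfield network, is given below. The toy random symmetric coupling matrix, the gain (inverse temperature), and the rounding used to group converged states are placeholders for the connectome-based setup of the study.

        import numpy as np

        def find_attractors(W, beta=5.0, n_init=200, n_iter=2000, tol=1e-6, seed=0):
            """Sample fixed-point attractors of a graded-response Hopfield network.

            W    : (N, N) symmetric coupling matrix (e.g. derived from a connectome),
            beta : gain of the tanh response (inverse temperature).
            Dynamics (a common discrete-time variant): x <- tanh(beta * W @ x).
            Initial conditions are sampled uniformly; converged states are grouped
            into attractors by rounding.
            """
            rng = np.random.default_rng(seed)
            n = W.shape[0]
            attractors = {}
            for _ in range(n_init):
                x = rng.uniform(-1.0, 1.0, n)
                for _ in range(n_iter):
                    x_new = np.tanh(beta * (W @ x))
                    converged = np.max(np.abs(x_new - x)) < tol
                    x = x_new
                    if converged:
                        break
                key = tuple(np.round(x, 2))
                attractors[key] = attractors.get(key, 0) + 1
            return attractors

        if __name__ == "__main__":
            # toy random symmetric "connectome" for illustration only
            rng = np.random.default_rng(1)
            A = rng.normal(size=(20, 20))
            W = (A + A.T) / (2.0 * np.sqrt(20))
            basins = find_attractors(W)
            print(len(basins), "distinct attractors found from 200 initial conditions")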

  18. Disease Modeling via Large-Scale Network Analysis

    Science.gov (United States)

    2015-05-20

    enzymes, ion channels, G-protein-coupled receptors, and nuclear receptors). Our methods incorporate information from eight other model organisms, namely... specific genes to traits and diseases, especially polygenic traits, which are the most challenging. We are also interested in developing theoretical... plant traits relevant to desirable agricultural properties to important human diseases.

  19. Improving large-scale groundwater models by considering fossil gradients

    Science.gov (United States)

    Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph

    2017-05-01

    Due to limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes compared to those which are usually applied in renewable systems. Fossil groundwater is a finite resource and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of them show notable hydraulic gradients and a flow towards their discharge areas, even without pumping. As a result, these systems have more discharge than recharge and hence are not in steady state, which makes their modelling, in particular the calibration, very challenging. In this study, we introduce a new calibration approach, composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluate the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.

  20. Large Scale Simulations of the Kinetic Ising Model

    Science.gov (United States)

    Münkel, Christian

    We present Monte Carlo simulation results for the dynamical critical exponent z of the two- and three-dimensional kinetic Ising model. The z-values were calculated from the magnetization relaxation from an ordered state into the equilibrium state at Tc for very large systems with up to (169984)^2 and (3072)^3 spins. To our knowledge, these are the largest Ising systems simulated to date. We also report the successful simulation of very large lattices on a massively parallel MIMD computer with high speedups of approximately 1000 and an efficiency of about 0.93.
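
    A small sketch of the underlying measurement is given below: magnetization relaxation from an ordered state at Tc under single-spin-flip Metropolis dynamics on a (much smaller) two-dimensional lattice. The lattice size and sweep count are illustrative; z would then be estimated from the power-law decay of m(t) at criticality.

        import numpy as np

        def magnetization_relaxation(L=64, steps=200, T=2.269185, seed=0):
            """Relaxation of an ordered 2D Ising lattice at Tc under Metropolis dynamics.

            Returns m(t), the magnetization per spin after each sweep. The dynamical
            exponent z can be estimated from the power-law decay m(t) ~ t**(-beta/(nu*z))
            at criticality (beta = 1/8, nu = 1 for the 2D Ising model).
            """
            rng = np.random.default_rng(seed)
            spins = np.ones((L, L), dtype=int)          # ordered initial state
            m = []
            for _ in range(steps):
                for _ in range(L * L):                  # one Monte Carlo sweep
                    i, j = rng.integers(0, L, size=2)
                    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                    dE = 2.0 * spins[i, j] * nb         # energy cost of flipping spin (i, j)
                    if dE <= 0 or rng.random() < np.exp(-dE / T):
                        spins[i, j] *= -1
                m.append(abs(spins.mean()))
            return np.array(m)

        if __name__ == "__main__":
            mt = magnetization_relaxation()
            print("m(t) at sweeps 10, 50, 100:", mt[9], mt[49], mt[99])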

  1. Soil carbon management in large-scale Earth system modelling

    DEFF Research Database (Denmark)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.

    2015-01-01

    .5. Our results show that the potential for carbon sequestration due to typical cropland management practices such as no-till management and cover crops proposed in previous studies is not realised, globally or over larger climatic regions. Our results highlight important considerations to be made when......Croplands are vital ecosystems for human well-being and provide important ecosystem services such as crop yields, retention of nitrogen and carbon storage. On large (regional to global)-scale levels, assessment of how these different services will vary in space and time, especially in response...... to cropland management, are scarce. We explore cropland management alternatives and the effect these can have on future C and N pools and fluxes using the land-use-enabled dynamic vegetation model LPJ-GUESS (Lund–Potsdam–Jena General Ecosystem Simulator). Simulated crop production, cropland carbon storage...

  2. Bulk Motions in Large-Scale Void Models

    CERN Document Server

    Tomita, K

    1999-01-01

    To explain the puzzling situation in the observed bulk flows on scales $\sim 150 h^{-1}$ Mpc ($H_0 = 100 h$ km sec$^{-1}$ Mpc$^{-1}$), we consider the observational behavior of spherically symmetric inhomogeneous cosmological models, which consist of inner and outer homogeneous regions connected by a shell or an intermediate self-similar region. It is assumed that the present matter density parameter in the inner region is smaller than that in the outer region, and the present Hubble parameter in the inner region is larger than that in the outer region. Then galaxies in the inner void-like region can be seen to have a bulk motion relative to matter in the outer region, when we observe them at a point O deviated from the center C of the inner region. Their velocity $v_p$ in the CO direction is equal to the difference of the two Hubble parameters multiplied by the distance between C and O. It is found also that the velocity $v_d$ corresponding to CMB dipole anisotropy observed at O is by a factor $\approx 10$ ...
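
    The quoted relation, a bulk velocity equal to the difference of the two Hubble parameters multiplied by the distance between C and O, reduces to a one-line calculation; the parameter values used below are purely illustrative.

        def bulk_velocity(h_inner, h_outer, distance_mpc):
            """Bulk velocity of inner-region galaxies relative to the outer region.

            v_p = (H_inner - H_outer) * d(C, O), following the relation stated in the
            abstract. Units: H in km/s/Mpc, distance in Mpc -> velocity in km/s.
            """
            return (h_inner - h_outer) * distance_mpc

        if __name__ == "__main__":
            # illustrative values only: a modest Hubble-parameter contrast and a 40 Mpc offset
            print(bulk_velocity(71.5, 65.0, 40.0), "km/s")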

  3. Large Scale Modelling of Glow Discharges or Non - Plasmas

    Science.gov (United States)

    Shankar, Sadasivan

    The Electron Velocity Distribution Function (EVDF) in the cathode fall of a DC helium glow discharge was evaluated from a numerical solution of the Boltzmann Transport Equation (BTE). The numerical technique was based on a Petrov-Galerkin technique and a unique combination of streamline upwinding with self-consistent feedback-based shock-capturing. The EVDF for the cathode fall was solved at 1 Torr, as a function of position x, axial velocity v_x, radial velocity v_r, and time t. The electron-neutral collisions consisted of elastic, excitation, and ionization processes. The algorithm was optimized and vectorized to speed execution by more than a factor of 10 on a CRAY-XMP. Efficient storage schemes were used to save the memory allocation required by the algorithm. The analysis of the solution of the BTE was done in terms of the eight moments that were evaluated. Higher moments were found necessary to study the momentum and energy fluxes. The time and length scales were estimated and used as a basis for the characterization of DC glow discharges. Based on an exhaustive study of Knudsen numbers, it was observed that the electrons in the cathode fall were in the transition or Boltzmann regime. The shortest relaxation time was the momentum relaxation time and the longest were the ionization and energy relaxation times. The other time scales in the processes were those for plasma reaction, diffusion, convection, transit, entropy relaxation, and the mean free flight between collisions. Different models were classified based on the moments, time scales, and length scales in their applicability to glow discharges. These consisted of the BTE with different numbers of phase and configuration dimensions, the Bhatnagar-Gross-Krook equation, moment equations (e.g. Drift-Diffusion, Drift-Diffusion-Inertia), and spherical harmonic expansions.

  4. Soil hydrologic characterization for modeling large scale soil remediation protocols

    Science.gov (United States)

    Romano, Nunzio; Palladino, Mario; Di Fiore, Paola; Sica, Benedetto; Speranza, Giuseppe

    2014-05-01

    In the Campania Region (Italy), the Ministry of Environment identified a National Interest Priority Site (NIPS) with a surface of about 200,000 ha, characterized by different levels and sources of pollution. This area, called Litorale Domitio-Agro Aversano, includes some polluted agricultural land belonging to more than 61 municipalities in the Naples and Caserta provinces. In this area, a high level of spotted soil contamination is moreover due to legal and illegal dumping of industrial and municipal wastes, with hazardous consequences also for the quality of the water table. The EU-Life+ project ECOREMED (Implementation of eco-compatible protocols for agricultural soil remediation in Litorale Domizio-Agro Aversano NIPS) has the major aim of defining an operating protocol for agriculture-based bioremediation of contaminated agricultural soils, also including the use of pollutant-extracting crops as biomasses for renewable energy production. In the framework of this project, soil hydrologic characterization plays a key role, and modeling water flow and solute transport poses two main challenges on which we focus. The first question is related to the fate of contaminants infiltrated from stormwater runoff and the potential for groundwater contamination. Another question is the quantification of fluxes and the spatial extent of root water uptake by the plant species employed to extract pollutants in the uppermost soil horizons. Given the high spatial variability of pollutant distribution, we use soil characterization at different scales, from the field scale when facing the root water uptake process to the regional scale when simulating the interaction between soil hydrology and groundwater fluxes.

  5. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  6. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an

  7. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems transport models have an inherent uncertainty which increases over time. As a consequence, the longer...... time, especially with respect to large-scale transport models. The study described in this paper contributes to fill the gap by investigating the effects of uncertainty in socio-economic variables growth rate projections on large-scale transport model forecasts, using the Danish National Transport...... the period forecasted the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature only few studies analyze uncertainty propagation patterns over...

  8. An overview of comparative modelling and resources dedicated to large-scale modelling of genome sequences.

    Science.gov (United States)

    Lam, Su Datt; Das, Sayoni; Sillitoe, Ian; Orengo, Christine

    2017-08-01

    Computational modelling of proteins has been a major catalyst in structural biology. Bioinformatics groups have exploited the repositories of known structures to predict high-quality structural models with high efficiency at low cost. This article provides an overview of comparative modelling, reviews recent developments and describes resources dedicated to large-scale comparative modelling of genome sequences. The value of subclustering protein domain superfamilies to guide the template-selection process is investigated. Some recent cases in which structural modelling has aided experimental work to determine very large macromolecular complexes are also cited.

  9. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multi-effects among the wind turbine units and certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed using the measured wind speed data. The characteristics of a large-scale wind farm are also discussed.

  10. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-07-01

    Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that were part of the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation, i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an

  11. Large-scale modeling - a tool for conquering the complexity of the brain

    Directory of Open Access Journals (Sweden)

    Mikael Djurfeldt

    2008-04-01

    Full Text Available Is there any hope of achieving a thorough understanding of higher functions such as perception, memory, thought and emotion, or is the stunning complexity of the brain a barrier which will limit such efforts for the foreseeable future? In this perspective we discuss methods to handle complexity, approaches to model building, and point to detailed large-scale models as a new contribution to the toolbox of the computational neuroscientist. We elucidate some aspects which distinguish large-scale models and some of the technological challenges which they entail.

  12. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    NARCIS (Netherlands)

    M.G. de Jong (Martijn); J-B.E.M. Steenkamp (Jan-Benedict)

    2009-01-01

    textabstractWe present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups

  13. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    Science.gov (United States)

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  14. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural, catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation to approach European hydrology with respect to observed patterns on large scales, with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also enable the detection of shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  15. A Minimal Model for Large-scale Epitaxial Growth Kinetics of Graphene

    CERN Document Server

    Jiang, Huijun

    2015-01-01

    Epitaxial growth via chemical vapor deposition is considered to be the most promising way towards synthesizing large-area graphene with high quality. However, it remains a big theoretical challenge to reveal the growth kinetics with atomically energetic and large-scale spatial information included. Here, we propose a minimal kinetic Monte Carlo model to address such an issue on an active catalyst surface with graphene/substrate lattice mismatch, which enables us to perform large-scale simulations of the growth kinetics over a two-dimensional surface with growth fronts of complex shapes. A geometry-determined large-scale growth mechanism is revealed, where the rate-dominating event is found to be $C_{1}$-attachment for concave growth front segments and $C_{5}$-attachment for others. This growth mechanism leads to an interesting time-resolved growth behavior which is consistent with that observed in a recent scanning tunneling microscopy experiment.

  16. Testing cosmological models with large-scale power modulation using microwave background polarization observations

    CERN Document Server

    Bunn, Emory F; Zheng, Haoxuan

    2016-01-01

    We examine the degree to which observations of large-scale cosmic microwave background (CMB) polarization can shed light on the puzzling large-scale power modulation in maps of CMB anisotropy. We consider a phenomenological model in which the observed anomaly is caused by modulation of large-scale primordial curvature perturbations, and calculate Fisher information and error forecasts for future polarization data, constrained by the existing CMB anisotropy data. Because a significant fraction of the available information is contained in correlations with the anomalous temperature data, it is essential to account for these constraints. We also present a systematic approach to finding a set of normal modes that maximize the available information, generalizing the well-known Karhunen-Loeve transformation to take account of the constraints from the temperature data. A polarization map covering at least $\sim 60\%$ of the sky should be able to provide a $3\sigma$ detection of modulation at the level favored by the...

  17. Nongray-gas Effects in Modeling of Large-scale Oxy-fuel Combustion Processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    , among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a large-scale utility boiler is numerically investigated...... cases. The simulation results show that the gray and non-gray calculations of the same oxy-fuel WSGGM make distinctly different predictions in the wall radiative heat transfer, incident radiative flux, radiative source, gas temperature and species profiles. In relative to the non-gray implementation...

  18. Modeling Research on Manufacturing Execution System Based on Large-scale System Cybernetics

    Institute of Scientific and Technical Information of China (English)

    WU Yu; XU Xiao-dong; LI Cong-xin

    2008-01-01

    A cybernetics model of a manufacturing execution system (MES_CM) was proposed and studied from the viewpoint of cybernetics. Combining the features of the manufacturing system, the MES_CM was modeled by the "generalized modeling" method discussed in large-scale system theory. The mathematical model of the MES_CM was constructed by the generalized operator model, and the main characteristics of the MES_CM were analyzed.

  19. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  20. PLATO: data-oriented approach to collaborative large-scale brain system modeling.

    Science.gov (United States)

    Kannon, Takayuki; Inagaki, Keiichiro; Kamiji, Nilton L; Makimura, Kouji; Usui, Shiro

    2011-11-01

    The brain is a complex information processing system, which can be divided into sub-systems, such as the sensory organs, functional areas in the cortex, and motor control systems. In this sense, most of the mathematical models developed in the field of neuroscience have mainly targeted a specific sub-system. In order to understand the details of the brain as a whole, such sub-system models need to be integrated toward the development of a neurophysiologically plausible large-scale system model. In the present work, we propose a model integration library where models can be connected by means of a common data format. Here, the common data format should be portable so that models written in any programming language, computer architecture, and operating system can be connected. Moreover, the library should be simple so that models can be adapted to use the common data format without requiring any detailed knowledge on its use. Using this library, we have successfully connected existing models reproducing certain features of the visual system, toward the development of a large-scale visual system model. This library will enable users to reuse and integrate existing and newly developed models toward the development and simulation of a large-scale brain system model. The resulting model can also be executed on high performance computers using Message Passing Interface (MPI).

  1. Large-Scale Forest Modeling: Deducing Stand Density from Inventory Data

    Directory of Open Access Journals (Sweden)

    Oskar Franklin

    2012-01-01

    Full Text Available While effects of thinning and natural disturbances on stand density play a central role for forest growth, their representation in large-scale studies is restricted by both model and data availability. Here a forest growth model was combined with a newly developed generic thinning model to estimate stand density and site productivity based on widely available inventory data (tree species, age class, volume, and increment. The combined model successfully coupled biomass, increment, and stand closure (=stand density/self-thinning limited stand density, as indicated by cross-validation against European-wide inventory data. The improvement in model performance attained by including variable stand closure among age cohorts compared to a fixed closure suggests that stand closure is an important parameter for accurate forest growth modeling also at large scales.
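
    The closure definition quoted in parentheses above can be illustrated with a short sketch, assuming a Reineke-type self-thinning limit; the coefficient, reference diameter, and exponent below are generic illustrative values, not those fitted in the cited study.

        def stand_closure(stems_per_ha, mean_diameter_cm, k=1100.0, d_ref=25.0, exponent=-1.605):
            """Stand closure = stand density / self-thinning-limited stand density.

            The self-thinning limit is written here in Reineke's form,
            N_max = k * (D / d_ref)**(-1.605); k, d_ref and the exponent are
            illustrative values, not parameters of the cited model.
            """
            n_max = k * (mean_diameter_cm / d_ref) ** exponent
            return min(stems_per_ha / n_max, 1.0)

        if __name__ == "__main__":
            # illustrative stand: 600 stems/ha with a mean diameter of 30 cm
            print(round(stand_closure(stems_per_ha=600.0, mean_diameter_cm=30.0), 2))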

  2. Formulation of Subgrid Variability and Boundary-Layer Cloud Cover in Large-Scale Models

    Science.gov (United States)

    2007-11-02

    soils have been specifically evaluated in terms of a van Genuchten formulation. The CAPS model was originally formulated for inclusion in large...terrestrial atmospheric boundary layers, suitable for inclusion in large-scale models. The ABL mixing scheme (Troen and Mahrt, 1986) includes both...AFGL soil model (OSU-PL land-surface scheme) coupled to a boundary layer model developed by Jan Paegle, Univ. Utah.

  3. "GRAY-BOX" MODELING METHOD AND PARAMETERS IDENTIFICATION FOR LARGE-SCALE HYDRAULIC SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Modeling and digital simulation is an effective method to analyze the dynamic characteristics of a hydraulic system. It is difficult to determine some performance parameters in the hydraulic system by means of currently used modeling methods. The "gray-box" modeling method for large-scale hydraulic systems is introduced. The principle of the method, the submodels of some components, and the parameter identification of components or subsystems are discussed.

  4. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity at large scale. We use a Shuttle Radar Topography Mission based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion of a falling body on an inclined plane. We assume that an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
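
    A hedged sketch of the two ingredients described above, a falling-body velocity update along an inclined channel segment and an inelastic-collision merge at a junction, is given below; the loss term and all numerical values are illustrative assumptions, not the authors' calibrated formulation.

        import math

        def segment_velocity(v_in, drop_m, length_m, friction=0.0005):
            """Flow velocity at the end of a channel segment, treated as a body
            sliding down an inclined plane with a simple loss term (illustrative form):
            v_out**2 = v_in**2 + 2*g*dz - friction * L * v_in**2."""
            g = 9.81
            v2 = v_in ** 2 + 2.0 * g * drop_m - friction * length_m * v_in ** 2
            return math.sqrt(max(v2, 0.0))

        def merged_velocity(q1, v1, q2, v2):
            """Velocity after an inelastic merge of two branches: momentum of the
            joined flow is conserved, with discharges q1, q2 acting as the masses."""
            return (q1 * v1 + q2 * v2) / (q1 + q2)

        if __name__ == "__main__":
            # illustrative segments: (inflow velocity m/s, elevation drop m, length m)
            va = segment_velocity(1.0, drop_m=2.0, length_m=5000.0)
            vb = segment_velocity(0.8, drop_m=1.5, length_m=4000.0)
            print("merged velocity:", round(merged_velocity(30.0, va, 12.0, vb), 2), "m/s")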

  5. Validating the Runoff from the PRECIS Model Using a Large-Scale Routing Model

    Institute of Scientific and Technical Information of China (English)

    CAO Lijuan; DONG Wenjie; XU Yinlong; ZHANG Yong; Michael SPARROW

    2007-01-01

    The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model driven by 15-year (1979-1993) ECMWF reanalysis data as the initial and lateral boundary conditions and an off-line large-scale routing model (LRM). The LRM uses physical catchment and river channel information and allows streamflow to be predicted for large continental rivers with a 1°× 1° spatial resolution. The results show that the PRECIS model can reproduce the general southeast to northwest gradient distribution of the precipitation over the Yellow River basin. The PRECIS-LRM model combination has the capability to simulate the seasonal and annual streamflow over the Yellow River basin. The simulated streamflow is generally coincident with the naturalized streamflow both in timing and in magnitude.

  6. Towards a self-consistent halo model for the nonlinear large-scale structure

    CERN Document Server

    Schmidt, Fabian

    2015-01-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: $(i)$ they do not enforce the stress-energy conservation of matter; $(ii)$ they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model ("EHM") that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed, and results of perturbation theory and the effective field theory can in principle be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written he...

  7. Systems Execution Modeling Technologies for Large-Scale Net-Centric Department of Defense Systems

    Science.gov (United States)

    2011-12-01

    represents an indivisible unit of functionality, such as an EJB or CORBA component. A configuration is a valid composition of Features that produces a...Component-based middleware, such as the Lightweight CORBA Component Model, are increasingly used to implement large-scale distributed, real-time and...development, packaging, and deployment frameworks for a wide range of component middleware. Although originally developed for the CORBA Component Model

  8. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  9. Modeling the large-scale redshift-space 3-point correlation function of galaxies

    CERN Document Server

    Slepian, Zachary

    2016-01-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the Baryon Acoustic Oscillation (BAO) method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted $\Omega_{\rm m}$ and bias values, the rescaling is a factor of $\sim 1.8$. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  10. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches...... of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements...... are converted to discharge using rating curves of simulated discharge versus observed altimetry. This approach makes it possible to use altimetry data from river cross sections where both in-situ rating curves and accurate river cross section geometry are not available. Model updating based on radar altimetry...
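
    A small sketch of the rating-curve step described above is given below: a power-law rating curve is fitted between altimetric water levels and simulated discharge, and then used to convert new levels to discharge. The datum handling and the synthetic level/discharge pairs are assumptions made for illustration only.

        import numpy as np

        def fit_rating_curve(levels_m, discharges_m3s, h0=None):
            """Fit a power-law rating curve Q = a * (h - h0)**b in log space.

            levels_m       : altimetric water levels at a virtual station,
            discharges_m3s : simulated (or gauged) discharges for the same dates,
            h0             : datum; if None, set slightly below the lowest observed level.
            """
            h = np.asarray(levels_m, dtype=float)
            q = np.asarray(discharges_m3s, dtype=float)
            if h0 is None:
                h0 = h.min() - 0.1
            b, log_a = np.polyfit(np.log(h - h0), np.log(q), 1)
            return np.exp(log_a), b, h0

        def level_to_discharge(level_m, a, b, h0):
            """Convert an altimetric level to discharge with the fitted curve."""
            return a * (level_m - h0) ** b

        if __name__ == "__main__":
            # synthetic (level in m, discharge in m3/s) pairs for illustration
            levels = [2.1, 3.4, 4.0, 5.2, 6.1]
            flows = [800.0, 2600.0, 3900.0, 7200.0, 10500.0]
            a, b, h0 = fit_rating_curve(levels, flows)
            print("Q(4.5 m) =", round(level_to_discharge(4.5, a, b, h0)), "m3/s")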

  11. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    CERN Document Server

    Fonseca, Ricardo A; Fiúza, Frederico; Davidson, Asher; Tsung, Frank S; Mori, Warren B; Silva, Luís O

    2013-01-01

    A new generation of laser wakefield accelerators, supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modeling for further understanding of the underlying physics and identification of optimal regimes, but large scale modeling of these scenarios is computationally heavy and requires efficient use of state-of-the-art Petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed / shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modeling of LWFA, demonstrating speedups of over 1 order of magni...

  12. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars in the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of it. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today's challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today's and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for system development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  13. A Practical Ontology for the Large-Scale Modeling of Scholarly Artifacts and their Usage

    CERN Document Server

    Rodriguez, Marko A; Van de Sompel, Herbert

    2007-01-01

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  14. A Data-Driven Analytic Model for Proton Acceleration by Large-Scale Solar Coronal Shocks

    CERN Document Server

    Kozarev, Kamen A

    2016-01-01

    We have recently studied the development of an eruptive filament-driven, large-scale off-limb coronal bright front (OCBF) in the low solar corona (Kozarev et al. 2015), using remote observations from Solar Dynamics Observatory's Advanced Imaging Assembly EUV telescopes. In that study, we obtained high-temporal resolution estimates of the OCBF parameters regulating the efficiency of charged particle acceleration within the theoretical framework of diffusive shock acceleration (DSA). These parameters include the time-dependent front size, speed, and strength, as well as the upstream coronal magnetic field orientations with respect to the front's surface normal direction. Here we present an analytical particle acceleration model, specifically developed to incorporate the coronal shock/compressive front properties described above, derived from remote observations. We verify the model's performance through a grid of idealized case runs using input parameters typical for large-scale coronal shocks, and demonstrate ...

  15. Dynamic model of frequency control in Danish power system with large scale integration of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2013-01-01

    power system model with large scale of wind power is developed and a case study for an inaccurate wind power forecast is investigated. The goal of this work is to develop an adequate power system model that depicts relevant dynamic features of the power plants and compensates for load generation......This work evaluates the impact of large scale integration of wind power in future power systems when 50% of load demand can be met from wind power. The focus is on active power balance control, where the main source of power imbalance is an inaccurate wind speed forecast. In this study, a Danish...... imbalances, caused by inaccurate wind speed forecast, by an appropriate control of the active power production from power plants....

  16. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    Energy Technology Data Exchange (ETDEWEB)

    RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; BOLLEN, JOHAN [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. They present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  17. Violent wave impacts on vertical and inclined walls: Large scale model tests

    DEFF Research Database (Denmark)

    Obhrai, C.; Bullock, G.; Wolters, G.

    2005-01-01

    New data is presented from large scale model tests where combined measurements of wave pressure and aeration have been made on the front of a vertical and an inclined wall. The shape of the breaking wave was found to have a significant effect on the distribution of the wave impact pressures...... on the wall. The characteristics of violent wave impacts are discussed and related to the impulse on the structure....

  18. Towards a model of large scale dynamics in transitional wall-bounded flows

    CERN Document Server

    Manneville, Paul

    2015-01-01

    A system of simplified equations is proposed to govern the feedback interactions of large-scale flows present in laminar-turbulent patterns of transitional wall-bounded flows, with small-scale Reynolds stresses generated by the self-sustainment process of turbulence itself modeled using an extension of Waleffe's approach (Phys. Fluids 9 (1997) 883-900), the detailed expression of which is displayed as an annex to the main text.

  19. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    Science.gov (United States)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and ground water) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population increases and economic developments would intensively affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, there are few studies that dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Moreover, irrigation management in response to subseasonal variability in weather and crop response varies for each region and each crop. To deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimations. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model was based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoffs simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of

  20. Reionization on Large Scales I: A Parametric Model Constructed from Radiation-Hydrodynamic Simulations

    CERN Document Server

    Battaglia, Nick; Cen, Renyue; Loeb, Abraham

    2012-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048^3 dark matter particles, 2048^3 gas cells, and 17 billion adaptive rays in a L = 100 Mpc/h box, we show that the density and reionization-redshift fields are highly correlated on large scales (>~ 1 Mpc/h). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization-redshift field. The parametric model has three free parameters which can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionizati...

  1. Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process

    DEFF Research Database (Denmark)

    Konttinen, Jukka T.; Johnsson, Jan Erik

    1999-01-01

    -up. Steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process to be studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400......-650°C and at elevated pressure. In this paper, hydrodynamic modeling equations for dense fluidized bed and freeboard are applied for the prediction of the performance of a large-scale regeneration reactor. These equations can partly explain the differences in modeling results observed with a simpler...

  2. Understanding dynamics of large-scale atmospheric vortices with moist-convective shallow water model

    Science.gov (United States)

    Rostami, M.; Zeitlin, V.

    2016-08-01

    Atmospheric jets and vortices which, together with inertia-gravity waves, constitute the principal dynamical entities of large-scale atmospheric motions, are well described in the framework of one- or multi-layer rotating shallow water models, which are obtained by vertical averaging of the full “primitive” equations. There is a simple and physically consistent way to include moist convection in these models by adding a relaxational parameterization of precipitation and coupling precipitation with convective fluxes with the help of moist enthalpy conservation. We recall the construction of the moist-convective rotating shallow water (mcRSW) model and give an example of application to upper-layer atmospheric vortices.

  3. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper employ either computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language together with mathematical theory; the simulations run on clusters, which act as a high-performance computing platform for parallel execution of the program. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  4. Large-scale structure of the Universe in unstable dark matter models

    Energy Technology Data Exchange (ETDEWEB)

    Doroshkevich, A.G.; Khlopov, M.U. (AN SSSR, Moscow (USSR). Inst. Prikladnoj Matematiki); Klypin, A.A. (Space Research Inst., Moscow (USSR))

    1989-08-15

    We discuss the formation and evolution of the large-scale structure in unstable dark matter (UDM) models. The main feature of these models is that galaxy formation starts after the particle decays. We find reasonable agreement with the observed picture for models with a decaying-particle mass of 60-90 eV and a decay time of (0.3-1.5) x 10^9 yr. Galaxy formation in UDM models starts at z = 3 if the decay products are relativistic at present, or no later than z = 6-7 if the products are non-relativistic. (author).

  5. Modeling dynamic functional information flows on large-scale brain networks.

    Science.gov (United States)

    Lv, Peili; Guo, Lei; Hu, Xintao; Li, Xiang; Jin, Changfeng; Han, Junwei; Li, Lingjiang; Liu, Tianming

    2013-01-01

    Growing evidence from the functional neuroimaging field suggests that human brain functions are realized via dynamic functional interactions on large-scale structural networks. Even in the resting state, functional brain networks exhibit remarkable temporal dynamics. However, computational modeling of such dynamic functional information flows on large-scale brain networks has rarely been explored. In this paper, we present a novel computational framework to explore this problem using multimodal resting state fMRI (R-fMRI) and diffusion tensor imaging (DTI) data. Recent literature reports, including our own studies, have demonstrated that resting state brain networks dynamically undergo a set of distinct brain states. Within each quasi-stable state, functional information flows from one set of structural brain nodes to other sets of nodes, which is analogous to message packages being routed on the Internet from a source node to a destination. Therefore, based on the large-scale structural brain networks constructed from DTI data, we employ a dynamic programming strategy to infer functional information transition routines on the structural networks, based on which hub routers that most frequently participate in these routines are identified. Interestingly, a majority of those hub routers are located within the default mode network (DMN), revealing a possible mechanism for the critical functional hub roles played by the DMN in the resting state. Application of this framework to a post-traumatic stress disorder (PTSD) dataset also demonstrated interesting differences in hub router distributions between PTSD patients and healthy controls.
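    The routing analogy can be illustrated with a minimal sketch: given a structural connectivity matrix, routes between source and destination nodes are computed as shortest paths (here with Dijkstra's algorithm via SciPy rather than the authors' dynamic programming pipeline), and the nodes appearing most often along those routes are counted as candidate "hub routers". The connectivity matrix and node sets below are made up for illustration.

```python
# Illustrative sketch (not the authors' code): infer "information flow" routes
# on a structural network as shortest paths, then count hub routers.
import numpy as np
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(0)

# Hypothetical structural connectivity (symmetric, weighted); stronger
# connections are treated as shorter "distances".
n_nodes = 20
strength = rng.random((n_nodes, n_nodes))
strength = (strength + strength.T) / 2
np.fill_diagonal(strength, 0)
dist = 1.0 / (strength + 1e-9)          # convert connection strength to cost

# Hypothetical source/destination node sets for one quasi-stable state.
sources, destinations = [0, 3, 7], [12, 15, 19]

dist_matrix, predecessors = dijkstra(dist, return_predecessors=True,
                                     indices=sources)

hub_counts = np.zeros(n_nodes, dtype=int)
for i, s in enumerate(sources):
    for d in destinations:
        node = d
        while node != s and node >= 0:   # walk back along the predecessor tree
            hub_counts[node] += 1
            node = predecessors[i, node]
        hub_counts[s] += 1

print("most frequently used nodes:", np.argsort(hub_counts)[::-1][:5])
```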

  6. Examining item-position effects in large-scale assessment using the Linear Logistic Test Model

    Directory of Open Access Journals (Sweden)

    CHRISTINE HOHENSINN

    2008-09-01

    Full Text Available When administering large-scale assessments, item-position effects are of particular importance because the applied test designs very often contain several test booklets with the same items presented at different test positions. Establishing such position effects would be most critical; it would mean that the estimated item parameters do not depend exclusively on the items’ difficulties due to content but also on their presentation positions. As a consequence, item calibration would be biased. By means of the linear logistic test model (LLTM, item-position effects can be tested. In this paper, the results of a simulation study demonstrating how LLTM is indeed able to detect certain position effects in the framework of a large-scale assessment are presented first. Second, empirical item-position effects of a specific large-scale competence assessment in mathematics (4th grade students are analyzed using the LLTM. The results indicate that a small fatigue effect seems to take place. The most important consequence of the given paper is that it is advisable to try pertinent simulation studies before an analysis of empirical data takes place; the reason is, that for the given example, the suggested Likelihood-Ratio test neither holds the nominal type-I-risk, nor qualifies as “robust”, and furthermore occasionally shows very low power.

  7. Large-scale Ice Discharge Events in a Pure Ice Sheet Model

    Science.gov (United States)

    Alverson, K.; Legrand, P.; Papa, B. D.; Mysak, L. A.; Wang, Z.

    2004-05-01

    Sediment cores in the North Atlantic show evidence of periodic large-scale ice discharge events between 60 ka and 10 ka BP. These events occurred with a typical period between 5 kyr and 10 kyr. During each event, a significant amount of ice was discharged from the Hudson Bay region through the Hudson Strait and into the North Atlantic. This input of freshwater through the melting of icebergs is thought to have strongly affected the Atlantic thermohaline circulation. One theory is that these periodic ice discharge events represent an internal oscillation of the ice sheet under constant forcing. A second theory requires some variable external forcing on an unstable ice sheet to produce a discharge event. Using the ice sheet model of Marshall, an attempt is made to simulate periodic large-scale ice discharge events within the framework of the first theory. In this case, ice sheet surges and large-scale discharge events occur as a free oscillation of the ice sheet. An analysis of the activation of ice surge events and the thermodynamic controls on these events is also made.

  8. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    Full Text Available This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimality conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven, so the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is stronger than that of the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for the traffic-induced evacuation decision making of large-scale activities.
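    A minimal particle swarm optimization sketch is given below for context. It optimizes a stand-in objective with plain PSO; the paper's electromagnetism-like mechanism and the bilevel evacuation objective are not reproduced, and all parameter values are arbitrary.

```python
# Minimal PSO sketch (illustrative only; the paper's electromagnetism-like
# improvement and evacuation objective are not reproduced here).
import numpy as np

def total_evacuation_time(x):
    # Stand-in objective: in the paper this would be the lower-level
    # shortest-evacuation-time problem evaluated for a candidate solution x.
    return np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(1)
n_particles, n_dims, n_iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights

pos = rng.random((n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([total_evacuation_time(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([total_evacuation_time(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best objective found:", pbest_val.min())
```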

  9. A cooperative strategy for parameter estimation in large scale systems biology models

    Directory of Open Access Journals (Sweden)

    Villaverde Alejandro F

    2012-06-01

    Full Text Available Abstract Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models of the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here

  10. Can limited area NWP and/or RCM models improve on large scales inside their domain?

    Science.gov (United States)

    Mesinger, Fedor; Veljovic, Katarina

    2017-04-01

    In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain. Note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable the development of large scales improved compared to those of the driver model. This would typically include higher resolution, but does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. The ensemble members referred to are run using the Eta model, and are driven by ECMWF 32-day ensemble members, initialized 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest that the eta coordinate is ill suited for high resolution models. The "sloping steps" in fact represent a simple version of the cut cell scheme. Accuracy of forecasting the position of jet stream winds, chosen to be those of speeds greater than 45 m/s at 250 hPa, expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales

  11. A double-step truncation procedure for large-scale shell-model calculations

    CERN Document Server

    Coraggio, L; Itaco, N

    2016-01-01

    We present a procedure that is helpful to reduce the computational complexity of large-scale shell-model calculations, by preserving as much as possible the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by the analysis of the effective single-particle energies of the original large-scale shell-model hamiltonian, so as to locate the relevant degrees of freedom for describing a class of isotopes or isotones, namely the single-particle orbitals that will constitute a new truncated model space. The second step is to perform a unitary transformation of the original hamiltonian from its model space into the truncated one. This transformation generates a new shell-model hamiltonian, defined in a smaller model space, that effectively retains the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model hamiltonian defined in a large model space, set up by seven and five proton and neutron single-particle orb...

  12. The restricted stochastic user equilibrium with threshold model: Large-scale application and parameter testing

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Nielsen, Otto Anker; Watling, David P.

    2017-01-01

    -pairs, and comparisons are performed with respect to a previously proposed RSUE model as well as an existing link-based mixed Multinomial Probit (MNP) SUE model. The results show that the RSUET has very attractive computation times for large-scale applications and demonstrate that the threshold addition to the RSUE...... highlight that the RSUET outperforms the MNP SUE in terms of convergence, calculation time and behavioural realism. The choice set composition is validated by using 16,618 observed route choices collected by GPS devices in the same network and observing their reproduction within the equilibrated choice sets...

  13. Model-based plant-wide optimization of large-scale lignocellulosic bioethanol plants

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Blanke, Mogens; Jakobsen, Jon Geest

    2017-01-01

    with respect to maximum economic profit of a large scale biorefinery plant using a systematic model-based plantwide optimization methodology. The following key process parameters are identified as decision variables: pretreatment temperature, enzyme dosage in enzymatic hydrolysis, and yeast loading per batch...... in fermentation. The plant is treated in an integrated manner taking into account the interactions and trade-offs between the conversion steps. A sensitivity and uncertainty analysis follows at the optimal solution considering both model and feed parameters. It is found that the optimal point is more sensitive...

  14. Advances in the study of uncertainty quantification of large-scale hydrological modeling system

    Institute of Scientific and Technical Information of China (English)

    SONG Xiaomeng; ZHAN Chesheng; KONG Fanzhe; XIA Jun

    2011-01-01

    The regional hydrological system is extremely complex because it is affected not only by physical factors but also by human dimensions, and hydrological models play a very important role in simulating this complex system. However, there have not been effective methods for model reliability and uncertainty analysis, owing to the system's complexity and difficulty. The uncertainties in hydrological modeling come from four important aspects: uncertainties in input data and parameters, uncertainties in model structure, uncertainties in the analysis method, and the initial and boundary conditions. This paper systematically reviews recent advances in uncertainty analysis approaches for large-scale complex hydrological models on the basis of these uncertainty sources. The shortcomings and insufficiencies of uncertainty analysis for complex hydrological models are also pointed out. A new uncertainty quantification platform, PSUADE, and its uncertainty quantification methods are then introduced; it promises to be a powerful tool and platform for uncertainty analysis of large-scale complex hydrological models. Finally, some future perspectives on uncertainty quantification are put forward.

  15. Formation and disruption of tonotopy in a large-scale model of the auditory cortex.

    Science.gov (United States)

    Tomková, Markéta; Tomek, Jakub; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, Cyril

    2015-10-01

    There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100,000 Izhikevich spiking neurons of 17 different types and almost 21 million synapses, which evolve according to Spike-Timing-Dependent Plasticity (STDP), with an architecture akin to existing observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena by analysing the activity of neuronal subtypes and testing different causal interventions in the simulation. Our model is able to produce experimental predictions on a cell-type basis. To study the influence of environmental factors on the tonotopy, different types of auditory stimulation during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available.
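    For readers unfamiliar with STDP, the sketch below shows a minimal pair-based weight update: a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed otherwise, with exponentially decaying sensitivity to the spike-time difference. The parameter values and spike trains are arbitrary and do not correspond to the 17 cell types of the model described above.

```python
# Minimal pair-based STDP sketch (illustrative; parameters are arbitrary).
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair separated by dt = t_post - t_pre."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate
        return A_plus * np.exp(-dt / tau_plus)
    if dt < 0:    # post before pre: depress
        return -A_minus * np.exp(dt / tau_minus)
    return 0.0

# Apply to hypothetical spike trains of one synapse and clip the weight.
pre_spikes, post_spikes = [10.0, 50.0, 90.0], [12.0, 45.0, 95.0]
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
print("updated synaptic weight:", w)
```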

  16. Distributed Modeling and Control of Large-Scale Highly Flexible Solar-Powered UAV

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2015-01-01

    Full Text Available The modeling, stability, and control characteristics of a large-scale, highly flexible solar-powered UAV with distributed all-span multielevons are presented. A geometrically nonlinear intrinsic beam model is introduced to establish the coupled structural/flight-dynamics equations of motion (EOM); based on it, the explicit decoupled linear flight dynamics and structural dynamics EOM are derived through mean axis theory. Undeformed, deformed, and flexible models are compared through trimming and modal analysis. Since the deformation of the wing increases the UAV's moment of inertia about the pitch axis, the frequency of the short-period mode decreases noticeably for the deformed model. The strong coupling between the short-period mode and the first bending mode also significantly influences the roots of the short-period mode of the flexible model. The flexible model is therefore the only one able to accurately estimate the flight dynamics behavior, and it was selected as the control model. Forty distributed elevons and an LQG/LTR controller were employed to control the attitude and suppress the aeroelastic deformation of the UAV simultaneously. The dynamic performance, robustness, and simulation results show that they are suitable for a large-scale, highly flexible solar-powered UAV.

  17. A large scale hydrological model combining Budyko hypothesis and stochastic soil moisture model

    Science.gov (United States)

    Cong, Z.; Zhang, X.

    2012-04-01

    Based on the Budyko hypothesis, the actual evapotranspiration E is controlled by the water conditions and the energy conditions, which are represented by the amount of annual precipitation P and the potential evaporation E0, respectively. Some theoretical or empirical equations have been proposed to represent the Budyko curve. We here select Choudhury's equation to describe the Budyko curve (Mezentsev, 1954; Choudhury, 1999; Yang et al., 2008; Roderick and Farquhar, 2011): ε = (1 + φ^(-α))^(-1/α), with ε = E/P and φ = E0/P. Rodriguez-Iturbe et al. (1999) proposed a stochastic soil moisture model based on a Poisson-distributed rainfall assumption, and Porporato et al. (2004) described the average water balance of this stochastic soil moisture model as ε = 1 - φ γ^(γ/φ - 1) e^(-γ) / [Γ(γ/φ) - Γ(γ/φ, γ)], with γ = Zr/h, where h is the average rainfall depth and Zr the basin water storage capacity. Combining these two equations, we obtain the relation between α and γ. We then develop a large-scale hydrological model to estimate annual runoff R from P, E0, h and Zr: R = (1 - ε)P, ε = (1 + φ^(-α))^(-1/α), α = 0.7078 γ^0.5946, γ = Zr/h. This method performs well when applied to estimate annual runoff in the Yellow River Basin and the Yangtze River Basin. The impacts of climate changes (P, E0 and h) and human activities (Zr) are also discussed with this method.
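    The annual-runoff relation stated above is simple enough to transcribe directly. The snippet below evaluates it for one set of made-up inputs; the numbers are not from the Yellow or Yangtze case studies.

```python
# Numeric sketch of the annual-runoff relation stated above
# (input values are made up for illustration).
def annual_runoff(P, E0, h, Zr):
    phi = E0 / P                       # aridity index
    gamma = Zr / h                     # storage index
    alpha = 0.7078 * gamma ** 0.5946   # empirical link between the two models
    eps = (1.0 + phi ** (-alpha)) ** (-1.0 / alpha)   # E/P from Choudhury's equation
    return (1.0 - eps) * P             # annual runoff

# Example: P = 600 mm/yr, E0 = 1000 mm/yr, mean storm depth h = 10 mm,
# basin water storage capacity Zr = 150 mm.
print(annual_runoff(P=600.0, E0=1000.0, h=10.0, Zr=150.0))
```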

  18. Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn

    Science.gov (United States)

    Gargano, A.; Coraggio, L.; Itaco, N.

    2017-09-01

    This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which is aimed at reducing the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single-particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given and the merit of the double-step truncation method is illustrated by discussing a few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.

  19. A new mixed-mode fracture criterion for large scale lattice models

    Directory of Open Access Journals (Sweden)

    T. Sachau

    2013-08-01

    Full Text Available Reasonable fracture criteria are crucial for the modeling of dynamic failure in computational spring lattice models. Successful criteria exist for experiments on the micro and meso scales, based on the stress that a spring experiences. In this paper we test the applicability of these failure criteria to large-scale models, where gravity plays an important role in addition to the externally applied deformation. The resulting brittle structures do not resemble the outcome predicted by fracture mechanics and geological observations. For this reason we derive an elliptical fracture criterion, which is based on the strain energy stored in a spring. Simulations using the new criterion result in realistic structures. Another great advantage of this fracture model is that it can be combined with classic geological material parameters: the tensile strength σ0 and the shear cohesion τ0. While we tested the fracture model only for large-scale structures, there is strong reason to believe that the model is equally applicable to lattice simulations on the micro and the meso scale.

  20. Coupled climate model simulations of Mediterranean winter cyclones and large-scale flow patterns

    Directory of Open Access Journals (Sweden)

    B. Ziv

    2013-03-01

    Full Text Available The study aims to evaluate the ability of global, coupled climate models to reproduce the synoptic regime of the Mediterranean Basin. The output of simulations of the 9 models included in the IPCC CMIP3 effort is compared to the NCEP-NCAR reanalysis data for the period 1961–1990. The study examined the spatial distribution of cyclone occurrence, the mean Mediterranean upper- and lower-level troughs, the inter-annual variation and trend in the occurrence of Mediterranean cyclones, and the main large-scale circulation patterns, represented by rotated EOFs of 500 hPa and sea level pressure. The models successfully reproduce the two maxima in cyclone density in the Mediterranean and their locations, the location of the average upper- and lower-level troughs, the relative inter-annual variation in cyclone occurrences and the structure of the four leading large-scale EOFs. The main discrepancy is the models' underestimation of the cyclone density in the Mediterranean, especially in its western part. The models' skill in reproducing the cyclone distribution is found to be correlated with their spatial resolution, especially in the vertical. The ongoing improvement in model spatial resolution suggests that their ability to reproduce Mediterranean cyclones will improve as well.

  1. Forcings and Feedbacks on Convection in the 2010 Pakistan Flood: Modeling Extreme Precipitation with Interactive Large-Scale Ascent

    CERN Document Server

    Nie, Ji; Sobel, Adam H

    2016-01-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here, we examine the large-scale forcings and convective heating feedback in the precipitation events which caused the 2010 Pakistan flood within the Column Quasi-Geostrophic framework. A cloud-resolving model (CRM) is forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic li...

  2. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increasing frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact patterns of the approaches. The age

  3. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    Science.gov (United States)

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-09-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight.

  4. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  5. A model for red blood cells in simulations of large-scale blood flows

    CERN Document Server

    Melchionna, Simone

    2011-01-01

    Red blood cells (RBCs) are an essential component of blood. A method to include the particulate nature of blood is introduced here with the goal of studying circulation in large-scale realistic vessels. The method uses a combination of the Lattice Boltzmann method (LBM) to account for the plasma motion, and a modified Molecular Dynamics scheme for the cellular motion. Numerical results illustrate the quality of the model in reproducing known rheological properties of blood as well as revealing the effect of RBC structuring on the wall shear stress, with consequences for the development of cardiovascular diseases.

  6. Scheduling of power generation a large-scale mixed-variable model

    CERN Document Server

    Prékopa, András; Strazicky, Beáta; Deák, István; Hoffer, János; Németh, Ágoston; Potecz, Béla

    2014-01-01

    The book contains a description of a real-life application of modern mathematical optimization tools to an important problem for power networks. The objective is the modelling and calculation of optimal daily scheduling of power generation by thermal power plants to satisfy all demands at minimum cost, in such a way that the generation and transmission capacities as well as the demands at the nodes of the system appear in an integrated form. The physical parameters of the network are also taken into account. The resulting large-scale mixed-variable problem is relaxed in a smart, practical way to allow for fast numerical solution of the problem.

  7. Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-10-01

    Full Text Available In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed with statistical methods according to their historical operating data. Then the characteristic indexes and the filtering principle of the NEPG historical output scenarios are introduced with the confidence level, and the calculation model of NEPG's credible capacity is proposed. Based on this, taking the minimum production cost or the best energy-saving and emission-reduction effect as the optimization objective, a power system operation model with large-scale integration of NEPG is established considering the power balance, the electricity balance and the peak balance. Besides, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reserve, the emergency reserve, the water abandonment and the transmission capacity between different areas are also considered. With the proposed power system operation model, operation simulations are carried out for the actual Northwest power grid of China, resolving the new energy power accommodation under different system operating conditions. The simulation results verify the validity of the proposed power system operation model in the accommodation analysis for a power system penetrated with large-scale NEPG.

  8. Reducing biases in regional climate downscaling by applying Bayesian model averaging on large-scale forcing

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hongwei [APEC Climate Center, Busan (Korea, Republic of); Wang, Bin [University of Hawaii at Manoa, Department of Meteorology, Honolulu, HI (United States); University of Hawaii at Manoa, International Pacific Research Center, Honolulu, HI (United States); Wang, Bin [Chinese Academy of Sciences, LASG, Institute of Atmospheric Physics, Beijing (China)

    2012-11-15

    Reduction of uncertainty in large-scale lateral-boundary forcing in regional climate modeling is a critical issue for improving the performance of regional climate downscaling. Numerical simulations of 1998 East Asian summer monsoon were conducted using the Weather Research and Forecast model forced by four different reanalysis datasets, their equal-weight ensemble, and Bayesian model averaging (BMA) ensemble means. Large discrepancies were found among experiments forced by the four individual reanalysis datasets mainly due to the uncertainties in the moisture field of large-scale forcing over ocean. We used satellite water-vapor-path data as observed truth-and-training data to determine the posterior probability (weight) for each forcing dataset using the BMA method. The experiment forced by the equal-weight ensemble reduced the circulation biases significantly but reduced the precipitation biases only moderately. However, the experiment forced by the BMA ensemble outperformed not only the experiments forced by individual reanalysis datasets but also the equal-weight ensemble experiment in simulating the seasonal mean circulation and precipitation. These results suggest that the BMA ensemble method is an effective method for reducing the uncertainties in lateral-boundary forcing and improving model performance in regional climate downscaling. (orig.)
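    A toy illustration of how Bayesian model averaging weights might be derived from the fit of each forcing dataset to satellite observations is sketched below. It uses simple Gaussian likelihoods on synthetic data; the paper's actual weight-estimation procedure and the WRF experiments are not reproduced, and the dataset names and the sigma value are assumptions made for the example.

```python
# Toy sketch of Bayesian model averaging weights (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(30.0, 5.0, size=500)                 # "observed" water-vapor path
# Hypothetical moisture fields from four reanalysis forcings (synthetic biases/noise).
forcings = {
    "R1": obs + rng.normal(1.0, 2.0, obs.size),
    "R2": obs + rng.normal(-2.0, 3.0, obs.size),
    "R3": obs + rng.normal(0.2, 1.0, obs.size),
    "R4": obs + rng.normal(3.0, 4.0, obs.size),
}

def log_likelihood(pred, obs, sigma=2.0):
    # Gaussian likelihood of the observations given one forcing dataset.
    return -0.5 * np.sum(((obs - pred) / sigma) ** 2)

logL = np.array([log_likelihood(p, obs) for p in forcings.values()])
weights = np.exp(logL - logL.max())
weights /= weights.sum()                               # posterior (BMA) weights

for name, wgt in zip(forcings, weights):
    print(f"{name}: weight = {wgt:.3f}")

# The BMA ensemble forcing is the weighted combination of the members.
bma_ensemble = sum(w * p for w, p in zip(weights, forcings.values()))
print("BMA ensemble mean:", bma_ensemble.mean())
```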

  9. Reconstruction of large-scale gene regulatory networks using Bayesian model averaging.

    Science.gov (United States)

    Kim, Haseong; Gelenbe, Erol

    2012-09-01

    Gene regulatory networks provide a systematic view of molecular interactions in a complex living system. However, constructing large-scale gene regulatory networks is one of the most challenging problems in systems biology. Large bursts of biological data also require a proper integration technique for reliable gene regulatory network construction. Here we present a new reverse engineering approach based on Bayesian model averaging which attempts to combine all the appropriate models describing interactions among genes. This Bayesian approach with a prior based on the Gibbs distribution provides an efficient means to integrate multiple sources of biological data. In a simulation study with a maximum of 2000 genes, our method shows better sensitivity than previous elastic-net and Gaussian graphical models, with a fixed specificity of 0.99. The study also shows that the proposed method outperforms the other standard methods for a DREAM dataset generated by nonlinear stochastic models. In brain tumor data analysis, three large-scale networks consisting of 4422 genes were built using the gene expression of non-tumor, low-grade and high-grade tumor mRNA expression samples, along with DNA-protein binding affinity information. We found that genes having a large variation of degree distribution among the three tumor networks are the ones most involved in regulatory and developmental processes, which possibly provides novel insight beyond conventional differentially expressed gene analysis.

  10. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, building on an analysis of the characteristics and defects of the genetic algorithm and the support vector machine. In the cloud computing environment, SVM parameters are first optimized by a parallel genetic algorithm, and this optimized parallel SVM model is then used to predict traffic flow. Using the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
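    A minimal serial sketch of the GA-SVM idea is given below: a small genetic algorithm searches the SVM hyperparameters (C, gamma) that maximize cross-validated accuracy on a synthetic traffic-flow regression task. The cloud/MPI parallelization of the paper is not shown, and the data, lag structure, and GA settings are assumptions made for illustration.

```python
# Minimal GA-SVM sketch for traffic-flow regression (illustrative only).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
t = np.arange(500)
flow = 200 + 80 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 10, t.size)  # synthetic flow
X = np.column_stack([flow[i:i - 4] for i in range(4)])   # previous 4 intervals as features
y = flow[4:]                                              # next interval as target

def fitness(individual):
    log_C, log_gamma = individual
    model = SVR(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    # Higher (less negative) cross-validated MSE means better hyperparameters.
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

pop = rng.uniform([-1, -4], [3, 0], size=(20, 2))         # log10(C), log10(gamma)
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the better half
    children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.2, (10, 2))  # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best log10(C), log10(gamma):", best)
```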

  11. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    and experiments are both done for the grate boiler. The comparison between them shows an overall acceptable agreement in tendency. However at some measuring ports, big discrepancies between the modeling and the experiments are observed, mainly because the modeling-based boundary conditions (BCs) could differ...... is exposed to preheated inlet air while the top of the bed resides within the furnace. Mathematical modeling is an efficient way to understand and improve the operation and design of combustion systems. Compared to modeling of pulverized fuel furnaces, CFD modeling of biomass-fired grate furnaces...... is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling...

  12. Functional models for large-scale gene regulation networks: realism and fiction.

    Science.gov (United States)

    Lagomarsino, Marco Cosentino; Bassetti, Bruno; Castellani, Gastone; Remondini, Daniel

    2009-04-01

    High-throughput experiments are shedding light on the topology of large regulatory networks and at the same time their functional states, namely the states of activation of the nodes (for example transcript or protein levels) in different conditions, times, environments. We now possess a certain amount of information about these two levels of description, stored in libraries, databases and ontologies. A current challenge is to bridge the gap between topology and function, i.e. developing quantitative models aimed at characterizing the expression patterns of large sets of genes. However, approaches that work well for small networks become impossible to master at large scales, mainly because parameters proliferate. In this review we discuss the state of the art of large-scale functional network models, addressing the issue of what can be considered as "realistic" and what the main limitations may be. We also show some directions for future work, trying to set the goals that future models should try to achieve. Finally, we will emphasize the possible benefits in the understanding of biological mechanisms underlying complex multifactorial diseases, and in the development of novel strategies for the description and the treatment of such pathologies.

  13. An assembly model for simulation of large-scale ground water flow and transport.

    Science.gov (United States)

    Huang, Junqi; Christ, John A; Goltz, Mark N

    2008-01-01

    When managing large-scale ground water contamination problems, it is often necessary to model flow and transport using finely discretized domains--for instance (1) to simulate flow and transport near a contamination source area or in the area where a remediation technology is being implemented; (2) to account for small-scale heterogeneities; (3) to represent ground water-surface water interactions; or (4) some combination of these scenarios. A model with a large domain and fine-grid resolution will need extensive computing resources. In this work, a domain decomposition-based assembly model implemented in a parallel computing environment is developed, which will allow efficient simulation of large-scale ground water flow and transport problems using domain-wide grid refinement. The method employs common ground water flow (MODFLOW) and transport (RT3D) simulators, enabling the solution of almost all commonly encountered ground water flow and transport problems. The basic approach partitions a large model domain into any number of subdomains. Parallel processors are used to solve the model equations within each subdomain. Schwarz iteration is applied to match the flow solution at the subdomain boundaries. For the transport model, an extended numerical array is implemented to permit the exchange of dispersive and advective flux information across subdomain boundaries. The model is verified using a conventional single-domain model. Model simulations demonstrate that the proposed model operated in a parallel computing environment can result in considerable savings in computer run times (between 50% and 80%) compared with conventional modeling approaches and may be used to simulate grid discretizations that were formerly intractable.
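    The Schwarz iteration used to match the flow solution at subdomain boundaries can be illustrated on a one-dimensional toy problem. The sketch below solves a steady, source-free head distribution on two overlapping subdomains that repeatedly exchange boundary values; it is not the MODFLOW/RT3D implementation described above, and the domain split and convergence tolerance are arbitrary.

```python
# Illustrative 1-D sketch of overlapping (alternating) Schwarz iteration.
import numpy as np

n = 101
h = np.zeros(n)
h[0], h[-1] = 10.0, 5.0          # fixed heads at the two ends of the domain

left = slice(0, 60)              # subdomain 1: nodes 0..59
right = slice(40, n)             # subdomain 2: nodes 40..100 (overlap 40..59)

def solve_subdomain(h_sub):
    # Exact solution of the 1-D Laplace problem with the current end values as BCs.
    return np.linspace(h_sub[0], h_sub[-1], h_sub.size)

for iteration in range(50):
    h_old = h.copy()
    h[left] = solve_subdomain(h[left])    # uses the current value at node 59 as its BC
    h[right] = solve_subdomain(h[right])  # uses the freshly updated value at node 40
    if np.max(np.abs(h - h_old)) < 1e-10:
        break

exact = np.linspace(10.0, 5.0, n)
print("Schwarz iterations:", iteration + 1,
      "max error vs exact:", np.max(np.abs(h - exact)))
```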

  14. Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling

    CERN Document Server

    Harouna, S Kadri

    2016-01-01

    We explore the potential of a formulation of the Navier-Stokes equations incorporating a random description of the small-scale velocity component. This model, established from a version of the Reynolds transport theorem adapted to a stochastic representation of the flow, gives rise to a large-scale description of the flow dynamics in which an anisotropic subgrid tensor, reminiscent of the Reynolds stress tensor, emerges together with a drift correction due to inhomogeneous turbulence. The corresponding subgrid model, which depends on the small-scale velocity variance, generalizes the Boussinesq eddy viscosity assumption. However, it is no longer obtained from an analogy with molecular dissipation but follows rigorously from the random modeling of the flow. This principle allows us to propose several subgrid models defined directly on the resolved flow component. We assess and compare these models numerically on a standard Taylor-Green vortex flow at Reynolds number 1600. The numerical simulations, carried out w...

  15. Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design

    KAUST Repository

    Afify, Laila H.

    2016-12-01

    The ever-growing demands for wireless technologies necessitate the evolution of next generation wireless networks that fulfill the diverse requirements of wireless users. However, upscaling existing wireless networks implies upscaling an intrinsic component of the wireless domain: the aggregate network interference. Being the main performance-limiting factor, it becomes crucial to develop a rigorous analytical framework that accurately characterizes the out-of-cell interference, in order to reap the benefits of emerging networks. Due to the different network setups and key performance indicators, it is essential to conduct a comprehensive study that unifies the various network configurations together with the different tangible performance metrics. In that regard, the focus of this thesis is to present a unified mathematical paradigm, based on Stochastic Geometry, for large-scale networks with different antenna/network configurations. By exploiting such a unified study, we propose an efficient automated network design strategy to satisfy the desired network objectives. First, this thesis studies the exact aggregate network interference characterization, by accounting for each of the interferers' signals in the large-scale network. Second, we show that the information about the interferers' symbols can be approximated via the Gaussian signaling approach. The developed mathematical model provides a twofold unification of the analysis of uplink and downlink cellular networks in the literature. It aligns the tangible decoding error probability analysis with the abstract outage probability and ergodic rate analysis. Furthermore, it unifies the analysis for different antenna configurations, i.e., various multiple-input multiple-output (MIMO) systems. Accordingly, we propose a novel reliable network design strategy that is capable of appropriately adjusting the network parameters to meet desired design criteria. In addition, we discuss the diversity-multiplexing tradeoffs imposed by differently favored

  16. 3D forward modeling and inversion of large scale CSEM method

    Science.gov (United States)

    Fu, C.; Di, Q.; Xu, C.

    2012-12-01

    MT and CSAMT methods have been widely applied in areas such as coal, mineral, geothermal and engineering exploration and are very useful exploration methods, but they still have some limitations. Consequently, an electromagnetic (EM) method using a fixed, large power source, such as a long bipole current source, is beginning to take shape. In this method, the distance between receiver and source may reach thousands of kilometers, so the effect of the ionosphere on the EM fields should be considered in modeling and inversion. The integral equation (IE) method is a reliable way to perform 3D forward modeling and inversion for 3D models. We have derived a 3D IE method that can effectively model this large-scale CSEM configuration, including the ionosphere's effect. In order to explore the characteristics of the EM fields and the exploration capability of this large-scale CSEM method, we built a typical 3D mineral model and then performed forward modeling and inversion using the 3D IE method. The forward model contains two main strata: the upper layer is limestone with a resistivity of 2000 ohm.m, and the lower layer is granite with a resistivity of 5000 ohm.m; some granite also occurs in the upper layer as intrusive rock. Two ore bodies lie at the contact zone of the two rocks, with resistivities of 100 ohm.m and 200 ohm.m, respectively. The forward modeling results show that, because of the ionospheric effect, the EM field is not very weak even though the distance between receiver and source is very large. We can see an obscure low-resistivity zone in the pseudo-section map. After inversion, the two ore bodies are clearly resolved, and the intrusive mass corresponding to the original model can also be identified. The results show that our 3D IE forward and inversion code is reliable, and that the large-scale CSEM method has good resolution and can be applied in geophysical exploration.

  17. Large scale structure simulations of inhomogeneous Lemaître-Tolman-Bondi void models

    Science.gov (United States)

    Alonso, David; García-Bellido, Juan; Haugbølle, Troels; Vicente, Julián

    2010-12-01

    We perform numerical simulations of large scale structure evolution in an inhomogeneous Lemaître-Tolman-Bondi (LTB) model of the Universe. We follow the gravitational collapse of a large underdense region (a void) in an otherwise flat matter-dominated Einstein-de Sitter model. We observe how the (background) density contrast at the center of the void grows to be of order one, and show that the density and velocity profiles follow the exact nonlinear LTB solution to the full Einstein equations for all but the most extreme voids. This result seems to contradict previous claims that fully relativistic codes are needed to properly handle the nonlinear evolution of large scale structures, and that local Newtonian dynamics with an explicit expansion term is not adequate. We also find that the (local) matter density contrast grows with the scale factor in a way analogous to that of an open universe with a value of the matter density ΩM(r) corresponding to the appropriate location within the void.

  18. Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks.

    Science.gov (United States)

    Vértes, Petra E; Alexander-Bloch, Aaron; Bullmore, Edward T

    2014-10-05

    Rich clubs arise when nodes that are 'rich' in connections also form an elite, densely connected 'club'. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour.

  19. Robust linear equation dwell time model compatible with large scale discrete surface error matrix.

    Science.gov (United States)

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2015-04-01

    The linear equation dwell time model can translate the 2D convolution process of material removal during subaperture polishing into a more intuitional expression, and may provide relatively fast and reliable results. However, the accurate solution of this ill-posed equation is not so easy, and its practicability for a large scale surface error matrix is still limited. This study first solves this ill-posed equation by Tikhonov regularization and the least square QR decomposition (LSQR) method, and automatically determines an optional interval and a typical value for the damped factor of regularization, which are dependent on the peak removal rate of tool influence functions. Then, a constrained LSQR method is presented to increase the robustness of the damped factor, which can provide more consistent dwell time maps than traditional LSQR. Finally, a matrix segmentation and stitching method is used to cope with large scale surface error matrices. Using these proposed methods, the linear equation model becomes more reliable and efficient in practical engineering.
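    The damped least-squares (Tikhonov-regularized) solution of the dwell time equation can be sketched with SciPy's LSQR solver, which accepts a damping factor directly. The removal matrix, error map, and the scaling of the damping factor below are synthetic placeholders; the paper's constrained LSQR and matrix segmentation/stitching steps are not reproduced, and clipping negative dwell times is only a crude stand-in for the constrained solve.

```python
# Minimal sketch of a damped least-squares dwell-time solution (illustrative).
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(4)
n_error, n_dwell = 400, 300
A = sparse_random(n_error, n_dwell, density=0.05, random_state=4).tocsr()  # removal per unit dwell
e = rng.random(n_error)                                                     # surface error vector

peak_removal_rate = A.max()
damp = 0.05 * peak_removal_rate        # damping factor tied to the peak removal rate (assumption)

t, istop, itn = lsqr(A, e, damp=damp)[:3]
t = np.clip(t, 0.0, None)              # dwell times must be non-negative

residual = np.linalg.norm(A @ t - e) / np.linalg.norm(e)
print(f"stopped with code {istop} after {itn} iterations, relative residual {residual:.3f}")
```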

  20. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    CERN Document Server

    Reyes, Luz Marina; Aguilar, José Edgar Madriz; Bellini, Mauricio

    2012-01-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during an early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric, which is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates where gravity becomes repulsive in nature.

  1. Estimation of Large-Scale Implicit Models Using 2-Stage Methods

    Directory of Open Access Journals (Sweden)

    Rolf Henriksen

    1985-01-01

    Full Text Available The problem of estimating large scale implicit (non-recursive) models by two-stage methods is considered. The first stage of the methods is used to construct or estimate an explicit form of the total model, by constructing a minimal stochastic realization of the system. This model is then subsequently used in the second stage to generate instrumental variables for the purpose of estimating each sub-model separately. This latter stage can be carried out by utilizing a generalized least squares method, but most emphasis is put on utilizing decentralized filtering algorithms and a prediction error formulation. A note about the connection between the original TSLS method (two-stage least squares method) and stochastic realization is also made.

  2. Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten

    2013-01-01

    The stochastic block-model and its non-parametric extension, the Infinite Relational Model (IRM), have become key tools for discovering group-structure in complex networks. Identifying these groups is a combinatorial inference problem which is usually solved by Gibbs sampling. However, whether Gibbs sampling suffices and can be scaled to the modeling of large scale real world complex networks has not been examined sufficiently. In this paper we evaluate the performance and mixing ability of Gibbs sampling in the Infinite Relational Model (IRM) by implementing a high performance Gibbs sampler. We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different sizes of networks we find that the mixing ability decreases drastically with the network size, clearly indicating a need...

  3. Large-scale Monte Carlo simulations for the depinning transition in Ising-type lattice models

    Science.gov (United States)

    Si, Lisha; Liao, Xiaoyun; Zhou, Nengji

    2016-12-01

    With the developed "extended Monte Carlo" (EMC) algorithm, we have studied the depinning transition in Ising-type lattice models by extensive numerical simulations, taking the random-field Ising model with a driving field and the driven bond-diluted Ising model as examples. In comparison with the usual Monte Carlo method, the EMC algorithm exhibits greater efficiency of the simulations. Based on the short-time dynamic scaling form, both the transition field and critical exponents of the depinning transition are determined accurately via the large-scale simulations with the lattice size up to L = 8912, significantly refining the results in earlier literature. In the strong-disorder regime, a new universality class of the Ising-type lattice model is unveiled with the exponents β = 0.304(5) , ν = 1.32(3) , z = 1.12(1) , and ζ = 0.90(1) , quite different from that of the quenched Edwards-Wilkinson equation.
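
    For orientation only, the sketch below runs plain zero-temperature single-spin-flip dynamics of a 2D random-field Ising model under a uniform driving field H; it is not the "extended Monte Carlo" algorithm of the study, and the lattice size, coupling and field values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)
        L, J, H, sigma = 64, 1.0, 1.2, 1.0
        s = -np.ones((L, L), dtype=int)            # start fully magnetized against the drive
        h = rng.normal(0.0, sigma, size=(L, L))    # quenched random fields

        def sweep(spins):
            """One greedy (zero-temperature) single-spin-flip sweep; returns number of flips."""
            flips = 0
            for i in range(L):
                for j in range(L):
                    nn = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
                       + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
                    local = J * nn + H + h[i, j]   # effective local field at (i, j)
                    if spins[i, j] * local < 0:    # flipping lowers the energy
                        spins[i, j] *= -1
                        flips += 1
            return flips

        for _ in range(50):                        # relax until no spin wants to flip
            if sweep(s) == 0:
                break
        print("final magnetization:", s.mean())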

  4. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  5. From Principles to Details: Integrated Framework for Architecture Modelling of Large Scale Software Systems

    Directory of Open Access Journals (Sweden)

    Andrzej Zalewski

    2013-06-01

    Full Text Available There exist numerous models of software architecture (box models, ADLs, UML, architectural decisions), architecture modelling frameworks (views, enterprise architecture frameworks) and even standards recommending practice for the architectural description. We show in this paper that there is still a gap between these rather abstract frameworks/standards and existing architecture models. Frameworks and standards define what should be modelled rather than which models should be used and how these models are related to each other. We intend to prove that a less abstract modelling framework is needed for the effective modelling of large scale software intensive systems. It should provide more precise guidance on the kinds of models to be employed and how they should relate to each other. The paper defines principles that can serve as a basis for an integrated model. Finally, the structure of such a model is proposed. It comprises three layers: the upper one – architectural policy – reflects corporate policy and strategies in architectural terms; the middle one – system organisation pattern – represents the core structural concepts and their rationale at a given level of scope; the lower one contains detailed architecture models. Architectural decisions play an important role here: they model the core architectural concepts explaining detailed models as well as organise the entire integrated model and the relations between its submodels.

  6. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction and the interaction between the government, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed including agent representatives for the administrative stakeholders of European Member states, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
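
    A minimal sketch of the kind of household-level adaptive decision such an agent-based model might contain is given below; the discounted expected-utility rule, the damage-reduction factor and all parameter values are hypothetical and are not taken from the study.

        import numpy as np

        rng = np.random.default_rng(2)

        class Household:
            def __init__(self, exposure):
                self.exposure = exposure          # hypothetical damage if flooded (euro)
                self.protected = False

            def decide(self, p_flood, cost, damage_reduction=0.7, horizon=20, rate=0.04):
                """Invest if discounted expected avoided damage exceeds the measure's cost."""
                discount = sum(1.0 / (1.0 + rate) ** t for t in range(1, horizon + 1))
                avoided = p_flood * self.exposure * damage_reduction * discount
                if not self.protected and avoided > cost:
                    self.protected = True

        households = [Household(exposure=rng.uniform(5e4, 3e5)) for _ in range(1000)]
        for hh in households:
            hh.decide(p_flood=0.01, cost=10_000)  # 1/100-year flood, placeholder measure cost
        print("share of protected households:", np.mean([hh.protected for hh in households]))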

  7. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows mainly involving the metamorphic geological formations outcropping in the area, triggered by the pluviometric event of 19 June 1996. In the last decades landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions on a large scale (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis has been developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (e.g. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. Such an approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions for the simulation related to a 75-year return time rainfall event. It corresponds to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
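
    The infinite slope model mentioned in the step list can be summarised by a factor-of-safety computation of the following form; the sketch below uses a standard textbook formulation with placeholder soil parameters, not the calibrated values of the study.

        import numpy as np

        def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
            """Infinite-slope factor of safety (FS < 1 indicates potential failure).
            c: effective cohesion [kPa]; phi_deg: friction angle [deg];
            gamma: soil unit weight [kN/m^3]; z: soil depth [m];
            beta_deg: slope angle [deg]; m: saturated fraction of the soil column (0-1)."""
            beta, phi = np.radians(beta_deg), np.radians(phi_deg)
            normal = (gamma - m * gamma_w) * z * np.cos(beta) ** 2   # effective normal stress
            shear = gamma * z * np.sin(beta) * np.cos(beta)          # driving shear stress
            return (c + normal * np.tan(phi)) / shear

        # Placeholder example: a 1.5 m soil cover on a 35 degree slope, fully saturated.
        print(factor_of_safety(c=5.0, phi_deg=30.0, gamma=18.0, z=1.5, beta_deg=35.0, m=1.0))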

  8. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  9. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.;

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale meteorological data input and taking into account the characteristics of different plant technologies and their spatial distribution. An evaluation of the hourly forecasted energy production on a regional scale would be very valuable for the transmission system operators when making the long-term planning of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.

  10. Real-world-time simulation of memory consolidation in a large-scale cerebellar model

    Directory of Open Access Journals (Sweden)

    Masato eGosui

    2016-03-01

    Full Text Available We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec in the real-world time, with temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days aimed to study the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  11. Large Scale Structure Formation of normal branch in DGP brane world model

    CERN Document Server

    Song, Yong-Seon

    2007-01-01

    In this paper, we study the large scale structure formation of the normal branch in the DGP model (Dvali, Gabadadze and Porrati brane world model) by applying the scaling method developed by Sawicki, Song and Hu for solving the coupled perturbed equations of motion on-brane and off-brane. There is a detectable departure of the perturbed gravitational potential from LCDM even at the minimal deviation of the effective equation of state w_eff below -1. The modified perturbed gravitational potential weakens the integrated Sachs-Wolfe effect, which is strengthened in the self-accelerating branch of the DGP model. Additionally, we discuss the validity of the scaling solution in the de Sitter limit at late times.

  12. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shuai Li

    2008-03-01

    Full Text Available A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move from the randomly set initial positions to the original positions, which correspond to the node positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of the neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated with the network scale size.
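
    The spring relaxation idea can be illustrated with the following minimal 2D sketch, in which a single blind node with noiseless range measurements to four neighbours is pulled toward a consistent position; the anchor layout, step size and iteration count are arbitrary choices, not the LASM parameters.

        import numpy as np

        rng = np.random.default_rng(3)
        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        true_pos = np.array([6.0, 4.0])
        dists = np.linalg.norm(anchors - true_pos, axis=1)   # measured ranges (rest lengths)

        pos = rng.uniform(0.0, 10.0, size=2)                 # random initial guess
        step = 0.1                                           # relaxation step size
        for _ in range(500):
            vec = anchors - pos
            cur = np.linalg.norm(vec, axis=1)
            u = vec / cur[:, None]                           # unit vectors toward each neighbour
            force = ((cur - dists)[:, None] * u).sum(axis=0) # stretched springs pull, compressed push
            pos = pos + step * force
        print("estimated position:", pos, "true position:", true_pos)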

  13. Burnout of pulverized biomass particles in large scale boiler - Single particle model approach

    Energy Technology Data Exchange (ETDEWEB)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)

    2010-05-15

    The burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large scale utility boiler originally planned for coal. A simplified single particle approach, where the particle combustion model is coupled with the one-dimensional equation of motion of the particle, is applied for the calculation of the burnout in the boiler. The particle size of biomass can be much larger than that of coal while still reaching complete burnout, due to the lower density and greater reactivity of biomass. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)

  14. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    CERN Document Server

    Gaite, Jose

    2013-01-01

    Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.

  15. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Full Text Available Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.

  16. Large-scale shell-model calculations of nuclei around mass 210

    Science.gov (United States)

    Teruya, E.; Higashiyama, K.; Yoshinaga, N.

    2016-06-01

    Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron-deficient region (Z ≥ 82, N ≤ 126), assuming 208Pb as a doubly magic core. All six single-particle orbitals between the magic numbers 82 and 126, namely 0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, and 2p1/2, are considered. For a phenomenological effective two-body interaction, one set of the monopole pairing and quadrupole-quadrupole interactions including the multipole-pairing interactions is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.

  17. Large scale separation and resonances within LHC range from a prototype BSM model

    CERN Document Server

    Hasenfratz, Anna; Witzel, Oliver

    2016-01-01

    Many theories describing physics beyond the Standard Model rely on a large separation of scales. Large scale separation arises in models with mass-split flavors if the system is conformal in the ultraviolet but chirally broken in the infrared. Because of the conformal fixed point, these systems exhibit hyperscaling and a highly constrained resonance spectrum. We derive hyperscaling relations and investigate the realization of one such system with four light and eight heavy flavors. Our numerical simulations confirm that both light-light and heavy-heavy resonance masses show hyperscaling and depend only on the ratio of the light and heavy flavor masses. The heavy-heavy spectrum is qualitatively different from QCD and exhibits quarkonia with masses not proportional to the constituent quark mass. These resonances are only a few times heavier than the light-light ones, which would put them within reach of the LHC.

  18. A Data-driven Analytic Model for Proton Acceleration by Large-scale Solar Coronal Shocks

    Science.gov (United States)

    Kozarev, Kamen A.; Schwadron, Nathan A.

    2016-11-01

    We have recently studied the development of an eruptive filament-driven, large-scale off-limb coronal bright front (OCBF) in the low solar corona, using remote observations from the Solar Dynamics Observatory's Atmospheric Imaging Assembly EUV telescopes. In that study, we obtained high-temporal-resolution estimates of the OCBF parameters regulating the efficiency of charged particle acceleration within the theoretical framework of diffusive shock acceleration (DSA). These parameters include the time-dependent front size, speed, and strength, as well as the upstream coronal magnetic field orientations with respect to the front's surface normal direction. Here we present an analytical particle acceleration model, specifically developed to incorporate the coronal shock/compressive front properties described above, derived from remote observations. We verify the model's performance through a grid of idealized case runs using input parameters typical for large-scale coronal shocks, and demonstrate that the results approach the expected DSA steady-state behavior. We then apply the model to the event of 2011 May 11 using the OCBF time-dependent parameters derived by Kozarev et al. We find that the compressive front likely produced energetic particles as low as 1.3 solar radii in the corona. Comparing the modeled and observed fluences near Earth, we also find that the bulk of the acceleration during this event must have occurred above 1.5 solar radii. With this study we have taken a first step in using direct observations of shocks and compressions in the innermost corona to predict the onsets and intensities of solar energetic particle events.

  19. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    Science.gov (United States)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides us with the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agents' pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agents' beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agents' attitude towards the fluctuation of crop profits. Note that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and not even measurable. Thus, we estimate the influence of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based Cloud Computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than about 28 days if those scenarios were run sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
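
    To illustrate the PCE-based variance decomposition in isolation from the coupled MAS-groundwater model, the sketch below fits a degree-2 Legendre polynomial chaos surrogate to a toy two-parameter function with uniform inputs and reads first-order Sobol indices off the coefficients; the toy function and sample size are hypothetical.

        import numpy as np
        from itertools import product
        from numpy.polynomial import legendre

        def toy_model(x1, x2):                   # hypothetical model to be analysed
            return x1 + 0.5 * x2 ** 2 + 0.2 * x1 * x2

        rng = np.random.default_rng(4)
        X = rng.uniform(-1.0, 1.0, size=(2000, 2))
        y = toy_model(X[:, 0], X[:, 1])

        deg = 2
        index_set = [(i, j) for i, j in product(range(deg + 1), repeat=2) if i + j <= deg]
        P = lambda x, n: legendre.legval(x, [0.0] * n + [1.0])   # Legendre polynomial P_n(x)

        A = np.column_stack([P(X[:, 0], i) * P(X[:, 1], j) for i, j in index_set])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in index_set])  # E[P_i^2 P_j^2]
        var_terms = coef ** 2 * norms
        total_var = var_terms[1:].sum()          # exclude the constant (0, 0) term
        S1 = sum(v for (i, j), v in zip(index_set, var_terms) if i >= 1 and j == 0) / total_var
        S2 = sum(v for (i, j), v in zip(index_set, var_terms) if j >= 1 and i == 0) / total_var
        print(f"first-order Sobol indices: S_x1 = {S1:.3f}, S_x2 = {S2:.3f}")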

  20. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    Science.gov (United States)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.

  1. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study.

    Science.gov (United States)

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-02-15

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method.
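
    A minimal point-to-point ICP loop (KD-tree correspondences plus an SVD-based rigid alignment) is sketched below for orientation; the hierarchical search, early-warning mechanism and escape scheme of the paper are not reproduced, and the synthetic point clouds are placeholders.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(P, Q):
            """Least-squares rotation R and translation t mapping P onto Q (Kabsch/SVD)."""
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cq - R @ cp

        def icp(source, target, n_iter=30):
            tree = cKDTree(target)
            src = source.copy()
            for _ in range(n_iter):
                _, idx = tree.query(src)              # nearest-neighbour correspondences
                R, t = best_rigid_transform(src, target[idx])
                src = src @ R.T + t
            return src

        rng = np.random.default_rng(5)
        target = rng.normal(size=(500, 3))
        angle = 0.2                                   # small rotation about the z axis
        Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
        source = target @ Rz.T + np.array([0.3, -0.1, 0.2])
        aligned = icp(source, target)
        print("mean residual after ICP:", np.linalg.norm(aligned - target, axis=1).mean())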

  2. Large-scale shell model study of the newly found isomer in 136La

    Science.gov (United States)

    Teruya, E.; Yoshinaga, N.; Higashiyama, K.; Nishibata, H.; Odahara, A.; Shimoda, T.

    2016-07-01

    The doubly-odd nucleus 136La is theoretically studied in terms of a large-scale shell model. The energy spectrum and transition rates are calculated and compared with the most updated experimental data. The isomerism is investigated for the first 14+ state, which was found to be an isomer in the previous study [Phys. Rev. C 91, 054305 (2015), 10.1103/PhysRevC.91.054305]. It is found that the 14+ state becomes an isomer due to a band crossing of two bands with completely different configurations. The yrast band with the (νh11/2^{-1} ⊗ πh11/2) configuration is investigated, revealing a staggering nature in the M1 transition rates.

  3. Large scale shell model calculations for even-even $^{62-66}$Fe isotopes

    CERN Document Server

    Srivastava, P C

    2009-01-01

    The recently measured experimental data of the Legnaro National Laboratories on neutron-rich even isotopes $^{62-66}$Fe with A=62,64,66 have been interpreted in the framework of the large scale shell model. Calculations have been performed with the newly derived effective interaction GXPF1A in the full $\it{fp}$ space without truncation. The experimental data are very well explained for $^{62}$Fe, satisfactorily reproduced for $^{64}$Fe and poorly fitted for $^{66}$Fe. The increasing collectivity reflected in the experimental data when approaching N=40 is not reproduced in the calculated values. This indicates that, whereas the considered valence space is adequate for $^{62}$Fe, inclusion of higher orbits from the $\it{sdg}$ shell is required for describing $^{66}$Fe.

  4. Probabilistic SDG model description and fault inference for large-scale complex systems

    Institute of Scientific and Technical Information of China (English)

    Yang Fan; Xiao Deyun

    2006-01-01

    Large-scale complex systems have the feature of including a large number of variables with complex relationships, for which the signed directed graph (SDG) model can serve as a significant tool by describing the causal relationships among variables. Although the qualitative SDG expresses the causal effects between variables easily and clearly, it has many disadvantages or limitations. The probabilistic SDG proposed in this article describes the relationships among faults and variables by conditional probabilities, which carries more information and offers wider applicability. The article introduces the concepts and construction approaches of the probabilistic SDG, and presents inference approaches aimed at fault diagnosis in this framework, i.e. Bayesian inference with graph elimination or junction tree algorithms to compute fault probabilities. Finally, the probabilistic SDG of a typical example, a 65 t/h boiler system, is given.

  5. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Jianda Han

    2016-02-01

    Full Text Available One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method.

  6. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  7. Adaptive rational block Arnoldi methods for model reductions in large-scale MIMO dynamical systems

    Directory of Open Access Journals (Sweden)

    Khalide Jbilou

    2016-04-01

    Full Text Available In recent years, great interest has been shown in Krylov subspace techniques applied to model order reduction of large-scale dynamical systems. Special interest has been devoted to single-input single-output (SISO) systems by using moment matching techniques based on Arnoldi or Lanczos algorithms. In this paper, we consider multiple-input multiple-output (MIMO) dynamical systems and introduce the rational block Arnoldi process to design low order dynamical systems that are close in some sense to the original MIMO dynamical system. Rational Krylov subspace methods are based on the choice of suitable shifts that are selected a priori or adaptively. In this paper, we propose an adaptive selection of those shifts and show the efficiency of this approach in our numerical tests. We also give some new block Arnoldi-like relations that are used to propose an upper bound for the norm of the error on the transfer function.
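
    The projection idea behind rational block Krylov model reduction can be sketched as follows for a random stable MIMO system; the shifts are fixed by hand rather than chosen adaptively as in the paper, and all system matrices are synthetic.

        import numpy as np

        rng = np.random.default_rng(6)
        n, m, p = 200, 2, 2                                  # state, input and output dimensions
        A = -2.0 * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # synthetic stable system matrix
        B = rng.standard_normal((n, m))
        C = rng.standard_normal((p, n))

        shifts = [1.0, 10.0]                                 # hand-picked interpolation points
        blocks = [np.linalg.solve(s * np.eye(n) - A, B) for s in shifts]
        V, _ = np.linalg.qr(np.hstack(blocks))               # orthonormal basis of the block Krylov space

        Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V             # projected (reduced) system

        def transfer(A, B, C, s):
            return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)

        s_test = 2.0j
        err = np.linalg.norm(transfer(A, B, C, s_test) - transfer(Ar, Br, Cr, s_test))
        print("reduced order:", Ar.shape[0], "| transfer-function error at s=2j:", err)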

  9. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    Science.gov (United States)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for

  10. A toy model for the large-scale matter distribution in the Universe

    CERN Document Server

    Leigh, Nathan W C

    2016-01-01

    We consider a toy model for the large-scale matter distribution in a static Universe. The model assumes a mass spectrum $dN_{\rm i}/dm_{\rm i} = \beta m_{\rm i}^{-\alpha}$ (where $\alpha$ and $\beta$ are both positive constants) for low-mass particles with $m_{\rm i} \ll M_{\rm P}$, where $M_{\rm P}$ is the Planck mass, and a particle mass-wavelength relation of the form $\lambda_{\rm i} = \hbar/(\delta_{\rm i} m_{\rm i} c)$, where $\delta_{\rm i} = \eta m_{\rm i}^{\gamma}$ and $\eta$ and $\gamma$ are both constants. Our model mainly concerns particles with masses far below those in the Standard Model of Particle Physics. We assume that, for such low-mass particles, locality can only be defined on large spatial scales, comparable to or exceeding the particle wavelengths. We use our model to derive the cosmological redshift characteristic of the Standard Model of Cosmology, which becomes a gravitational redshift in our model. We compare the results of our model to empirical data and show that, ...

  11. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning. For a large commercial shopping area, a typical service system, emergency evacuation is one of the hot research topics. A systematic methodology based on Cellular Automata with the Dynamic Floor Field and an event driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian walking is based on Cellular Automata and the event driven model. In this paper, the event driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of the movement routes of pedestrians, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with the Dynamic Floor Field and the event driven model, we can reflect the behavior characteristics of customers and clerks in normal and emergency evacuation situations. The distribution of individual evacuation time as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with the Dynamic Floor Field and event driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
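
    A minimal cellular-automaton step with a static floor field (distance to a single exit) is sketched below; the dynamic floor field, the event-driven scheduling and the customer/clerk behaviour layers of the methodology are not included, and the room geometry is hypothetical.

        import numpy as np

        rng = np.random.default_rng(7)
        H, W = 20, 30
        exit_cell = (10, 29)

        # Static floor field: distance of each cell to the exit (lower = more attractive).
        yy, xx = np.mgrid[0:H, 0:W]
        field = np.hypot(yy - exit_cell[0], xx - exit_cell[1])

        peds = list({(int(i), int(j)) for i, j in rng.integers(0, [H, W], size=(40, 2))})
        occupied = np.zeros((H, W), dtype=bool)
        for p in peds:
            occupied[p] = True

        def step(peds, occupied):
            remaining = []
            for (i, j) in peds:
                if (i, j) == exit_cell:                 # pedestrian leaves through the exit
                    occupied[i, j] = False
                    continue
                best = (i, j)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and not occupied[ni, nj] \
                            and field[ni, nj] < field[best]:
                        best = (ni, nj)
                occupied[i, j], occupied[best] = False, True
                remaining.append(best)
            return remaining

        for n_steps in range(1, 300):
            peds = step(peds, occupied)
            if not peds:
                break
        print("steps simulated:", n_steps, "| pedestrians still inside:", len(peds))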

  12. Multi-variate spatial explicit constraining of a large scale hydrological model

    Science.gov (United States)

    Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    The increased availability and quality of near real-time data should lead to a better understanding of the predictive skills of distributed hydrological models. Nevertheless, prediction of regional scale water fluxes and states remains a great challenge for the scientific community. Large scale hydrological models are used for prediction of soil moisture, evapotranspiration and other related water states and fluxes. They are usually properly constrained against river discharge, which is an integral variable. Rakovec et al. (2016) recently demonstrated that constraining model parameters against river discharge is a necessary, but not a sufficient condition. Therefore, we further aim at scrutinizing the appropriate incorporation of readily available information into a hydrological model that may help to improve the realism of hydrological processes. It is important to analyze how complementary datasets, besides observed streamflow and related signature measures, can improve model skill for internal model variables during parameter estimation. Among the products suitable for further scrutiny are, for example, the GRACE satellite observations. Recent developments in using this dataset in a multivariate fashion to complement traditionally used streamflow data within the distributed model mHM (www.ufz.de/mhm) are presented. The study domain consists of 80 European basins, which cover a wide range of distinct physiographic and hydrologic regimes. A first-order data quality check ensures that heavily human-influenced basins are eliminated. For river discharge simulations we show that model performance of discharge remains unchanged when complemented by information from the GRACE product (both daily and monthly time steps). Moreover, the GRACE complementary data lead to consistent and statistically significant improvements in evapotranspiration estimates, which are evaluated using an independent gridded FLUXNET product. We also show that the choice of the objective function used to estimate

  13. Coordinated reset stimulation in a large-scale model of the STN-GPe circuit

    Directory of Open Access Journals (Sweden)

    Martin eEbert

    2014-11-01

    Full Text Available Synchronization of populations of neurons is a hallmark of several brain diseases. Coordinated reset (CR) stimulation is a model-based stimulation technique which specifically counteracts abnormal synchrony by desynchronization. Electrical CR stimulation, e.g. for the treatment of Parkinson's disease (PD), is administered via depth electrodes. In order to get a deeper understanding of this technique, we extended the top-down approach of previous studies and constructed a large-scale computational model of the respective brain areas. Furthermore, we took into account the spatial anatomical properties of the simulated brain structures and incorporated a detailed numerical representation of 2·10^4 simulated neurons. We simulated the subthalamic nucleus (STN) and the globus pallidus externus (GPe). Connections within the STN were governed by spike-timing dependent plasticity (STDP). In this way, we modeled the physiological and pathological activity of the considered brain structures. In particular, we investigated how plasticity could be exploited and how the model could be shifted from strongly synchronized (pathological) activity to strongly desynchronized (healthy) activity of the neuronal populations via CR stimulation of the STN neurons. Furthermore, we investigated the impact of specific stimulation parameters, especially the electrode position, on the stimulation outcome. Our model provides a step forward towards a biophysically realistic model of the brain areas relevant to the emergence of pathological neuronal activity in PD. Furthermore, our model constitutes a test bench for the optimization of both stimulation parameters and novel electrode geometries for efficient CR stimulation.

  14. A modelling case study of a large-scale cirrus in the tropical tropopause layer

    Science.gov (United States)

    Podglajen, A.; Plougonven, R.; Hertzog, A.; Legras, B.

    2015-11-01

    We use the Weather Research and Forecast (WRF) model to simulate a large-scale tropical tropopause layer (TTL) cirrus, in order to understand the formation and life cycle of the cloud. This cirrus event has been previously described through satellite observations by Taylor et al. (2011). Comparisons of the simulated and observed cirrus show a fair agreement, and validate the reference simulation regarding cloud extension, location and life time. The validated simulation is used to understand the causes of cloud formation. It is shown that several cirrus clouds successively form in the region due to adiabatic cooling and large-scale uplift rather than from ice lofting from convective anvils. The equatorial response (equatorial wave excitation) to a midlatitude potential vorticity (PV) intrusion structures the uplift. Sensitivity tests are then performed to assess the relative importance of the choice of the microphysics parametrisation and of the initial and boundary conditions. The initial dynamical conditions (wind and temperature) essentially control the horizontal location and area of the cloud. On the other hand, the choice of the microphysics scheme influences the ice water content and the cloud vertical position. Last, the fair agreement with the observations allows us to estimate the cloud impact in the TTL in the simulations. The cirrus clouds have a small but not negligible impact on the radiative budget of the local TTL. However, the cloud radiative heating does not significantly influence the simulated dynamics. The simulation also provides an estimate of the vertical redistribution of water by the cloud, and the results emphasize the importance in our case of both re- and dehydration in the vicinity of the cirrus.

  15. A modelling case study of a large-scale cirrus in the tropical tropopause layer

    Science.gov (United States)

    Podglajen, Aurélien; Plougonven, Riwal; Hertzog, Albert; Legras, Bernard

    2016-03-01

    We use the Weather Research and Forecast (WRF) model to simulate a large-scale tropical tropopause layer (TTL) cirrus in order to understand the formation and life cycle of the cloud. This cirrus event has been previously described through satellite observations by Taylor et al. (2011). Comparisons of the simulated and observed cirrus show a fair agreement and validate the reference simulation regarding cloud extension, location and life time. The validated simulation is used to understand the causes of cloud formation. It is shown that several cirrus clouds successively form in the region due to adiabatic cooling and large-scale uplift rather than from convective anvils. The structure of the uplift is tied to the equatorial response (equatorial wave excitation) to a potential vorticity intrusion from the midlatitudes. Sensitivity tests are then performed to assess the relative importance of the choice of the microphysics parameterization and of the initial and boundary conditions. The initial dynamical conditions (wind and temperature) essentially control the horizontal location and area of the cloud. However, the choice of the microphysics scheme influences the ice water content and the cloud vertical position. Last, the fair agreement with the observations allows us to estimate the cloud impact in the TTL in the simulations. The cirrus clouds have a small but not negligible impact on the radiative budget of the local TTL. However, for this particular case, the cloud radiative heating does not significantly influence the simulated dynamics. This result is due to (1) the lifetime of air parcels in the cloud system, which is too short to significantly influence the dynamics, and (2) the fact that induced vertical motions would be comparable to or smaller than the typical mesoscale motions present. Finally, the simulation also provides an estimate of the vertical redistribution of water by the cloud and the results emphasize the importance in our case of both

  16. Some cases of machining large-scale parts: Characterization and modelling of heavy turning, deep drilling and broaching

    Science.gov (United States)

    Haddag, B.; Nouari, M.; Moufki, A.

    2016-10-01

    Machining large-scale parts involves extreme loading at the cutting zone. This paper presents an overview of some cases of machining large-scale parts: heavy turning, deep drilling and broaching processes. It focuses on experimental characterization and modelling methods of these processes. Observed phenomena and/or measured cutting forces are reported. The paper also discusses the predictive ability of the proposed models to reproduce experimental data.

  17. Non-intrusive Ensemble Kalman filtering for large scale geophysical models

    Science.gov (United States)

    Amour, Idrissa; Kauranne, Tuomo

    2016-04-01

    Advanced data assimilation techniques, such as variational assimilation methods, often present challenging implementation issues for large-scale models, both because of computational complexity and because of complexity of implementation. We present a non-intrusive wrapper library that addresses this problem by isolating the direct model and the linear algebra employed in data assimilation from each other completely. In this approach we have adopted a hybrid Variational Ensemble Kalman filter that combines ensemble propagation with a 3DVAR analysis stage. The inverse problem of state and covariance propagation from prior to posterior estimates is thereby turned into a time-independent problem. This feature allows the linear algebra and minimization steps required in the variational step to be conducted outside the direct model, and no tangent linear or adjoint codes are required. Communication between the model and the assimilation module is conducted exclusively via standard input and output files of the model. This non-intrusive approach is tested with the comprehensive 3D lake and shallow sea model COHERENS, which is used to forecast and assimilate turbidity in Lake Säkylän Pyhäjärvi in Finland, using both sparse satellite images and continuous real-time point measurements as observations.
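
    The analysis step that such a wrapper performs outside the direct model can be illustrated with a stochastic (perturbed-observation) ensemble Kalman filter update; the sketch below uses a toy state and observation operator and omits the hybrid 3DVAR stage described in the abstract.

        import numpy as np

        rng = np.random.default_rng(8)

        def enkf_analysis(X, y, H, R):
            """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
            H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance."""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
            P = A @ A.T / (n_ens - 1)                        # sample forecast covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
            return X + K @ (Y - H @ X)                       # analysis ensemble

        n_state, n_obs, n_ens = 10, 3, 50
        X = rng.normal(size=(n_state, n_ens))
        H = np.zeros((n_obs, n_state)); H[[0, 1, 2], [0, 4, 9]] = 1.0
        R = 0.1 * np.eye(n_obs)
        y = rng.normal(size=n_obs)
        Xa = enkf_analysis(X, y, H, R)
        print("prior spread:", X.std(), "| posterior spread:", Xa.std())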

  18. Structure-preserving model reduction of large-scale logistics networks. Applications for supply chains

    Science.gov (United States)

    Scholz-Reiter, B.; Wirth, F.; Dashkovskiy, S.; Makuschewitz, T.; Schönlein, M.; Kosmykov, M.

    2011-12-01

    We investigate the problem of model reduction with a view to large-scale logistics networks, specifically supply chains. Such networks are modeled by means of graphs, which describe the structure of material flow. An aim of the proposed model reduction procedure is to preserve important features within the network. As a new methodology we introduce the LogRank as a measure for the importance of locations, which is based on the structure of the flows within the network. We argue that these properties reflect the relative importance of locations. Based on the LogRank we identify subgraphs of the network that can be neglected or aggregated. The effect of this is discussed for a few motifs. Using this approach we present a meta algorithm for structure-preserving model reduction that can be adapted to different mathematical modeling frameworks. The capabilities of the approach are demonstrated with a test case, where a logistics network is modeled as a Jackson network, i.e., a particular type of queueing network.
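
    As a rough stand-in for the LogRank idea (not its actual definition), the sketch below scores locations of a small synthetic supply network with a flow-weighted PageRank-style centrality and flags low-scoring nodes as candidates for aggregation or removal; the network, weights and threshold are hypothetical.

        import networkx as nx

        G = nx.DiGraph()
        flows = [("supplier", "plant", 8.0), ("plant", "dc_north", 5.0),
                 ("plant", "dc_south", 3.0), ("dc_north", "retail_1", 4.0),
                 ("dc_north", "retail_2", 1.0), ("dc_south", "retail_3", 3.0)]
        G.add_weighted_edges_from(flows)                 # edge weights = material flow volumes

        score = nx.pagerank(G, weight="weight")          # flow-weighted centrality per location
        threshold = 0.10                                 # arbitrary cut-off for aggregation candidates
        keep = sorted(v for v, r in score.items() if r >= threshold)
        print("importance scores:", {v: round(r, 3) for v, r in score.items()})
        print("locations kept in the reduced network:", keep)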

  19. Fermi Observations of Resolved Large-Scale Jets: Testing the IC/CMB Model

    Science.gov (United States)

    Breiding, Peter; Meyer, Eileen T.; Georganopoulos, Markos

    2017-01-01

    It has been observed with the Chandra X-ray Observatory since the early 2000s that many powerful quasar jets show X-ray emission on the kpc scale (Harris & Krawczynski, 2006). In many cases these X-rays cannot be explained by the extension of the radio-optical spectrum produced by synchrotron emitting electrons in the jet, since the observed X-ray flux is too high and the X-ray spectral index too hard. A widely accepted model for the X-ray emission, first proposed by Celotti et al. 2001 and Tavecchio et al. 2000, posits that the X-rays are produced when relativistic electrons in the jet up-scatter ambient cosmic microwave background (CMB) photons via inverse Compton scattering from microwave to X-ray energies (the IC/CMB model). However, explaining the X-ray emission for these jets with the IC/CMB model requires high levels of IC/CMB γ-ray emission (Georganopoulos et al., 2006), which we are looking for using the Fermi/LAT γ-ray space telescope. Another viable model for the large scale jet X-ray emission, favored by the results of Meyer et al. 2015 and Meyer & Georganopoulos 2014, is an alternate population of synchrotron emitting electrons. In contrast with the second synchrotron interpretation, the IC/CMB model requires jets with high kinetic powers which can exceed the Eddington luminosity (Dermer & Atoyan 2004 and Atoyan & Dermer 2004) and be very fast on the kpc scale with Γ~10 (Celotti et al. 2001 and Tavecchio et al. 2000). New results from data obtained with the Fermi/LAT will be shown for several quasars not in the Fermi/LAT 3FGL catalog whose large scale X-ray jets are attributed to IC/CMB. Additionally, recent work on the γ-ray bright blazar AP Librae will be shown which helps to constrain some models attempting to explain the high energy component of its SED, which extends from X-ray to TeV energies (e.g., Zacharias & Wagner 2016 & Petropoulou et al. 2016).

  20. Large-scale growth evolution in the Szekeres inhomogeneous cosmological models with comparison to growth data

    CERN Document Server

    Peel, Austin; Troxel, M A

    2012-01-01

    We use the Szekeres inhomogeneous cosmological models to study the growth of large-scale structure in the universe including nonzero spatial curvature and a cosmological constant. In particular, we use the Goode and Wainwright formulation, as in this form the models can be considered to represent exact nonlinear perturbations of an averaged background. We identify a density contrast in both classes I and II of the models, for which we derive growth evolution equations. By including Lambda, the time evolution of the density contrast as well as kinematic quantities can be tracked through the matter- and Lambda-dominated cosmic eras up to the present and into the future. In various models of class I and class II, the growth rate is found to be stronger than that of the LCDM cosmology, and it is suppressed at later times due to the presence of Lambda. We find that there are Szekeres models able to provide a growth history similar to that of LCDM while requiring less matter content and nonzero curvature, which spe...

  1. Ground observation and AMIE-TIEGCM modeling of a large-scale traveling ionospheric disturbance

    Science.gov (United States)

    Shiokawa, K.; Lu, G.; Nishitani, N.; Sato, N.

    We present a comparison of ground observations and modeling of a prominent large-scale traveling ionospheric disturbance (LSTID) observed in Japan during the major magnetic storm of March 31, 2001 (Shiokawa et al., JGR, 2003). The LSTID was detected as an enhancement of the 630-nm airglow intensity, an enhancement of GPS-TEC, a decrease of the F-layer virtual height, and an increase of foF2. The disturbance moved equatorward with a velocity of 400-500 m/s. These results suggest that an enhancement of the poleward neutral wind (propagating equatorward as a traveling atmospheric wave) caused the observed ionospheric features of the LSTID. Ion drift measurements by the MU radar and Doppler wind measurements by a Fabry-Perot interferometer (630-nm and 558-nm airglow) at Shigaraki indeed showed a poleward wind enhancement during the LSTID event. To model this event, we used the assimilative mapping of ionospheric electrodynamics (AMIE) technique to provide inputs to the thermosphere-ionosphere-electrodynamics general circulation model (TIEGCM). The model reveals fine structures of the poleward wind enhancement, both propagating from the auroral zone and generated directly at midlatitudes.

  2. Inclusive Constraints on Unified Dark Matter Models from Future Large-Scale Surveys

    CERN Document Server

    Camera, Stefano; Moscardini, Lauro

    2012-01-01

    In recent years, cosmological models where the properties of the dark components of the Universe - dark matter and dark energy - are accounted for by a single "dark fluid" have drawn increasing attention and interest. Amongst many proposals, Unified Dark Matter (UDM) cosmologies are promising candidates as effective theories. In these models, a scalar field with a non-canonical kinetic term in its Lagrangian mimics both the accelerated expansion of the Universe at late times and the clustering properties of the large-scale structure of the cosmos. However, UDM models also present peculiar behaviours, the most interesting one being that the perturbations in the dark-matter component of the scalar field have a non-negligible speed of sound. This gives rise to an effective Jeans scale for the Newtonian potential, below which the dark fluid no longer clusters. This implies a growth of structures fairly different from that of the concordance LCDM model. In this paper, we demonstrate that ...

  3. A stabilized proper orthogonal decomposition reduced-order model for large scale quasigeostrophic ocean circulation

    CERN Document Server

    San, Omer

    2014-01-01

    In this paper, a stabilized proper orthogonal decomposition (POD) reduced-order model (ROM) is presented for the barotropic vorticity equation. We apply the POD-ROM model to mid-latitude simplified oceanic basins, which are standard prototypes of more realistic large-scale ocean dynamics. A mode dependent eddy viscosity closure scheme is used to model the effects of the discarded POD modes. A sensitivity analysis with respect to the free eddy viscosity stabilization parameter is performed for various POD-ROMs with different numbers of POD modes. The POD-ROM results are validated against the Munk layer resolving direct numerical simulations using a fully conservative fourth-order Arakawa scheme. A comparison with the standard Galerkin POD-ROM without any stabilization is also included in our investigation. Significant improvements in the accuracy over the standard Galerkin model are shown for a four-gyre ocean circulation problem. This first step in the numerical assessment of the POD-ROM shows that it could r...

  4. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    Energy Technology Data Exchange (ETDEWEB)

    Reyes, Luz M., E-mail: luzmarinareyes@gmail.com [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Moreno, Claudia, E-mail: claudia.moreno@cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Madriz Aguilar, Jose Edgar, E-mail: edgar.madriz@red.cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Bellini, Mauricio, E-mail: mbellini@mdp.edu.ar [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata (UNMdP), Funes 3350, C.P. 7600, Mar del Plata (Argentina); Instituto de Investigaciones Fisicas de Mar del Plata (IFIMAR) - Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)

    2012-10-22

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves generated during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates when gravity becomes repulsive in nature.

  5. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    Science.gov (United States)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves generated during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates when gravity becomes repulsive in nature.

  6. Large-scale shell-model calculations on the spectroscopy of $N<126$ Pb isotopes

    CERN Document Server

    Qi, Chong; Fu, G J

    2016-01-01

    Large-scale shell-model calculations are carried out in the model space including neutron-hole orbitals $2p_{1/2}$, $1f_{5/2}$, $2p_{3/2}$, $0i_{13/2}$, $1f_{7/2}$ and $0h_{9/2}$ to study the structure and electromagnetic properties of neutron deficient Pb isotopes. An optimized effective interaction is used. Good agreement between full shell-model calculations and experimental data is obtained for the spherical states in isotopes $^{194-206}$Pb. The lighter isotopes are calculated with an importance-truncation approach constructed based on the monopole Hamiltonian. The full shell-model results also agree well with our generalized seniority and nucleon-pair-approximation truncation calculations. The deviations between theory and experiment concerning the excitation energies and electromagnetic properties of low-lying $0^+$ and $2^+$ excited states and isomeric states may provide a constraint on our understanding of nuclear deformation and intruder configuration in this region.

  7. Morphotectonic evolution of passive margins undergoing active surface processes: large-scale experiments using numerical models.

    Science.gov (United States)

    Beucher, Romain; Huismans, Ritske S.

    2016-04-01

    Extension of the continental lithosphere can lead to the formation of a wide range of rifted margin styles with contrasting tectonic and geomorphological characteristics. It is now understood that many of these characteristics depend on the manner in which extension is distributed, which in turn depends on (among other factors) rheology, structural inheritance, thermal structure and surface processes. The relative importance and the possible interactions of these controlling factors are still largely unknown. Here we investigate the feedbacks between tectonics and the transfer of material at the surface resulting from erosion, transport, and sedimentation. We use large-scale (1200 x 600 km) and high-resolution (~1 km) numerical experiments coupling a 2D upper-mantle-scale thermo-mechanical model with a plan-form 2D surface processes model (SPM). We test the sensitivity of the coupled models to varying crust-lithosphere rheology and to erosional efficiency ranging from no erosion to very efficient erosion. We discuss how fast, when and how the topography of the continents evolves, and how it compares to actual passive margin escarpment morphologies. We show that although tectonics is the main factor controlling the rift geometry, mass transfer at the surface affects the timing of faulting and the initiation of sea-floor spreading. We discuss how such models may help to understand the evolution of high-elevation passive margins around the world.

  8. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates

    Science.gov (United States)

    Barberis, Lucas; Peruani, Fernando

    2016-12-01

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
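
    A minimal sketch of the kind of position-based, vision-cone interaction described above: each particle, with no memory and no velocity alignment, turns toward the centre of mass of the neighbours that lie inside its vision cone. All parameter names and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def step(pos, theta, speed=0.05, radius=1.0, half_angle=np.pi / 2,
         turn_rate=0.3, noise=0.02, dt=1.0):
    """One update of a minimal vision-cone flocking model (illustrative).

    Each particle turns toward the centre of mass of neighbours that lie
    within distance `radius` AND inside its vision cone of half-opening
    `half_angle`; there is no velocity alignment and no memory, so the
    interaction is purely position-based and breaks Newton's third law.
    """
    n = len(pos)
    heading = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    new_theta = theta.copy()
    for i in range(n):
        rel = pos - pos[i]
        dist = np.linalg.norm(rel, axis=1)
        with np.errstate(invalid="ignore", divide="ignore"):
            cosang = (rel @ heading[i]) / dist  # angle to each neighbour
        visible = (dist > 0) & (dist < radius) & (cosang > np.cos(half_angle))
        if visible.any():
            target = rel[visible].mean(axis=0)
            desired = np.arctan2(target[1], target[0])
            diff = (desired - theta[i] + np.pi) % (2 * np.pi) - np.pi
            new_theta[i] += turn_rate * diff * dt
    new_theta += noise * np.random.standard_normal(n)
    pos = pos + speed * dt * np.stack([np.cos(new_theta), np.sin(new_theta)], axis=1)
    return pos, new_theta
```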

  9. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    Directory of Open Access Journals (Sweden)

    Manuel Perez Malumbres

    2013-02-01

    Full Text Available In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the received signal strength to deviate from the nominal value predicted by a deterministic propagation model. To facilitate large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By repeatedly computing the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.).
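
    A minimal sketch of such a statistical large-scale model, assuming a mean transmission loss that follows a log-distance law and a Gaussian deviation in dB (i.e. log-normal in linear scale) with constant variance; the parameter values below are placeholders rather than the fitted values of the study.

```python
import numpy as np

def transmission_loss_db(distance_m, d0=1.0, tl0_db=40.0, k=1.5, sigma_db=4.0,
                         rng=None):
    """Draw large-scale transmission losses (dB) from the statistical model.

    The mean follows a log-distance law, TL0 + 10*k*log10(d/d0), and the
    deviation about the mean is zero-mean Gaussian in dB with a
    distance-independent standard deviation sigma_db. All parameter values
    here are illustrative placeholders, not the fitted values of the study.
    """
    rng = np.random.default_rng() if rng is None else rng
    mean_db = tl0_db + 10.0 * k * np.log10(np.asarray(distance_m) / d0)
    return mean_db + sigma_db * rng.standard_normal(np.shape(distance_m))

# e.g. one random realisation of the losses for links of 100 m and 1 km
print(transmission_loss_db([100.0, 1000.0]))
```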

  10. A large-scale stochastic spatiotemporal model for Aedes albopictus-borne chikungunya epidemiology

    Science.gov (United States)

    Chandra, Nastassya L.; Proestos, Yiannis; Lelieveld, Jos; Christophides, George K.; Parham, Paul E.

    2017-01-01

    Chikungunya is a viral disease transmitted to humans primarily via the bites of infected Aedes mosquitoes. The virus caused a major epidemic in the Indian Ocean in 2004, affecting millions of inhabitants, while cases have also been observed in Europe since 2007. We developed a stochastic spatiotemporal model of Aedes albopictus-borne chikungunya transmission based on our recently developed environmentally-driven vector population dynamics model. We designed an integrated modelling framework incorporating large-scale gridded climate datasets to investigate disease outbreaks on Reunion Island and in Italy. We performed Bayesian parameter inference on the surveillance data, and investigated the validity and applicability of the underlying biological assumptions. The model successfully represents the outbreak and measures of containment in Italy, suggesting wider applicability in Europe. In its current configuration, the model implies two different viral strains, thus two different outbreaks, for the two-stage Reunion Island epidemic. Characterisation of the posterior distributions indicates a possible relationship between the second larger outbreak on Reunion Island and the Italian outbreak. The model suggests that vector control measures, with different modes of operation, are most effective when applied in combination: adult vector intervention has a high impact but is short-lived, larval intervention has a low impact but is long-lasting, and quarantining infected territories, if applied strictly, is effective in preventing large epidemics. We present a novel approach in analysing chikungunya outbreaks globally using a single environmentally-driven mathematical model. Our study represents a significant step towards developing a globally applicable Ae. albopictus-borne chikungunya transmission model, and introduces a guideline for extending such models to other vector-borne diseases. PMID:28362820

  11. Sensitivity and foreground modelling for large-scale CMB B-mode polarization satellite missions

    CERN Document Server

    Remazeilles, M; Eriksen, H K K; Wehus, I K

    2015-01-01

    Measurements of large-scale B-mode polarization in the cosmic microwave background (CMB) are a fundamental goal of current and future CMB experiments. However, because of the much higher instrumental sensitivity, CMB experiments will be more sensitive to any imperfect modelling of the Galactic foreground polarization in the estimation of the primordial B-mode signal. We compare the sensitivity to B-modes for different concepts of CMB satellite missions (LiteBIRD, COrE, COrE+, PRISM, EPIC, PIXIE) in the presence of Galactic foregrounds that are either correctly or incorrectly modelled. We quantify the impact on the tensor-to-scalar parameter of imperfect foreground modelling in the component separation process. Using Bayesian parametric fitting and Gibbs sampling, we perform the separation of the CMB and the Galactic foreground B-mode polarization. The resulting CMB B-mode power spectrum is used to compute the likelihood distribution of the tensor-to-scalar ratio. We focus the analysis to the very large angula...

  12. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    Energy Technology Data Exchange (ETDEWEB)

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J. [and others]

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study were used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.

  13. Aerodynamic characteristics of a large-scale hybrid upper surface blown flap model having four engines

    Science.gov (United States)

    Carros, R. J.; Boissevain, A. G.; Aoyagi, K.

    1975-01-01

    Data are presented from an investigation of the aerodynamic characteristics of a large-scale wind tunnel aircraft model that utilized a hybrid upper-surface blown flap to augment lift. The hybrid concept of this investigation used a portion of the turbofan exhaust air for blowing over the trailing-edge flap to provide boundary layer control. The model, tested in the Ames 40- by 80-foot Wind Tunnel, had a 27.5 deg swept wing of aspect ratio 8 and four turbofan engines mounted on the upper surface of the wing. The lift of the model was augmented by turbofan exhaust impingement on the wing upper surface and flap system. Results were obtained for three flap deflections, for some variation of engine nozzle configuration, and for jet thrust coefficients from 0 to 3.0. Six-component longitudinal and lateral data are presented with four-engine operation and with the critical engine out. In addition, a limited number of cross-plots of the data are presented. All of the tests were made with a downwash rake installed instead of a horizontal tail. Some of these downwash data are also presented.

  14. An Efficient Coarse Grid Projection Method for Quasigeostrophic Models of Large-Scale Ocean Circulation

    CERN Document Server

    San, Omer

    2013-01-01

    This paper puts forth a coarse grid projection (CGP) multiscale method to accelerate computations of quasigeostrophic (QG) models for large scale ocean circulation. These models require solving an elliptic sub-problem at each time step, which takes the bulk of the computational time. The method we propose here is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for solving the elliptic sub-problem and potential vorticity equations in the QG flow solvers. After solving the elliptic sub-problem on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. The potential vorticity field is then updated on the fine grid with savings in computational time due to the reduced number of grid points for the elliptic solver. The method is applied to both single layer barotropic and two-layer stratified QG ocean models for mid-latitude oceanic basins in the beta plane, which are standard prototypes of more...
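
    The coarse grid projection idea can be sketched as follows: only the elliptic sub-problem is restricted to a coarser grid and handed to a black-box solver, and the resulting stream function is interpolated back to the fine grid for time stepping. The helper names and the use of scipy.ndimage.zoom for the grid transfers are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.ndimage import zoom

def restrict(fine, factor):
    """Restrict a fine-grid field to a coarser grid (linear resampling)."""
    return zoom(fine, 1.0 / factor, order=1)

def prolong(coarse, fine_shape):
    """Interpolate a coarse-grid field back onto the fine grid (cubic)."""
    factors = (fine_shape[0] / coarse.shape[0], fine_shape[1] / coarse.shape[1])
    return zoom(coarse, factors, order=3)

def cgp_step(vorticity, poisson_solve, advance, factor=2):
    """One time step of a coarse grid projection (CGP) scheme (illustrative).

    `poisson_solve` is any black-box elliptic solver returning the stream
    function for a given vorticity field; `advance` updates the potential
    vorticity on the fine grid given the stream function. Only the elliptic
    sub-problem is moved to the coarse grid, which is where the savings come
    from; grid sizes are assumed divisible by `factor`.
    """
    coarse_vort = restrict(vorticity, factor)
    coarse_psi = poisson_solve(coarse_vort)        # cheap coarse-grid solve
    psi = prolong(coarse_psi, vorticity.shape)     # back to the fine grid
    return advance(vorticity, psi)                 # fine-grid time stepping
```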

  15. Revisiting the EC/CMB model for extragalactic large scale jets

    CERN Document Server

    Lucchini, Matteo; Ghisellini, Gabriele

    2016-01-01

    One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of Flat Spectrum Radio Quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the Cosmic Microwave Background (EC/CMB) as the mechanism responsible for the high energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of gamma-rays by suppressing the high energ...

  16. Large-scale integrated model is useful for understanding heart mechanisms and developments of medical therapy.

    Science.gov (United States)

    Washio, Takumi; Okada, Jun-ichi; Sugiura, Seiryo; Hisada, Toshiaki

    2009-01-01

    In this paper, we discuss the need for a large-scale integrated computer heart model to understand cardiac pathophysiology and to assist in the development of novel treatments through our experiences with the "UT-heart" simulator. The UT-heart simulator is a multi-scale, multi-physics heart simulator that integrates and visualizes our knowledge of cardiac function in various aspects and scales. To demonstrate the usefulness of this model, we focus especially on two problems in cardiac anatomy and physiology. In the first application, the mechanistic implication of complex fiber and laminar structures is analyzed with respect to optimality of pumping performance. In the second application, the coronary circulation is analyzed, to identify factors that determine the behavior of the microcirculatory system. These two examples indicate not only the importance of the integration technique, but also the need to resolve structural complexities of the heart in the modeling. This leads naturally to incorporating high performance computing in medical therapy.

  17. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model.

    Science.gov (United States)

    Gosui, Masato; Yamazaki, Tadashi

    2016-01-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 s completes within 1 s in the real-world time, with temporal resolution of 1 ms. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days aimed to study the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  18. Large-scale functional models of visual cortex for remote sensing

    Energy Technology Data Exchange (ETDEWEB)

    Brumby, Steven P [Los Alamos National Laboratory; Kenyon, Garrett [Los Alamos National Laboratory; Rasmussen, Craig E [Los Alamos National Laboratory; Swaminarayan, Sriram [Los Alamos National Laboratory; Bettencourt, Luis [Los Alamos National Laboratory; Landecker, Will [PORTLAND STATE UNIV.

    2009-01-01

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massive opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  19. Modeling long-term, large-scale sediment storage using a simple sediment budget approach

    Science.gov (United States)

    Naipal, Victoria; Reick, Christian; Van Oost, Kristof; Hoffmann, Thomas; Pongratz, Julia

    2016-05-01

    Currently, the anthropogenic perturbation of the biogeochemical cycles remains unquantified due to the poor representation of lateral fluxes of carbon and nutrients in Earth system models (ESMs). This lateral transport of carbon and nutrients between terrestrial ecosystems is strongly affected by accelerated soil erosion rates. However, the quantification of global soil erosion by rainfall and runoff, and the resulting redistribution is missing. This study aims at developing new tools and methods to estimate global soil erosion and redistribution by presenting and evaluating a new large-scale coarse-resolution sediment budget model that is compatible with ESMs. This model can simulate spatial patterns and long-term trends of soil redistribution in floodplains and on hillslopes, resulting from external forces such as climate and land use change. We applied the model to the Rhine catchment using climate and land cover data from the Max Planck Institute Earth System Model (MPI-ESM) for the last millennium (here AD 850-2005). Validation is done using observed Holocene sediment storage data and observed scaling between sediment storage and catchment area. We find that the model reproduces the spatial distribution of floodplain sediment storage and the scaling behavior for floodplains and hillslopes as found in observations. After analyzing the dependence of the scaling behavior on the main parameters of the model, we argue that the scaling is an emergent feature of the model and mainly dependent on the underlying topography. Furthermore, we find that land use change is the main contributor to the change in sediment storage in the Rhine catchment during the last millennium. Land use change also explains most of the temporal variability in sediment storage in floodplains and on hillslopes.

  20. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    Science.gov (United States)

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases to QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.

  1. Large-scale model-based assessment of deer-vehicle collision risk.

    Science.gov (United States)

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high effort and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated with browsing intensity and with harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quotas.

  2. Large-scale modeling of condition-specific gene regulatory networks by information integration and inference.

    Science.gov (United States)

    Ellwanger, Daniel Christian; Leonhardt, Jörn Florian; Mewes, Hans-Werner

    2014-12-01

    Understanding how regulatory networks globally coordinate the response of a cell to changing conditions, such as perturbations by shifting environments, is an elementary challenge in systems biology which has yet to be met. Genome-wide gene expression measurements are high dimensional as these are reflecting the condition-specific interplay of thousands of cellular components. The integration of prior biological knowledge into the modeling process of systems-wide gene regulation enables the large-scale interpretation of gene expression signals in the context of known regulatory relations. We developed COGERE (http://mips.helmholtz-muenchen.de/cogere), a method for the inference of condition-specific gene regulatory networks in human and mouse. We integrated existing knowledge of regulatory interactions from multiple sources to a comprehensive model of prior information. COGERE infers condition-specific regulation by evaluating the mutual dependency between regulator (transcription factor or miRNA) and target gene expression using prior information. This dependency is scored by the non-parametric, nonlinear correlation coefficient η(2) (eta squared) that is derived by a two-way analysis of variance. We show that COGERE significantly outperforms alternative methods in predicting condition-specific gene regulatory networks on simulated data sets. Furthermore, by inferring the cancer-specific gene regulatory network from the NCI-60 expression study, we demonstrate the utility of COGERE to promote hypothesis-driven clinical research.
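
    As a rough illustration of the η² dependency score mentioned above, the sketch below computes a one-factor analysis-of-variance effect size between a binned regulator profile and a target profile. It is a deliberate simplification of the two-way analysis used by COGERE, and the quantile binning scheme is an assumption made for the example.

```python
import numpy as np

def eta_squared(x, y, bins=5):
    """Nonlinear dependency score eta^2 between two expression profiles.

    x (regulator) is discretised into `bins` quantile levels, and eta^2 is
    the fraction of the variance of y (target) explained by those levels -
    a one-way analysis-of-variance effect size. This is a simplified,
    one-factor stand-in for the two-way statistic described in the abstract,
    not the COGERE implementation.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    groups = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    grand_mean = y.mean()
    ss_total = ((y - grand_mean) ** 2).sum()
    ss_between = sum(len(y[groups == g]) * (y[groups == g].mean() - grand_mean) ** 2
                     for g in np.unique(groups))
    return ss_between / ss_total if ss_total > 0 else 0.0
```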

  3. Prospective large-scale field study generates predictive model identifying major contributors to colony losses.

    Directory of Open Access Journals (Sweden)

    Merav Gleit Kielmanowicz

    2015-04-01

    Full Text Available Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health.
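
    A minimal sketch of how a colony-loss risk model of the kind described above might be fitted from per-hive monitoring data; the choice of predictors and the use of logistic regression are illustrative assumptions, not the study's actual statistical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_loss_model(varroa_per_100_bees, log_viral_load, colony_lost):
    """Fit a simple colony-loss risk model (illustrative, not the study's).

    Inputs are per-hive measurements: the Varroa infestation level (mites per
    100 bees), a log viral load (e.g. DWV), and a 0/1 indicator of whether
    the colony was lost over winter. The returned classifier's predicted
    probability can be read as a loss risk for new hives.
    """
    X = np.column_stack([varroa_per_100_bees, log_viral_load])
    return LogisticRegression().fit(X, np.asarray(colony_lost))
```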

  4. Modeling Student Motivation and Students’ Ability Estimates From a Large-Scale Assessment of Mathematics

    Directory of Open Access Journals (Sweden)

    Carlos Zerpa

    2011-09-01

    Full Text Available When large-scale assessments (LSA) do not hold personal stakes for students, students may not put forth their best effort. Low-effort examinee behaviors (e.g., guessing, omitting items) result in an underestimate of examinee abilities, which is a concern when using results of LSA to inform educational policy and planning. The purpose of this study was to explore the relationship between examinee motivation as defined by expectancy-value theory, student effort, and examinee mathematics abilities. A principal components analysis was used to examine the data from Grade 9 students (n = 43,562) who responded to a self-report questionnaire on their attitudes and practices related to mathematics. The results suggested a two-component model in which the components were interpreted as task values in mathematics and student effort. Next, a hierarchical linear model was implemented to examine the relationship between examinee component scores and their estimated ability on an LSA. The results of this study provide evidence that motivation, as defined by expectancy-value theory, and student effort partially explain student ability estimates and may have implications for the information that gets transferred to testing organizations, school boards, and teachers when assessing students’ Grade 9 mathematics learning.
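
    The two-step analysis described above (principal components of the questionnaire items, followed by a hierarchical linear model of ability on the component scores) might be sketched as follows; the data-frame layout, column names and estimator choices are hypothetical assumptions rather than the study's exact procedure.

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.formula.api as smf

def fit_motivation_model(df, item_cols):
    """Two-step analysis sketch: PCA of questionnaire items, then a mixed model.

    `df` is assumed to be a pandas DataFrame with one row per student,
    questionnaire item columns listed in `item_cols`, an `ability` estimate
    from the assessment, and a `school_id` grouping column (all hypothetical
    names).
    """
    # Step 1: two-component PCA of the standardised questionnaire items,
    # interpreted in the study as task value and student effort.
    scores = PCA(n_components=2).fit_transform(
        StandardScaler().fit_transform(df[item_cols]))
    df = df.assign(task_value=scores[:, 0], effort=scores[:, 1])

    # Step 2: hierarchical (mixed-effects) linear model, students nested in
    # schools, relating the component scores to estimated ability.
    model = smf.mixedlm("ability ~ task_value + effort", data=df,
                        groups=df["school_id"])
    return model.fit()
```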

  5. Large-scale modelling of the divergent spectrin repeats in nesprins: giant modular proteins.

    Science.gov (United States)

    Autore, Flavia; Pfuhl, Mark; Quan, Xueping; Williams, Aisling; Roberts, Roland G; Shanahan, Catherine M; Fraternali, Franca

    2013-01-01

    Nesprin-1 and nesprin-2 are nuclear envelope (NE) proteins characterized by a common structure of an SR (spectrin repeat) rod domain and a C-terminal transmembrane KASH [Klarsicht-ANC-Syne-homology] domain and display N-terminal actin-binding CH (calponin homology) domains. Mutations in these proteins have been described in Emery-Dreifuss muscular dystrophy and attributed to disruptions of interactions at the NE with nesprins binding partners, lamin A/C and emerin. Evolutionary analysis of the rod domains of the nesprins has shown that they are almost entirely composed of unbroken SR-like structures. We present a bioinformatical approach to accurate definition of the boundaries of each SR by comparison with canonical SR structures, allowing for a large-scale homology modelling of the 74 nesprin-1 and 56 nesprin-2 SRs. The exposed and evolutionary conserved residues identify important pbs for protein-protein interactions that can guide tailored binding experiments. Most importantly, the bioinformatics analyses and the 3D models have been central to the design of selected constructs for protein expression. 1D NMR and CD spectra have been performed of the expressed SRs, showing a folded, stable, high content α-helical structure, typical of SRs. Molecular Dynamics simulations have been performed to study the structural and elastic properties of consecutive SRs, revealing insights in the mechanical properties adopted by these modules in the cell.

  6. On Renormalizing Viscous Fluids as Models for Large Scale Structure Formation

    CERN Document Server

    Führer, Florian

    2015-01-01

    We consider renormalization of the Adhesion Model for cosmic structure formation. This is a simple model that shares many relevant features of recent approaches which add effective viscosity and noise terms to the fluid equations of Cold Dark Matter, offering itself as a pedagogical playground to study the removal of the cutoff dependence from loop integrals. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to the standard eulerian perturbation theory, as is done for example in the Effective Field Theory of Large Scale Structure (EFToLSS) approach, they are necessarily non-local in time. To ensure Galilean Invariance higher order vertices related to the viscosity and the noise must be added. We explicitly show at one-loop that these terms act as counter terms for vertex diagrams, while the Ward Identities ensure that the non-local theory can be renormalized consistently. A local-in-time theory is renormalizable if the viscosity is included in the linear pro...

  7. A Fractal Model for the Shear Behaviour of Large-Scale Opened Rock Joints

    Science.gov (United States)

    Li, Y.; Oh, J.; Mitra, R.; Canbulat, I.

    2017-01-01

    This paper presents a joint constitutive model that represents the shear behaviour of a large-scale opened rock joint. Evaluation of the degree of opening is made by considering the ratio between the joint wall aperture and the joint amplitude. Scale dependence of the surface roughness is investigated by approximating a natural joint profile to a fractal curve patterned in self-affinity. Developed scaling laws show the slopes of critical waviness and critical unevenness tend to flatten with increased sampling length. Geometrical examination of four 400-mm joint profiles agrees well with the suggested formulations involving multi-order asperities and fractal descriptors. Additionally, a fractal-based formulation is proposed to estimate the peak shear displacements of rock joints at varying scales, which shows a good correlation with experimental data taken from the literature. Parameters involved in the constitutive law can be acquired by inspecting roughness features of sampled rock joints. Thus, the model can be implemented in numerical software for the stability analysis of the rock mass with opened joints.

  8. Query Large Scale Microarray Compendium Datasets Using a Model-Based Bayesian Approach with Variable Selection

    Science.gov (United States)

    Hu, Ming; Qin, Zhaohui S.

    2009-01-01

    In microarray gene expression data analysis, it is often of interest to identify genes that share similar expression profiles with a particular gene such as a key regulatory protein. Multiple studies have been conducted using various correlation measures to identify co-expressed genes. While working well for small datasets, the heterogeneity introduced from increased sample size inevitably reduces the sensitivity and specificity of these approaches. This is because most co-expression relationships do not extend to all experimental conditions. With the rapid increase in the size of microarray datasets, identifying functionally related genes from large and diverse microarray gene expression datasets is a key challenge. We develop a model-based gene expression query algorithm built under the Bayesian model selection framework. It is capable of detecting co-expression profiles under a subset of samples/experimental conditions. In addition, it allows linearly transformed expression patterns to be recognized and is robust against sporadic outliers in the data. Both features are critically important for increasing the power of identifying co-expressed genes in large scale gene expression datasets. Our simulation studies suggest that this method outperforms existing correlation coefficients or mutual information-based query tools. When we apply this new method to the Escherichia coli microarray compendium data, it identifies a majority of known regulons as well as novel potential target genes of numerous key transcription factors. PMID:19214232

  9. A novel CPU/GPU simulation environment for large-scale biologically realistic neural modeling.

    Science.gov (United States)

    Hoang, Roger V; Tanna, Devyani; Jayet Bray, Laurence C; Dascalu, Sergiu M; Harris, Frederick C

    2013-01-01

    Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across eight machines with each having two video cards.
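
    As a small illustration of one of the built-in neuron types named above, the sketch below performs a vectorised leaky integrate-and-fire update; the constants are generic textbook values and the function is not part of NCS6.

```python
import numpy as np

def lif_step(v, i_syn, dt=1.0, tau_m=20.0, v_rest=-65.0, v_thresh=-50.0,
             v_reset=-70.0, r_m=10.0):
    """One leaky integrate-and-fire (LIF) update for a population of neurons.

    `v` holds the membrane potentials (mV) and `i_syn` the synaptic input
    currents; all constants are generic illustrative values, not NCS6
    defaults. Returns the updated potentials and a boolean spike mask.
    """
    dv = (-(v - v_rest) + r_m * i_syn) * (dt / tau_m)  # leak plus input drive
    v = v + dv
    spiked = v >= v_thresh                             # threshold crossing
    v = np.where(spiked, v_reset, v)                   # reset spiking cells
    return v, spiked
```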

  10. The Density Matrix Renormalization Group Method and Large-Scale Nuclear Shell-Model Calculations

    CERN Document Server

    Dimitrova, S S; Pittel, S; Stoitsov, M V

    2002-01-01

    The particle-hole Density Matrix Renormalization Group (p-h DMRG) method is discussed as a possible new approach to large-scale nuclear shell-model calculations. Following a general description of the method, we apply it to a class of problems involving many identical nucleons constrained to move in a single large j-shell and to interact via a pairing plus quadrupole interaction. A single-particle term that splits the shell into degenerate doublets is included so as to accommodate the physics of a Fermi surface in the problem. We apply the p-h DMRG method to this test problem for two $j$ values, one for which the shell model can be solved exactly and one for which the size of the hamiltonian is much too large for exact treatment. In the former case, the method is able to reproduce the exact results for the ground state energy, the energies of low-lying excited states, and other observables with extreme precision. In the latter case, the results exhibit rapid exponential convergence, suggesting the great promi...

  11. A Photo-Hadronic Model of the Large Scale Jet of PKS 0637-752

    CERN Document Server

    Kusunose, Masaaki

    2016-01-01

    Strong X-ray emission from large scale jets of radio loud quasars still remains an open problem. Models based on inverse Compton scattering off CMB photons by relativistically beamed jets have recently been ruled out, since Fermi LAT observations for 3C 273 and PKS 0637-752 give the upper limit far below the model prediction. Synchrotron emission from a separate electron population with multi-hundred TeV energies remains a possibility although its origin is not well known. We examine a photo-hadronic origin of such high energy electrons/positrons, assuming that protons are accelerated up to $10^{19}$ eV and produce electrons/positrons through Bethe-Heitler process and photo-pion production. These secondary electrons/positrons are injected at sufficiently high energies and produce X-rays and $\gamma$-rays by synchrotron radiation without conflicting with the Fermi LAT upper limits. We find that the resultant spectrum well reproduces the X-ray observations from PKS 0637-752, if the proton power is at least $10^...

  12. AN IMPROVED PTV SYSTEM FOR LARGE-SCALE PHYSICAL RIVER MODEL

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    To measure the surface flow in a physical river model, an improved Large-Scale Particle Tracking Velocimetry (LSPTV) system was proposed and its elements were described. The tracer particles of a PTV system seeded on the water surface usually tend to form conglomerates due to the surface tension of water; in addition, they cannot float on the water surface when the flow is shallow. Ellipsoidal particles were used to avoid these problems. Another important issue is particle recognition: to eliminate the influence of noise, particles were recognized by processing multi-frame images. The kernel of the improved PTV system is the particle-tracking algorithm. A new 3-frame PTV algorithm was developed, and its performance was compared with the conventional 4-frame and 2-frame PTV algorithms by means of computer simulation using synthetically generated images. The results show that the new 3-frame algorithm recovers more velocity vectors and has a lower relative error. In addition, to obtain the whole flow field from the individual flow fields, a method for stitching the individual fields together using distinct reference marks was developed. The improved PTV system was then applied to the measurement of the surface flow field in the Model Yellow River and showed good performance.
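
    To convey the flavour of a 3-frame tracking step, the sketch below matches particle triplets across three frames by minimising the change of displacement (i.e. preferring the most uniform velocity); it is a simplified stand-in under stated assumptions, not the algorithm developed in the paper.

```python
import numpy as np

def track_three_frames(p1, p2, p3, search_radius=10.0):
    """Minimal 3-frame particle-tracking sketch (not the cited algorithm).

    p1, p2, p3 are (n, 2) arrays of particle positions in three consecutive
    frames. For every particle in frame 1, candidate partners in frames 2
    and 3 are searched within `search_radius`, and the triplet with the
    smallest change of displacement is accepted. Returns a list of
    (index1, index2, index3) matches.
    """
    matches = []
    for i, a in enumerate(p1):
        best, best_cost = None, np.inf
        for j, b in enumerate(p2):
            d1 = b - a
            if np.linalg.norm(d1) > search_radius:
                continue
            for k, c in enumerate(p3):
                d2 = c - b
                if np.linalg.norm(d2) > search_radius:
                    continue
                cost = np.linalg.norm(d2 - d1)  # penalise apparent acceleration
                if cost < best_cost:
                    best, best_cost = (i, j, k), cost
        if best is not None:
            matches.append(best)
    return matches
```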

  13. The Large Scale Bias of Dark Matter Halos: Numerical Calibration and Model Tests

    CERN Document Server

    Tinker, Jeremy L; Kravtsov, Andrey V; Klypin, Anatoly; Warren, Michael S; Yepes, Gustavo; Gottlober, Stefan

    2010-01-01

    We measure the clustering of dark matter halos in a large set of collisionless cosmological simulations of the flat LCDM cosmology. Halos are identified using the spherical overdensity algorithm, which finds the mass around isolated peaks in the density field such that the mean density is Delta times the background. We calibrate fitting functions for the large scale bias that are adaptable to any value of Delta we examine. We find a ~6% scatter about our best fit bias relation. Our fitting functions couple to the halo mass functions of Tinker et al. (2008) such that the bias of all dark matter is normalized to unity. We demonstrate that the bias of massive, rare halos is higher than that predicted in the modified ellipsoidal collapse model of Sheth, Mo, & Tormen (2001), and approaches the predictions of the spherical collapse model for the rarest halos. Halo bias results based on friends-of-friends halos identified with linking length 0.2 are systematically lower than for halos with the canonical Delta=200 o...

  14. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    Science.gov (United States)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground-based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to the previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details on the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project and the related data products for the SPP and SO missions, as well as the supporting global heliospheric simulations, will be discussed.

  15. Large-scale modelling of the divergent spectrin repeats in nesprins: giant modular proteins.

    Directory of Open Access Journals (Sweden)

    Flavia Autore

    Full Text Available Nesprin-1 and nesprin-2 are nuclear envelope (NE) proteins characterized by a common structure of an SR (spectrin repeat) rod domain and a C-terminal transmembrane KASH [Klarsicht-ANC-Syne-homology] domain and display N-terminal actin-binding CH (calponin homology) domains. Mutations in these proteins have been described in Emery-Dreifuss muscular dystrophy and attributed to disruptions of interactions at the NE with nesprins binding partners, lamin A/C and emerin. Evolutionary analysis of the rod domains of the nesprins has shown that they are almost entirely composed of unbroken SR-like structures. We present a bioinformatical approach to accurate definition of the boundaries of each SR by comparison with canonical SR structures, allowing for a large-scale homology modelling of the 74 nesprin-1 and 56 nesprin-2 SRs. The exposed and evolutionary conserved residues identify important pbs for protein-protein interactions that can guide tailored binding experiments. Most importantly, the bioinformatics analyses and the 3D models have been central to the design of selected constructs for protein expression. 1D NMR and CD spectra have been performed of the expressed SRs, showing a folded, stable, high content α-helical structure, typical of SRs. Molecular Dynamics simulations have been performed to study the structural and elastic properties of consecutive SRs, revealing insights in the mechanical properties adopted by these modules in the cell.

  16. Prospective large-scale field study generates predictive model identifying major contributors to colony losses.

    Science.gov (United States)

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J R; Ballam, Joan M

    2015-04-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health.

  17. Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale ensemble simulations

    Science.gov (United States)

    Heng, Y.; Hoffmann, L.; Griessbach, S.; Rößler, T.; Stein, O.

    2015-10-01

    An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., large-scale ensemble simulations for the reconstruction of volcanic emissions and final transport simulations. The transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric Infrared Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final transport simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. The SO2 column densities from the simulations are in good qualitative agreement with the AIRS observations. Our new inverse modeling and simulation system is expected to become a useful tool to also study other volcanic
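
    A minimal sketch of one sequential importance resampling step of the kind underlying the inversion approach, assuming a generic ensemble of candidate emission scenarios and a user-supplied log-likelihood obtained from the transport simulations; it is not the MPTRAC implementation.

```python
import numpy as np

def sir_update(particles, weights, log_likelihood, rng=None):
    """One sequential importance resampling (SIR) step (illustrative).

    `particles` are candidate emission scenarios (e.g. emission rates per
    altitude/time unit) and `log_likelihood(p)` scores a scenario against
    the satellite observations via the transport model. Weights are updated
    and the ensemble is resampled in proportion to them.
    """
    rng = np.random.default_rng() if rng is None else rng
    logw = np.array([log_likelihood(p) for p in particles])
    logw -= logw.max()                      # shift for numerical stability
    weights = np.asarray(weights) * np.exp(logw)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    resampled = [particles[i] for i in idx]
    new_weights = np.full(len(particles), 1.0 / len(particles))
    return resampled, new_weights
```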

  18. Improving urban streamflow forecasting using a high-resolution large scale modeling framework

    Science.gov (United States)

    Read, Laura; Hogue, Terri; Gochis, David; Salas, Fernando

    2016-04-01

    Urban flood forecasting is a critical component in effective water management, emergency response, regional planning, and disaster mitigation. As populations across the world continue to move to cities (~1.8% growth per year), and studies indicate that significant flood damages are occurring outside the floodplain in urban areas, the ability to model and forecast flow over the urban landscape becomes critical to maintaining infrastructure and society. In this work, we use the Weather Research and Forecasting- Hydrological (WRF-Hydro) modeling framework as a platform for testing improvements to representation of urban land cover, impervious surfaces, and urban infrastructure. The three improvements we evaluate include: updating the land cover to the latest 30-meter National Land Cover Dataset, routing flow over a high-resolution 30-meter grid, and testing a methodology for integrating an urban drainage network into the routing regime. We evaluate performance of these improvements in the WRF-Hydro model for specific flood events in the Denver-Metro Colorado domain, comparing to historic gaged streamflow for retrospective forecasts. Denver-Metro provides an interesting case study as it is a rapidly growing urban/peri-urban region with an active history of flooding events that have caused significant loss of life and property. Considering that the WRF-Hydro model will soon be implemented nationally in the U.S. to provide flow forecasts on the National Hydrography Dataset Plus river reaches - increasing capability from 3,600 forecast points to 2.7 million, we anticipate that this work will support validation of this service in urban areas for operational forecasting. Broadly, this research aims to provide guidance for integrating complex urban infrastructure with a large-scale, high resolution coupled land-surface and distributed hydrologic model.

  19. Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale ensemble simulations

    Directory of Open Access Journals (Sweden)

    Y. Heng

    2015-10-01

    Full Text Available An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system MPTRAC is used to perform two types of simulations, i.e., large-scale ensemble simulations for the reconstruction of volcanic emissions and final transport simulations. The transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric Infrared Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final transport simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. The SO2 column densities from the simulations are in good qualitative agreement with the AIRS observations. Our new inverse modeling and simulation system is expected to become a useful tool to also study

  20. Diagnosis of cirrus cloud occurrence using large-scale analysis data and a cloud-scale model

    Directory of Open Access Journals (Sweden)

    G. Cautenet

    Full Text Available The development of cirrus clouds is governed by large-scale synoptic movements such as updraft regions in convergence zones, but also by smaller scale features, for instance microphysical phenomena, entrainment, small-scale turbulence and radiative field, fall-out of the ice phase or wind shear. For this reason, the proper handling of cirrus life cycles is not an easy task using a large-scale model alone. We present some results from a small-scale cirrus cloud model initialized by ECMWF first-guess data, which prove more convenient for this task than the analyzed ones. This model is Starr's 2-D cirrus cloud model, where the rate of ice production/destruction is parametrized from environmental data. Comparison with satellite and local observations during the ICE89 experiment (North Sea) shows that such an efficient model using large-scale data as input provides a reasonable diagnosis of cirrus occurrence in a given meteorological field. The main driving features are the updraft provided by the large-scale model, which enhances or inhibits the cloud development according to its sign, and the water vapour availability. The cloud fields retrieved are compared to satellite imagery. Finally, the use of a small-scale model in large-scale numerical studies is examined.

  1. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Full Text Available Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across these heterogeneous clusters of CPUs and GPUs.
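
    As an illustration of one of the built-in neuron types mentioned (IZH), the following is a textbook single-neuron Izhikevich integration, not NCS6 code; the parameter set corresponds to the standard regular-spiking regime and the constant input current is an arbitrary choice for the example.

```python
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, t_max=200.0):
    """Integrate dv/dt = 0.04v^2 + 5v + 140 - u + I, du/dt = a(bv - u)."""
    steps = int(t_max / dt)
    v, u = c, b * c
    spikes = []
    for k in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike threshold and reset
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0)), "spikes in 200 ms")   # regular-spiking behaviour
```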

  2. Perceptual Decision Making Through the Eyes of a Large-scale Neural Model of V1

    Directory of Open Access Journals (Sweden)

    Jianing eShi

    2013-04-01

    Full Text Available Sparse coding has been posited as an efficient information processing strategy employed by sensory systems, particularly visual cortex. Substantial theoretical and experimental work has focused on the issue of sparse encoding, namely how the early visual system maps the scene into a sparse representation. In this paper we investigate the complementary issue of sparse decoding, for example given activity generated by a realistic mapping of the visual scene to neuronal spike trains, how do downstream neurons best utilize this representation to generate a decision. Specifically we consider both sparse (L1-regularized) and non-sparse (L2-regularized) linear decoding for mapping the neural dynamics of a large-scale spiking neuron model of primary visual cortex (V1) to a two-alternative forced choice (2-AFC) perceptual decision. We show that while both sparse and non-sparse linear decoding yield discrimination results quantitatively consistent with human psychophysics, sparse linear decoding is more efficient in terms of the number of selected informative dimensions.
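
    The sketch below illustrates the sparse-versus-non-sparse decoding comparison in miniature: synthetic spike counts with a few informative neurons stand in for the large-scale V1 model output, and an L1- versus L2-penalized linear readout is trained for a binary (2-AFC-like) choice. It is not the authors' decoder; the data, penalty strength and feature counts are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_trials, n_neurons, n_informative = 2000, 500, 20
X = rng.poisson(3.0, size=(n_trials, n_neurons)).astype(float)   # spike counts
signal = X[:, :n_informative].sum(axis=1)                        # informative subset
y = (signal + rng.normal(0, 3, n_trials) > np.median(signal)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for penalty in ("l1", "l2"):
    clf = LogisticRegression(penalty=penalty, C=0.1, solver="liblinear",
                             max_iter=2000)
    clf.fit(X_tr, y_tr)
    print(penalty, "accuracy:", round(clf.score(X_te, y_te), 3),
          "| nonzero readout weights:", int(np.count_nonzero(clf.coef_)))
```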

  3. Large-scale infiltration experiments into unsaturated stratified loess sediments: Monitoring and modeling

    Science.gov (United States)

    Gvirtzman, Haim; Shalev, Eyal; Dahan, Ofer; Hatzor, Yossef H.

    2008-01-01

    Two large-scale field experiments were conducted to track water flow through unsaturated stratified loess deposits. In the experiments, a trench was flooded with water, and water infiltration was allowed until full saturation of the sediment column, to a depth of 20 m, was achieved. The water penetrated through a sequence of alternating silty-sand and sandy-clay loess deposits. The changes in water content over time were monitored at 28 points beneath the trench, using time domain reflectometry (TDR) probes placed in four boreholes. Detailed records were obtained from a 21-day period of wetting, followed by a 3-month period of drying, and finally followed by a second 14-day period of re-wetting. These processes were simulated using a two-dimensional numerical code that solves the flow equation. The model was calibrated using PEST. The simulations demonstrate that the propagation of the wetting front is hampered due to alternating silty-sand and sandy-clay loess layers. Moreover, wetting front propagation is further hampered by the extremely low values of the initial, unsaturated, hydraulic conductivity; thereby increasing the water content within the onion-shaped wetted zone up to full saturation. Numerical simulations indicate that above-hydrostatic pressure is developed within intermediate saturated layers, enhancing wetting front propagation.

  4. Large-Scale Protein-Protein Interactions Detection by Integrating Big Biosensing Data with Computational Model

    Directory of Open Access Journals (Sweden)

    Zhu-Hong You

    2014-01-01

    Full Text Available Protein-protein interactions (PPIs) are the basis of biological functions, and studying these interactions on a molecular level is of crucial importance for understanding the functionality of a living cell. During the past decade, biosensors have emerged as an important tool for the high-throughput identification of proteins and their interactions. However, the high-throughput experimental methods for identifying PPIs are both time-consuming and expensive. On the other hand, high-throughput PPI data are often associated with high false-positive and high false-negative rates. To address these problems, we propose a method for PPI detection by integrating biosensor-based PPI data with a novel computational model. This method was developed based on the algorithm of the extreme learning machine combined with a novel representation of protein sequence descriptor. When performed on the large-scale human protein interaction dataset, the proposed method achieved 84.8% prediction accuracy with 84.08% sensitivity at the specificity of 85.53%. We conducted more extensive experiments to compare the proposed method with a state-of-the-art technique, the support vector machine. The achieved results demonstrate that our approach is very promising for detecting new PPIs, and it can be a helpful supplement for biosensor-based PPI data detection.
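
    A minimal extreme learning machine of the kind referred to is sketched below: a fixed random hidden layer followed by output weights solved in closed form by least squares. The protein-sequence descriptor is not reproduced here; a random feature matrix and synthetic labels stand in for it.

```python
import numpy as np

rng = np.random.default_rng(2)

class ELM:
    """Extreme learning machine: random hidden layer, least-squares output weights."""
    def __init__(self, n_hidden=200):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                      # random hidden layer
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form readout
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta > 0.5).astype(int)

# Stand-in data: 1000 protein pairs x 400 descriptor features, binary labels
X = rng.normal(size=(1000, 400))
y = (X[:, :10].sum(axis=1) > 0).astype(int)
model = ELM().fit(X[:800], y[:800])
print("held-out accuracy:", (model.predict(X[800:]) == y[800:]).mean())
```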

  5. Extremely large-scale simulation of a Kardar-Parisi-Zhang model using graphics cards.

    Science.gov (United States)

    Kelling, Jeffrey; Ódor, Géza

    2011-12-01

    The octahedron model introduced recently has been implemented on graphics cards, which permits extremely large-scale simulations via binary lattice gases and bit-coded algorithms. We confirm scaling behavior belonging to the two-dimensional Kardar-Parisi-Zhang universality class and find a surface growth exponent β = 0.2415(15) on 2^17 × 2^17 systems, ruling out β = 1/4 suggested by field theory. The maximum speedup with respect to a single CPU is 240. The steady state has been analyzed by finite-size scaling and a roughness exponent α = 0.393(4) is found. Correction-to-scaling exponents are computed and the power spectral density of the steady state is determined. We calculate the universal scaling functions and cumulants and show that the limit distribution can be obtained for the sizes considered. We provide numerical fitting for the small and large tail behavior of the steady-state scaling function of the interface width.
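
    For orientation, a growth exponent of this kind is estimated from the early-time scaling of the interface width, W(t) ~ t^beta. The toy below does this for (1+1)-dimensional ballistic deposition, a small CPU-bound member of the KPZ class where beta approaches 1/3; it is not the (2+1)-dimensional GPU octahedron model of the paper, and finite-size corrections shift the fitted value.

```python
import numpy as np

rng = np.random.default_rng(3)
L, monolayers = 1024, 200
n_deposits = L * monolayers
h = np.zeros(L, dtype=np.int64)
times, widths = [], []

for k in range(1, n_deposits + 1):
    i = rng.integers(L)
    h[i] = max(h[i] + 1, h[(i - 1) % L], h[(i + 1) % L])   # ballistic sticking rule
    if k % L == 0:
        times.append(k / L)          # time measured in deposited monolayers
        widths.append(h.std())       # interface width W(t)

# Growth exponent beta from a log-log fit, skipping the earliest transient
logt, logw = np.log(times[10:]), np.log(widths[10:])
beta = np.polyfit(logt, logw, 1)[0]
print(f"estimated growth exponent beta = {beta:.3f}  (1+1D KPZ value: 1/3)")
```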

  6. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    Science.gov (United States)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation will give an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for (i) Climate change impact assessments on water resources and dynamics; (ii) The European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) Design variables for infrastructure constructions; (iv) Spatial water-resource mapping; (v) Operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) Input to oceanographic models for operational forecasts and marine status assessments; (vii) Research. The following regional domains have been modelled so far with different resolutions (number of subbasins within brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The Hype web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover the user can explore and download historical time series of discharge for each basin and explore the performance of the model

  7. Large-scale 3-D EM modelling with a Block Low-Rank multifrontal direct solver

    Science.gov (United States)

    Shantsev, Daniil V.; Jaysaval, Piyoosh; de la Kethulle de Ryhove, Sébastien; Amestoy, Patrick R.; Buttari, Alfredo; L'Excellent, Jean-Yves; Mary, Theo

    2017-06-01

    We put forward the idea of using a Block Low-Rank (BLR) multifrontal direct solver to efficiently solve the linear systems of equations arising from a finite-difference discretization of the frequency-domain Maxwell equations for 3-D electromagnetic (EM) problems. The solver uses a low-rank representation for the off-diagonal blocks of the intermediate dense matrices arising in the multifrontal method to reduce the computational load. A numerical threshold, the so-called BLR threshold, controlling the accuracy of low-rank representations was optimized by balancing errors in the computed EM fields against savings in floating point operations (flops). Simulations were carried out over large-scale 3-D resistivity models representing typical scenarios for marine controlled-source EM surveys, and in particular the SEG SEAM model which contains an irregular salt body. The flop count, size of factor matrices and elapsed run time for matrix factorization are reduced dramatically by using BLR representations and can go down to, respectively, 10, 30 and 40 per cent of their full-rank values for our largest system with N = 20.6 million unknowns. The reductions are almost independent of the number of MPI tasks and threads at least up to 90 × 10 = 900 cores. The BLR savings increase for larger systems, which reduces the factorization flop complexity from O(N^2) for the full-rank solver to O(N^m) with m = 1.4-1.6. The BLR savings are significantly larger for deep-water environments that exclude the highly resistive air layer from the computational domain. A study in a scenario where simulations are required at multiple source locations shows that the BLR solver can become competitive in comparison to iterative solvers as an engine for 3-D controlled-source electromagnetic Gauss-Newton inversion that requires forward modelling for a few thousand right-hand sides.
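
    The core compression step behind a BLR representation can be illustrated with a truncated SVD of a single off-diagonal block, the retained rank being set by a relative threshold. This is only the concept in isolation; the actual multifrontal solver organizes many such blocks inside the factorization and uses its own compression kernels, none of which is reproduced here.

```python
import numpy as np

def blr_compress(block, threshold=1e-3):
    """Truncated SVD of an off-diagonal block; rank set by a relative threshold."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    rank = max(1, int(np.sum(s > threshold * s[0])))
    return U[:, :rank] * s[:rank], Vt[:rank]

rng = np.random.default_rng(4)
# Smooth kernel-like block between two well-separated point clusters: compressible
x, y = rng.uniform(0, 1, 300), rng.uniform(2, 3, 300)
block = 1.0 / np.abs(x[:, None] - y[None, :])

U, V = blr_compress(block, threshold=1e-6)
err = np.linalg.norm(block - U @ V) / np.linalg.norm(block)
print(f"rank {U.shape[1]}, relative error {err:.1e}, "
      f"storage {U.size + V.size}/{block.size} entries")
```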

  8. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    Directory of Open Access Journals (Sweden)

    Robert Jarosch

    2008-12-01

    Full Text Available This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical “two-component model” of active muscle differentiated a “contractile component” which stretches the “series elastic component” during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the “series elastic components”. Large scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i.e. latency relaxation, the Fenn effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc. are explained and interpreted with the help of the model experiments.

  9. Identification of water quality degradation hotspots in developing countries by applying large scale water quality modelling

    Science.gov (United States)

    Malsy, Marcus; Reder, Klara; Flörke, Martina

    2014-05-01

    Decreasing water quality is one of the main global issues which poses risks to food security, economy, and public health and is consequently crucial for ensuring environmental sustainability. During the last decades access to clean drinking water increased, but 2.5 billion people still do not have access to basic sanitation, especially in Africa and parts of Asia. In this context, not only connection to a sewage system but also treatment is of high importance, as an increasing connection rate will lead to higher loadings and therefore higher pressure on water resources. Furthermore, poor people in developing countries use local surface waters for daily activities, e.g. bathing and washing. It is thus clear that water utilization and water sewerage are inseparably connected. In this study, large scale water quality modelling is used to point out hotspots of water pollution to get insight into potential environmental impacts, in particular, in regions with a low observation density and data gaps in measured water quality parameters. We applied the global water quality model WorldQual to calculate biological oxygen demand (BOD) loadings from point and diffuse sources, as well as in-stream concentrations. The regional focus in this study is on developing countries, i.e. Africa, Asia, and South America, as they are most affected by water pollution. Model runs were conducted for the year 2010 to draw a picture of the recent status of surface water quality and to identify hotspots and main causes of pollution. First results show that hotspots mainly occur in highly agglomerated regions where population density is high. Large urban areas are the initial loading hotspots, and pollution prevention and control become increasingly important as point sources are subject to connection rates and treatment levels. Furthermore, river discharge plays a crucial role due to dilution potential, especially in terms of seasonal variability. Highly varying shares of BOD sources across
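
    The loading-and-dilution logic behind such in-stream concentration estimates can be illustrated with a toy mass balance: point and diffuse loads divided by river discharge, with first-order decay along the reach. The coefficients and the decay form below are generic assumptions for the example, not the WorldQual formulation.

```python
import math

def bod_concentration(point_load_kg_d, diffuse_load_kg_d, discharge_m3_s,
                      decay_rate_per_d=0.23, travel_time_d=1.0):
    """Return in-stream BOD concentration in mg/L after dilution and decay."""
    total_load_mg_d = (point_load_kg_d + diffuse_load_kg_d) * 1e6   # kg -> mg
    discharge_l_d = discharge_m3_s * 1000.0 * 86400.0               # m3/s -> L/d
    c0 = total_load_mg_d / discharge_l_d                            # mg/L after mixing
    return c0 * math.exp(-decay_rate_per_d * travel_time_d)

# Dry-season versus wet-season dilution for the same urban loads
for q in (20.0, 200.0):   # discharge in m3/s
    print(q, "m3/s ->", round(bod_concentration(5000.0, 2000.0, q), 2), "mg/L BOD")
```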

  10. Large-scale modeling of reactive solute transport in fracture zones of granitic bedrocks

    Science.gov (United States)

    Molinero, Jorge; Samper, Javier

    2006-01-01

    Final disposal of high-level radioactive waste in deep repositories located in fractured granite formations is being considered by several countries. The assessment of the safety of such repositories requires using numerical models of groundwater flow, solute transport and chemical processes. These models are being developed from data and knowledge gained from in situ experiments such as the Redox Zone Experiment carried out at the underground laboratory of Äspö in Sweden. This experiment aimed at evaluating the effects of the construction of the access tunnel on the hydrogeological and hydrochemical conditions of a fracture zone intersected by the tunnel. Most chemical species showed dilution trends except for bicarbonate and sulphate which unexpectedly increased with time. Molinero and Samper [Molinero, J. and Samper, J. Groundwater flow and solute transport in fracture zones: an improved model for a large-scale field experiment at Äspö (Sweden). J. Hydraul. Res., 42, Extra Issue, 157-172] presented a two-dimensional water flow and solute transport finite element model which reproduced measured drawdowns and dilution curves of conservative species. Here we extend their model by using a reactive transport model which accounts for aqueous complexation, acid-base, redox processes, dissolution-precipitation of calcite, quartz, hematite and pyrite, and cation exchange between Na+ and Ca2+. The model provides field-scale estimates of cation exchange capacity of the fracture zone and redox potential of groundwater recharge. It serves also to identify the mineral phases controlling the solubility of iron. In addition, the model is useful to test the relevance of several geochemical processes. Model results rule out calcite dissolution as the process causing the increase in bicarbonate concentration and reject the following possible sources of sulphate: (1) pyrite dissolution, (2) leaching of alkaline sulphate-rich waters from a nearby rock landfill and (3) dissolution of

  11. Large scale computational chemistry modeling of the oxidation of highly oriented pyrolytic graphite.

    Science.gov (United States)

    Poovathingal, Savio; Schwartzentruber, Thomas E; Srinivasan, Sriram Goverapet; van Duin, Adri C T

    2013-04-04

    Large scale molecular dynamics (MD) simulations are performed to study the oxidation of highly oriented pyrolytic graphite (HOPG) by a hyperthermal atomic oxygen beam (5 eV). Simulations are performed using the ReaxFF classical reactive force field. We present here additional evidence that this method accurately reproduces ab initio derived energies relevant to HOPG oxidation. HOPG is modeled as multilayer graphene and etch-pit formation and evolution are directly simulated through a large number of sequential atomic oxygen collisions. The simulations predict that an oxygen coverage is first established that acts as a precursor to carbon-removal reactions, which ultimately etch wide but shallow pits, as observed in experiments. In quantitative agreement with experiment, the simulations predict the most abundant product species to be O2 (via recombination reactions), followed by CO2, with CO as the least abundant product species. Although recombination occurs all over the graphene sheet, the carbon-removal reactions occur only about the edges of the etch pit. Through isolated defect analysis on small graphene models as well as trajectory analysis performed directly on the predicted etch pit, the activation energies for the dominant reaction mechanisms leading to O2, CO2, and CO product species are determined to be 0.3, 0.52, and 0.67 eV, respectively. Overall, the qualitative and quantitative agreement between MD simulation and experiment is very promising. Thus, the MD simulation approach and C/H/O ReaxFF parametrization may be useful for simulating high-temperature gas interactions with graphitic materials where the microstructure is more complex than HOPG.

  12. Reheating in tachyonic inflationary models: Effects on the large scale curvature perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Rajeev Kumar, E-mail: rajeev.jain@unige.ch [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019 (India); Chingangbam, Pravabati, E-mail: prava@iiap.res.in [Korea Institute for Advanced Study, 207-43 Cheongnyangni 2-dong, Dongdaemun-gu, Seoul 130-722 (Korea, Republic of); Sriramkumar, L., E-mail: sriram@physics.iitm.ac.in [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019 (India)

    2011-11-11

    We investigate the problem of perturbative reheating and its effects on the evolution of the curvature perturbations in tachyonic inflationary models. We derive the equations governing the evolution of the scalar perturbations for a system consisting of a tachyon and a perfect fluid. Assuming the perfect fluid to be radiation, we solve the coupled equations for the system numerically and study the evolution of the perturbations from the sub-Hubble to the super-Hubble scales. In particular, we analyze the effects of the transition from tachyon driven inflation to the radiation dominated epoch on the evolution of the large scale curvature and non-adiabatic pressure perturbations. We consider two different potentials to describe the tachyon and study the effects of two possible types of decay of the tachyon into radiation. We plot the spectrum of curvature perturbations at the end of inflation as well as at the early stages of the radiation dominated epoch. We find that reheating does not affect the amplitude of the curvature perturbations in any of these cases. These results corroborate similar conclusions that have been arrived at earlier based on the study of the evolution of the perturbations in the super-Hubble limit. We illustrate that, before the transition to the radiation dominated epoch, the relative non-adiabatic pressure perturbation between the tachyon and radiation decays in a fashion very similar to that of the intrinsic entropy perturbation associated with the tachyon. Moreover, we show that, after the transition, the relative non-adiabatic pressure perturbation dies down extremely rapidly during the early stages of the radiation dominated epoch. It is this behavior which ensures that the amplitude of the curvature perturbations remains unaffected during reheating. We also discuss the corresponding results for the popular chaotic inflation model in the case of the canonical scalar field.

  13. Large-scale model-based assessment of deer-vehicle collision risk.

    Directory of Open Access Journals (Sweden)

    Torsten Hothorn

    Full Text Available Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining
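
    A much-simplified stand-in for such a collision-count model is sketched below: a Poisson regression of collisions per municipality on environmental covariates, with road length as an exposure offset. The published approach is a structured additive model with nonlinear and spatial terms; only the count-model core is shown, and the data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 400                                              # hypothetical municipalities
browsing = rng.uniform(0, 1, n)                      # browsing intensity index
forest = rng.uniform(0, 0.8, n)                      # forest cover share
road_km = rng.uniform(10, 200, n)                    # road length (exposure)
rate = np.exp(-3.0 + 1.2 * browsing + 0.8 * forest)  # assumed true rate per km
collisions = rng.poisson(rate * road_km)             # simulated collision counts

X = sm.add_constant(np.column_stack([browsing, forest]))
model = sm.GLM(collisions, X, family=sm.families.Poisson(),
               offset=np.log(road_km)).fit()
print(model.params)   # recovers the assumed positive effect of browsing intensity
```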

  14. Model and controller reduction of large-scale structures based on projection methods

    Science.gov (United States)

    Gildin, Eduardo

    The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques yield large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance to allow the practical applicability of advanced controller design methods for high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent in the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues are open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: nodal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control are used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that
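
    One of the reduction families mentioned, balanced truncation, can be sketched in a few lines with the square-root algorithm: solve the two Lyapunov equations for the Gramians, balance the realization, and truncate the states with small Hankel singular values. The random stable system below stands in for a finite-element structural model; this is an illustration under those assumptions, not the dissertation's code.

```python
import numpy as np
from scipy import linalg

def balanced_truncation(A, B, C, r):
    """Reduce the stable system (A, B, C) to order r by balanced truncation."""
    Wc = linalg.solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Wo = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    Lc = linalg.cholesky(Wc, lower=True)
    Lo = linalg.cholesky(Wo, lower=True)
    U, s, Vt = linalg.svd(Lo.T @ Lc)                       # Hankel singular values s
    T = Lc @ Vt.T[:, :r] / np.sqrt(s[:r])                  # truncated balancing transform
    Ti = (U[:, :r] / np.sqrt(s[:r])).T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

rng = np.random.default_rng(6)
n = 40
A = rng.normal(size=(n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)    # make A stable
B, C = rng.normal(size=(n, 1)), rng.normal(size=(1, n))

Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=8)
print("kept / first discarded Hankel singular value:", hsv[7], "/", hsv[8])
```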

  15. A Scalar Field Dark Matter Model and Its Role in the Large-Scale Structure Formation in the Universe

    Directory of Open Access Journals (Sweden)

    Mario A. Rodríguez-Meza

    2012-01-01

    Full Text Available We present a model of dark matter based on scalar-tensor theory of gravity. With this scalar field dark matter model we study the non-linear evolution of the large-scale structures in the universe. The equations that govern the evolution of the scale factor of the universe are derived together with the appropriate Newtonian equations to follow the nonlinear evolution of the structures. Results are given in terms of the power spectrum that gives quantitative information on the large-scale structure formation. The initial conditions we have used are consistent with the so-called concordance ΛCDM model.

  16. Idealised modelling of storm surges in large-scale coastal basins

    NARCIS (Netherlands)

    Chen, WenLong

    2015-01-01

    Coastal areas around the world are frequently attacked by various types of storms, threatening human life and property. This study aims to understand storm surge processes in large-scale coastal basins, particularly focusing on the influences of geometry, topography and storm characteristics on the

  17. A Statistical Model for Hourly Large-Scale Wind and Photovoltaic Generation in New Locations

    DEFF Research Database (Denmark)

    Ekstrom, Jussi; Koivisto, Matti Juhani; Mellin, Ilkka

    2017-01-01

    The analysis of large-scale wind and photovoltaic (PV) energy generation is of vital importance in power systems where their penetration is high. This paper presents a modular methodology to assess the power generation and volatility of a system consisting of both PV plants (PVPs) and wind power ...

  18. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here we examine, within the Column Quasi-Geostrophic framework, the large-scale forcings and convective heating feedback in the precipitation events which caused the 2010 Pakistan flood. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  19. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    Energy Technology Data Exchange (ETDEWEB)

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically and organizationally separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally

  20. Large Scale Groundwater Flow Model for Ho Chi Minh City and its Catchment Area, Southern Vietnam

    Science.gov (United States)

    Sigrist, M.; Tokunaga, T.; Takizawa, S.

    2005-12-01

    Ho Chi Minh City (HCMC) has become a fast-growing city in recent decades and is still growing at a high pace. The water demand for more than 7 million people has increased tremendously, too. Besides surface water, groundwater is used in large amounts to satisfy the demand for water. By now, more than 200,000 wells have been developed with very little control. To investigate the sustainability of the water abstraction, a model has been built for the HCMC area and its surroundings. On the catchment scale (around 24,000 km2), however, many questions have remained unsolved. In this study, we first gathered and compiled geological and hydrogeological information as well as data on groundwater quality to get an idea of the regional groundwater flow pattern and of problems related to the temporal change of the groundwater situation. Two problems have been depicted by this study. One is the construction of a water reservoir upstream of the Saigon River. This construction has probably changed the water table of the unconfined aquifer, and hence, has significantly changed the properties of soils in some areas. The other problem is the distribution of salty groundwater. Despite the distance of more than 40 km from the seashore, groundwater from some wells in and around HCMC shows high concentrations of chloride. Several wells started to produce non-potable water. The chloride concentrations show a complicated and patchy distribution below HCMC, suggesting the possibility of remnant saltwater from the time of sediment deposition. On the other hand, seawater invades along the streams far beyond HCMC during the dry season and this might be one of the possible sources of salty groundwater by vertical infiltration. A large-scale geological model was constructed and transformed into a hydrogeological model to better understand and quantify the groundwater flow system and the origin of saltwater. Based on the constructed model and numerical calculation, we discuss the influence of reservoir

  1. Simulations of a Magnetic Fluctuation Driven Large Scale Dynamo and Comparison with a Two-scale Model

    CERN Document Server

    Park, Kiwan

    2012-01-01

    Models of large scale (magnetohydrodynamic) dynamos (LSD) which couple large scale field growth to total magnetic helicity evolution best predict the saturation of LSDs seen in simulations. For the simplest so-called "α2" LSDs in periodic boxes, the electromotive force driving LSD growth depends on the difference between the time-integrated kinetic and current helicity associated with fluctuations. When the system is helically kinetically forced (KF), the growth of the large scale helical field is accompanied by growth of small scale magnetic (and current) helicity which ultimately quench the LSD. Here, using both simulations and theory, we study the complementary magnetically forced (MF) case in which the system is forced with an electric field that supplies magnetic helicity. For this MF case, the kinetic helicity becomes the back-reactor that saturates the LSD. Simulations of both MF and KF cases can be approximately modeled with the same equations of magnetic helicity evolution, but with complementa...

  2. Evaluation of a large-scale forest scenario model in heterogeneous forests: a case study for Switzerland

    NARCIS (Netherlands)

    Thürig, E.; Schelhaas, M.J.

    2006-01-01

    Large-scale forest scenario models are widely used to simulate the development of forests and to compare the carbon balance estimates of different countries. However, as site variability in the application area often exceeds the variability in the calibration area, model validation is important. The

  3. Application of large-scale, multi-resolution watershed modeling framework using the Hydrologic and Water Quality System (HAWQS)

    Science.gov (United States)

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...

  4. Research on a Small Signal Stability Region Boundary Model of the Interconnected Power System with Large-Scale Wind Power

    Directory of Open Access Journals (Sweden)

    Wenying Liu

    2015-03-01

    Full Text Available For the interconnected power system with large-scale wind power, the problem of small signal stability has become the bottleneck restricting the sending-out of wind power as well as the security and stability of the whole power system. To address this issue, this paper establishes a small signal stability region boundary model of the interconnected power system with large-scale wind power based on catastrophe theory, providing a new method for analyzing small signal stability. Firstly, we analyzed the typical characteristics and the mathematical model of the interconnected power system with wind power and pointed out that conventional methods cannot directly identify the topological properties of small signal stability region boundaries. Secondly, adopting catastrophe theory, we established a small signal stability region boundary model of the interconnected power system with large-scale wind power in two-dimensional power injection space and extended it to multiple dimensions to obtain the boundary model in multidimensional power injection space. Thirdly, we qualitatively analyzed the changes in the topological properties of the small signal stability region boundary caused by large-scale wind power integration. Finally, we built simulation models with the DIgSILENT/PowerFactory software, and the final simulation results verified the correctness and effectiveness of the proposed model.
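
    The underlying stability criterion can be stated compactly: an operating point is small-signal stable when every eigenvalue of the linearized state matrix has a negative real part, and the stability region boundary is where a dominant eigenvalue crosses the imaginary axis. The sketch below checks that criterion for a purely illustrative state matrix; the catastrophe-theory construction of the boundary surface itself is not reproduced.

```python
import numpy as np

def is_small_signal_stable(A_state):
    """Check the eigenvalues of a linearized power-system state matrix."""
    eig = np.linalg.eigvals(A_state)
    return bool(np.all(eig.real < 0.0)), eig

# Hypothetical 4-state linearized system (all values are illustrative only)
A = np.array([[0.0,   377.0, 0.0,   0.0],
              [-0.12, -0.05, 0.03,  0.0],
              [0.0,   0.0,   0.0,   377.0],
              [0.02,  0.0,  -0.15, -0.04]])

stable, eig = is_small_signal_stable(A)
print("small-signal stable:", stable,
      "| largest real part of any mode:", float(np.max(eig.real)))
```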

  5. Computational Models of Consumer Confidence from Large-Scale Online Attention Data: Crowd-Sourcing Econometrics

    OpenAIRE

    Xianlei Dong; Johan Bollen

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's...

  6. Analysis of Lightning Electromagnetic Field on Large-scale Terrain Model using Three-dimensional MW-FDTD Parallel Computation

    Science.gov (United States)

    Oikawa, Takaaki; Sonoda, Jun; Sato, Motoyuki; Honma, Noriyasu; Ikegawa, Yutaka

    Analysis of lightning electromagnetic fields using the FDTD method has been studied in recent years. However, large-scale three-dimensional analysis of real environments has not been considered, because the FDTD method has a huge computational cost for large-scale analysis. We have therefore proposed a three-dimensional moving window FDTD (MW-FDTD) method with parallel computation. Our method uses less computational cost than the conventional FDTD method and the original MW-FDTD method. In this paper, we study the computational performance of the parallel MW-FDTD method and present a large-scale three-dimensional analysis of the lightning electromagnetic field on a real terrain model using our MW-FDTD with parallel computation.
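
    The moving-window idea can be shown in one dimension: only a slab of cells around the propagating pulse is stored, and the window is shifted to follow it, so memory stays constant however far the pulse travels. The toy below uses normalized units and a scalar 1-D Yee update; real lightning-field analysis is three-dimensional, includes terrain, and is parallelized, none of which is reproduced here.

```python
import numpy as np

W = 400                                  # cells kept in the moving window
S = 0.5                                  # Courant number c*dt/dx
x = np.arange(W)
ez = np.exp(-((x - 80) / 15.0) ** 2)     # initial pulse, launched to the right
hy = -np.exp(-((x - 80) / 15.0) ** 2)    # matched H field => roughly unidirectional
offset = 0                               # global index of the window's left edge

for n in range(1, 2001):
    hy[:-1] += S * (ez[1:] - ez[:-1])    # standard 1-D Yee updates (normalized)
    ez[1:] += S * (hy[1:] - hy[:-1])
    if n % int(1 / S) == 0:              # pulse advances ~S cells per step, so
        ez = np.append(ez[1:], 0.0)      # shift the window one cell forward
        hy = np.append(hy[1:], 0.0)
        offset += 1

peak = offset + int(np.argmax(np.abs(ez)))
print("window spans global cells", offset, "to", offset + W - 1,
      "| pulse peak at global cell", peak)
```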

  7. Turbulent spots in channel flow: an experimental study Large-scale flow, inner structure and low order model

    CERN Document Server

    Lemoult, Grégoire; Aider, Jean-Luc; Wesfreid, José Eduardo

    2013-01-01

    We present new experimental results on the development of turbulent spots in channel flow. The internal structure of a turbulent spot is measured, with Time Resolved Stereoscopic Particle Image Velocimetry. We report the observation of travelling-wave-like structures at the trailing edge of the turbulent spot. Special attention is paid to the large-scale flow surrounding the spot. We show that this large-scale flow is an asymmetric quadrupole centred on the spot. We measure the time evolution of the turbulent fluctuations and the mean flow distortions and compare these with the predictions of a nonlinear reduced order model predicting the main features of subcritical transition to turbulence.

  8. PREDICTIONS OF WAVE INDUCED SHIP MOTIONS AND LOADS BY LARGE-SCALE MODEL MEASUREMENT AT SEA AND NUMERICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Jialong Jiao

    2016-06-01

    Full Text Available In order to accurately predict wave-induced motion and load responses of ships, a new experimental methodology is proposed. The new method includes conducting tests with large-scale models under natural environment conditions. The proposed testing technique for large-scale model measurement is applicable to a wide range of standard hydrodynamics experiments in naval architecture. In this study, a large-scale segmented self-propelling model for investigating seakeeping performance and wave load behaviour, together with the testing systems, was designed and experiments were performed. A 2-hour voyage trial of the large-scale model, aimed at performing a series of simulation exercises, was carried out at Huludao harbour in October 2014. During the voyage, onboard systems, operated by crew, were used to measure and record the sea waves and the model responses. The post-voyage analysis of the measurements, both of the sea waves and of the model's responses, was made to predict the ship's short-term motion and load responses under the corresponding sea state. Furthermore, numerical analysis of the short-term prediction was made by an in-house code and the result was compared with the experiment data. The long-term extreme prediction of motions and loads was also carried out based on the numerical results of the short-term prediction.

  9. Lichen elemental content bioindicators for air quality in upper Midwest, USA: A model for large-scale monitoring

    Science.gov (United States)

    Susan Will-Wolf; Sarah Jovan; Michael C. Amacher

    2017-01-01

    Our development of lichen elemental bioindicators for a United States of America (USA) national monitoring program is a useful model for other large-scale programs. Concentrations of 20 elements were measured, validated, and analyzed for 203 samples of five common lichen species. Collections were made by trained non-specialists near 75 permanent plots and an expert...

  10. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties ...

  11. What causes differences between national estimates of forest management carbon emissions and removals compared to estimates of large - scale models?

    NARCIS (Netherlands)

    Groen, T.A.; Verkerk, P.J.; Böttcher, H.; Grassi, G.; Cienciala, E.; Black, K.G.; Fortin, M.; Köthke, M.; Lehtonen, A.; Nabuurs, G.J.; Petrova, L.; Blujdea, V.

    2013-01-01

    Under the United Nations Framework Convention for Climate Change all Parties have to report on carbon emissions and removals from the forestry sector. Each Party can use its own approach and country specific data for this. Independently, large-scale models exist (e.g. EFISCEN and G4M as used in this

  12. What causes the differences between national estimates of carbon emissions from forest management and large-scale models?

    NARCIS (Netherlands)

    Groen, T.A.; Verkerk, P.J.; Böttcher, H.; Grassi, G.; Cienciala, E.; Black, K.G.; Fortin, M.J.; Koethke, M.; Lethonen, A.; Nabuurs, G.J.; Petrova, L.; Blujdea, V.

    2013-01-01

    Under the United Nations Framework Convention for Climate Change all Parties have to report on carbon emissions and removals from the forestry sector. Each Party can use its own approach and country specific data for this. Independently, large-scale models exist (e.g. EFISCEN and G4M as used in this

  13. Development of lichen response indexes using a regional gradient modeling approach for large-scale monitoring of forests

    Science.gov (United States)

    Susan Will-Wolf; Peter Neitlich

    2010-01-01

    Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...

  14. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  15. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    evaluate two common model reduction approaches in an empirical case. The first relies on a principal component analysis (PCA) used to construct new orthogonal variables, which are applied in the hedonic model. The second relies on a stepwise model reduction based on the variance inflation index and Akaike's information criterion. Our empirical application focuses on estimating the implicit price of forest proximity in a Danish case area, with a dataset containing 86 relevant variables. We demonstrate that the estimated implicit price for forest proximity, while positive in all models, is clearly sensitive
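
    The stepwise, collinearity-driven part of the second approach can be sketched with the variance inflation factor: the regressor with the largest VIF is dropped iteratively until all remaining VIFs fall below a chosen cut-off. In the sketch below, synthetic data stand in for the hedonic transactions, the cut-off of 5 is an arbitrary choice, and the AIC-based selection step is omitted.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(7)
n = 500
base = rng.normal(size=(n, 3))
X = pd.DataFrame({
    "forest_dist": base[:, 0],
    "park_dist":   base[:, 0] * 0.9 + rng.normal(0, 0.2, n),  # nearly collinear
    "lot_size":    base[:, 1],
    "house_age":   base[:, 2],
})

def drop_high_vif(X, max_vif=5.0):
    """Iteratively remove the column with the largest VIF above max_vif."""
    X = X.copy()
    while True:
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns)
        if vifs.max() <= max_vif:
            return X, vifs
        X = X.drop(columns=vifs.idxmax())

reduced, vifs = drop_high_vif(X)
print("kept:", list(reduced.columns))
print(vifs.round(2))
```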

  16. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    Full Text Available The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin that contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are separately performed. The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reasonably well reproduce the observed groundwater head time series. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show a promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.

  17. Analysis and Design Environment for Large Scale System Models and Collaborative Model Development Project

    Data.gov (United States)

    National Aeronautics and Space Administration — As NASA modeling efforts grow more complex and more distributed among many working groups, new tools and technologies are required to integrate their efforts...

  18. Application of Large-Scale, Multi-Resolution Watershed Modeling Framework Using the Hydrologic and Water Quality System (HAWQS)

    OpenAIRE

    Haw Yen; Prasad Daggupati; White, Michael J.; Raghavan Srinivasan; Arndt Gossel; David Wells; Arnold, Jeffrey G

    2016-01-01

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources allocation, sediment transport, and pollution control. Among commonly adopted models, the Soil and Water Assessment Tool (SWAT) has been demonstrated to provide superior performance with a large amount...

  20. Large-scale 3-D modeling by integration of resistivity models and borehole data through inversion

    DEFF Research Database (Denmark)

    Foged, N.; Marker, Pernille Aabye; Christiansen, A. V.;

    2014-01-01

    The parameter of interest is the clay fraction, expressed as the relative length of clay units in a depth interval. The clay fraction is obtained from lithological logs and the clay fraction from the resistivity is obtained by establishing a simple petrophysical relationship, a translator function, between resistivity and the clay fraction. Through inversion we use the lithological data and the resistivity data to determine the optimum spatially distributed translator function. Applying the translator function we get a 3-D clay fraction model, which holds information from the resistivity data set and the borehole data set in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the procedure to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey...
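
    The final clustering step mentioned above can be illustrated with a small synthetic example (not the Norsminde data); k-means groups cells by their resistivity-derived and borehole-derived clay fractions:

    ```python
    # Illustrative only: cluster cells by two clay-fraction estimates
    # (from resistivity via a translator function, and from boreholes).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    cf_resistivity = np.clip(rng.normal(0.4, 0.25, size=5000), 0, 1)          # synthetic
    cf_borehole = np.clip(cf_resistivity + rng.normal(0, 0.1, size=5000), 0, 1)

    X = np.column_stack([cf_resistivity, cf_borehole])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    # 'labels' assigns each cell to one of three structural units (e.g. sand, till, clay)
    print(np.bincount(labels))
    ```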

  1. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    Science.gov (United States)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production Model for California) and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. In order to do so, a modified version of SWAP was developed, called SWEAP, that has the Planning Area delimitations of WEAP, a Maximum Entropy Model to estimate evenly sized steps (tranches) of water derived demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP was created, called ECONWEAP, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP vs. ECONWEAP as well as an assessment of the approximation of tranches. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.
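
    The step-function (tranche) approximation of a demand curve described above can be sketched generically as follows; the inverse demand curve and parameter values here are synthetic and are not taken from SWAP or WEAP:

    ```python
    # Approximate a smooth derived demand curve p(q) by n equally sized steps (tranches),
    # so each tranche can be mapped to a priority level in an allocation model. Synthetic example.
    import numpy as np

    def tranche_demand(price_fn, q_max, n_tranches):
        edges = np.linspace(0.0, q_max, n_tranches + 1)
        # representative willingness-to-pay per tranche: mean of p(q) over the tranche
        prices = [price_fn(np.linspace(a, b, 50)).mean() for a, b in zip(edges[:-1], edges[1:])]
        return list(zip(edges[1:], prices))            # (cumulative quantity, tranche price)

    demand = lambda q: 100.0 * np.exp(-0.02 * q)        # synthetic inverse demand curve
    for q, p in tranche_demand(demand, q_max=200.0, n_tranches=5):
        print(f"up to {q:5.1f} units: marginal value ~ {p:6.2f}")
    ```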

  2. High temperature thermal behaviour modeling of large-scale fused silica optics for laser facility

    Institute of Scientific and Technical Information of China (English)

    Yu Jing-Xia; He Shao-Bo; Xiang Xia; Yuan Xiao-Dong; Zheng Wan-Guo; Lü Hai-Bing; Zu Xiao-Tao

    2012-01-01

    High temperature annealing is often used for the stress control of optical materials. However, weight and viscosity at high temperature may destroy the surface morphology, especially for the large-scale, thin and heavy optics used in large laser facilities. It is necessary to understand the thermal behaviour and design proper support systems for large-scale optics at high temperature. In this work, three support systems for fused silica optics are designed and simulated with the finite element method. After the analysis of the thermal behaviours of the different support systems, their advantages and disadvantages can be revealed. The results show that the support with the optical surface vertical is optimal because both pollution and deformation of the optics could be well controlled during annealing at high temperature. The annealing process of the optics irradiated by a CO2 laser is also simulated. It can be concluded that high temperature annealing can effectively reduce the residual stress. However, the effects of annealing on the surface morphology of the optics are complex. Annealing creep is closely related to the residual stress and strain distribution. In regions with large residual stress, the creep is too large and probably increases the deformation gradient, which may affect the laser beam propagation.

  3. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Science.gov (United States)

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  4. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Directory of Open Access Journals (Sweden)

    Xianlei Dong

    Full Text Available Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  5. Observational Features of Large-Scale Structures as Revealed by the Catastrophe Model of Solar Eruptions

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Large-scale magnetic structures are the main carrier of major eruptions in the solar atmosphere. These structures are rooted in the photosphere and are driven by the unceasing motion of the photospheric material through a series of equilibrium configurations. The motion brings energy into the coronal magnetic field until the system ceases to be in equilibrium. The catastrophe theory for solar eruptions indicates that loss of mechanical equilibrium constitutes the main trigger mechanism of major eruptions, usually shown up as solar flares, eruptive prominences, and coronal mass ejections (CMEs). Magnetic reconnection which takes place at the very beginning of the eruption as a result of plasma instabilities/turbulence inside the current sheet, converts magnetic energy into heating and kinetic energy that are responsible for solar flares, and for accelerating both plasma ejecta (flows and CMEs) and energetic particles. Various manifestations are thus related to one another, and the physics behind these relationships is catastrophe and magnetic reconnection. This work reports on recent progress in both theoretical research and observations on eruptive phenomena showing the above manifestations. We start by displaying the properties of large-scale structures in the corona and the related magnetic fields prior to an eruption, and show various morphological features of the disrupting magnetic fields. Then, in the framework of the catastrophe theory, we look into the physics behind those features investigated in a succession of previous works, and discuss the approaches they used.

  6. Sensitivities of Cumulus-Ensemble Rainfall in a Cloud-Resolving Model with Parameterized Large-Scale Dynamics.

    Science.gov (United States)

    Mapes, Brian E.

    2004-09-01

    The problem of closure in cumulus parameterization requires an understanding of the sensitivities of convective cloud systems to their large-scale setting. As a step toward such an understanding, this study probes some sensitivities of a simulated ensemble of convective clouds in a two-dimensional cloud-resolving model (CRM). The ensemble is initially in statistical equilibrium with a steady imposed background forcing (cooling and moistening). Large-scale stimuli are imposed as horizontally uniform perturbations nudged into the model fields over 10 min, and the rainfall response of the model clouds is monitored. In order to reduce a major source of artificial insensitivity in the CRM, a simple parameterization scheme is devised to account for heating-induced large-scale (i.e., domain averaged) vertical motions that would develop in nature but are forbidden by the periodic boundary conditions. The effects of this large-scale vertical motion are parameterized as advective tendency terms that are applied as a uniform forcing throughout the domain, just like the background forcing. This parameterized advection is assumed to lag rainfall (used as a proxy for heating) by a specified time scale. The time scale determines (via a gravity wave space time conversion factor) the size of the large-scale region represented by the periodic CRM domain, which can be of arbitrary size or dimensionality. The sensitivity of rain rate to deep cooling and moistening, representing an upward displacement by a large-scale wave of first baroclinic mode structure, is positive. Near linearity is found for ±1 K perturbations, and the sensitivity is about equally divided between temperature and moisture effects. For a second baroclinic mode (vertical dipole) displacement, the sign of the perturbation in the lower troposphere dominates the convective response. In this dipole case, the initial sensitivity is very large, but quantitative results are distorted by the oversimplified large-scale
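
    A minimal sketch of the lagged, domain-uniform forcing idea described above (illustrative parameter values only, not the CRM implementation): the parameterized tendency relaxes toward a value proportional to the rain rate with a specified time scale.

    ```python
    # Hypothetical first-order lag: the domain-uniform forcing F responds to the rain
    # rate R (a proxy for heating) with time scale tau, dF/dt = (alpha*R - F)/tau.
    import numpy as np

    def lagged_forcing(rain_rate, dt=60.0, tau=3600.0, alpha=1.0):
        """rain_rate: 1-D array of domain-mean rain rate per time step."""
        forcing = np.zeros_like(rain_rate)
        for n in range(1, len(rain_rate)):
            forcing[n] = forcing[n - 1] + dt * (alpha * rain_rate[n - 1] - forcing[n - 1]) / tau
        return forcing

    rain = np.maximum(0.0, np.random.default_rng(0).normal(1.0, 0.5, size=1440))  # synthetic
    F = lagged_forcing(rain)     # lagged large-scale tendency applied as uniform forcing
    ```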

  7. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    Science.gov (United States)

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  8. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    Science.gov (United States)

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.

  9. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2014-01-01

    Full Text Available This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.

  10. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, as well as to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  11. Large-scale, realistic laboratory modeling of M2 internal tide generation at the Luzon Strait

    CERN Document Server

    Mercier, Matthieu J; Helfrich, Karl; Sommeria, Joël; Viboud, Samuel; Didelle, Henri; Saidi, Sasan; Dauxois, Thierry; Peacock, Thomas

    2015-01-01

    The complex double-ridge system in the Luzon Strait in the South China Sea (SCS) is one of the strongest sources of internal tides in the oceans, associated with which are some of the largest amplitude internal solitary waves on record. An issue of debate, however, has been the specific nature of their generation mechanism. To provide insight, we present the results of a large-scale laboratory experiment performed at the Coriolis platform. The experiment was carefully designed so that the relevant dimensionless parameters, which include the excursion parameter, criticality, Rossby, and Froude numbers, closely matched the ocean scenario. The results advocate that a broad and coherent weakly nonlinear, three-dimensional, M2 internal tide that is shaped by the overall geometry of the double-ridge system is radiated into the South China Sea and subsequently steepens, as opposed to being generated by a particular feature or localized region within the ridge system.

  12. Numerical modeling of in-vessel melt water interaction in large scale PWR's

    Energy Technology Data Exchange (ETDEWEB)

    Kolev, N.I. [Siemens AG, KWU NA-M, Erlangen (Germany)

    1998-01-01

    This paper presents a comparison between IVA4 simulations and FARO L14, L20 experiments. Both experiments were performed with the same geometry but under different initial pressures, 51 and 20 bar respectively. A pretest prediction for test L21, which is intended to be performed under an initial pressure of 5 bar, is also presented. The strong effect of the volume expansion of the evaporating water at low pressure is demonstrated. An in-vessel simulation for a 1500 MW el. PWR is presented. The insight gained from this study is that at no time are conditions in force for the feared large-scale melt-water intermixing at low pressure, owing to the limiting effect of the expansion process, which accelerates the melt and the water into all available flow paths. (author)

  13. Experimental Investigation of Wave-Induced Ship Hydroelastic Vibrations by Large-Scale Model Measurement in Coastal Waves

    Directory of Open Access Journals (Sweden)

    Jialong Jiao

    2016-01-01

    Full Text Available Ship hydroelastic vibration is an issue involving mutual interactions among inertial, hydrodynamic, and elastic forces. The conventional laboratory tests for wave-induced hydroelastic vibrations of ships are performed in tank conditions. An alternative approach to the conventional laboratory basin measurement, proposed in this paper, is to perform tests by large-scale model measurement in real sea waves. In order to perform this kind of novel experimental measurement, a large-scale free running model and the experiment scheme are proposed and introduced. The proposed testing methodology is quite general and applicable to a wide range of ship hydrodynamic experimental research. The testing procedure is presented by illustrating a 5-hour voyage trial of the large-scale model carried out at Huludao harbor of China in August 2015. Hammer tests were performed to identify the natural frequencies of the ship model at the beginning of the tests. Then a series of tests under different sailing conditions were carried out to investigate the vibrational characteristics of the model. As a postvoyage analysis, load, pressure, acceleration, and motion responses of the model are studied with respect to different time durations based on the measured data.

  14. Estimating Route Choice Models from Stochastically Generated Choice Sets on Large-Scale Networks Correcting for Unequal Sampling Probability

    DEFF Research Database (Denmark)

    Vacca, Alessandro; Prato, Carlo Giacomo; Meloni, Italo

    2015-01-01

    A critical issue in the estimation of route choice models is the dependency of the parameter estimates on the choice set generation technique. Bias introduced in model estimation has been corrected only for the random walk algorithm, which has problematic applicability to large-scale networks. This study proposes a correction term for the sampling probability of routes extracted with stochastic route generation. The term is easily applicable to large-scale networks and various environments, given its dependence only on a random number generator and the Dijkstra shortest path algorithm. The implementation for revealed preferences data, which consist of actual route choices collected in Cagliari, Italy, shows the feasibility of generating routes stochastically in a high-resolution network and calculating the correction factor. The model estimation with and without correction illustrates how the correction not only improves the goodness of fit but also turns illogical signs...

  15. Descriptor-variable approach to modeling and optimization of large-scale systems. Final report, March 1976--February 1979

    Energy Technology Data Exchange (ETDEWEB)

    Stengel, D N; Luenberger, D G; Larson, R E; Cline, T B

    1979-02-01

    A new approach to modeling and analysis of systems is presented that exploits the underlying structure of the system. The development of the approach focuses on a new modeling form, called 'descriptor variable' systems, that was first introduced in this research. Key concepts concerning the classification and solution of descriptor-variable systems are identified, and theories are presented for the linear case, the time-invariant linear case, and the nonlinear case. Several standard systems notions are demonstrated to have interesting interpretations when analyzed via descriptor-variable theory. The approach developed also focuses on the optimization of large-scale systems. Descriptor variable models are convenient representations of subsystems in an interconnected network, and optimization of these models via dynamic programming is described. A general procedure for the optimization of large-scale systems, called spatial dynamic programming, is presented where the optimization is spatially decomposed in the way standard dynamic programming temporally decomposes the optimization of dynamical systems. Applications of this approach to large-scale economic markets and power systems are discussed.

  16. Context-dependent encoding of fear and extinction memories in a large-scale network model of the basal amygdala.

    OpenAIRE

    Ioannis Vlachos; Cyril Herry; Andreas Lüthi; Ad Aertsen; Arvind Kumar

    2011-01-01

    The basal nucleus of the amygdala (BA) is involved in the formation of context-dependent conditioned fear and extinction memories. To understand the underlying neural mechanisms we developed a large-scale neuron network model of the BA, composed of excitatory and inhibitory leaky-integrate-and-fire neurons. Excitatory BA neurons received conditioned stimulus (CS)-related input from the adjacent lateral nucleus (LA) and contextual input from the hippocampus or medial pr...
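
    For context, a single leaky integrate-and-fire neuron, the building block named above, can be sketched as follows (generic textbook parameters, not the authors' network code):

    ```python
    # Generic leaky integrate-and-fire neuron with constant input current (illustrative only).
    dt, t_end = 0.1, 200.0                                   # ms
    tau_m, v_rest, v_reset, v_th = 20.0, -70.0, -70.0, -50.0  # ms, mV
    r_m, i_ext = 10.0, 2.5                                   # membrane resistance (MOhm), input (nA)

    v = v_rest
    spike_times = []
    for step in range(int(t_end / dt)):
        # dv/dt = (-(v - v_rest) + R*I) / tau_m
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_th:                                        # spike: record and reset
            spike_times.append(step * dt)
            v = v_reset
    print(f"{len(spike_times)} spikes in {t_end} ms")
    ```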

  17. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    Science.gov (United States)

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problems but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
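
    As a much simplified illustration of the underlying problem class (not the saCeSS method itself), the snippet below fits the parameters of a small nonlinear dynamic model to synthetic data by minimizing a least-squares cost:

    ```python
    # Toy parameter estimation for a nonlinear dynamic model: fit growth rate r and
    # capacity K of a logistic ODE to noisy observations (illustrative only).
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    def simulate(params, t_obs, x0=0.1):
        r, K = params
        sol = solve_ivp(lambda t, x: r * x * (1 - x / K), (t_obs[0], t_obs[-1]),
                        [x0], t_eval=t_obs)
        return sol.y[0]

    def cost(params, t_obs, x_obs):
        return np.sum((simulate(params, t_obs) - x_obs) ** 2)

    t_obs = np.linspace(0.0, 10.0, 25)
    x_obs = simulate([0.8, 2.0], t_obs) + 0.05 * np.random.default_rng(0).normal(size=t_obs.size)
    fit = minimize(cost, x0=[0.3, 1.0], args=(t_obs, x_obs), method="Nelder-Mead")
    print(fit.x)   # estimated (r, K)
    ```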

  18. Analysis of effectiveness of possible queuing models at gas stations using the large-scale queuing theory

    Directory of Open Access Journals (Sweden)

    Slaviša M. Ilić

    2011-10-01

    Full Text Available This paper analyzes the effectiveness of possible models for queuing at gas stations, using a mathematical model from large-scale queuing theory. Based on actual data collected and a statistical analysis of the expected intensity of vehicle arrivals and queuing at gas stations, the real queuing process was modeled mathematically and certain parameters were quantified, revealing the weaknesses of the existing models and the possible benefits of an automated queuing model.
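
    For reference, the standard M/M/c (Erlang C) formulas often used as a building block in such queuing analyses are sketched below; the numerical values are illustrative and are not taken from the paper:

    ```python
    # Steady-state metrics of an M/M/c queue, e.g. vehicles queuing at c pumps (illustrative values).
    import math

    def mmc_metrics(lam, mu, c):
        """lam: arrival rate, mu: service rate per server, c: number of servers."""
        a = lam / mu                         # offered load (Erlangs)
        rho = a / c                          # utilization, must be < 1 for stability
        if rho >= 1:
            raise ValueError("unstable queue: lam >= c * mu")
        tail = (a**c / math.factorial(c)) / (1 - rho)
        summ = sum(a**k / math.factorial(k) for k in range(c))
        erlang_c = tail / (summ + tail)      # probability an arrival has to wait
        wq = erlang_c / (c * mu - lam)       # mean waiting time in queue
        lq = lam * wq                        # mean queue length (Little's law)
        return erlang_c, wq, lq

    print(mmc_metrics(lam=0.5, mu=0.2, c=4))  # e.g. 0.5 arrivals/min, 5 min service, 4 pumps
    ```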

  19. A stochastic location and allocation model for critical items to response large-scale emergencies: A case of Turkey

    Directory of Open Access Journals (Sweden)

    Erkan Celik

    2016-10-01

    Full Text Available This paper aims to decide on the number of facilities and their locations, procurement for pre- and post-disaster, and allocation to mitigate the effects of large-scale emergencies. A two-stage stochastic mixed integer programming model is proposed that combines facility location-prepositioning, decisions on pre-stocking levels for emergency supplies, and allocation of located distribution centers (DCs) to affected locations and distribution of those supplies to several demand locations after large-scale emergencies with uncertainty in demand. Also, the use of the model is demonstrated through a case study for prepositioning of supplies for probable large-scale emergencies in the eastern and southeastern Anatolian regions of Turkey. The results provide a framework for relief organizations to determine the location and number of DCs in different settings, by using the proposed model with the main parameters: the capacity of facilities, the probability of each demand point being affected, the severity of events, and the maximum distance between a demand point and a distribution center.
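
    A generic two-stage stochastic facility location formulation of the kind described above is sketched below; symbols and constraints are illustrative and do not reproduce the paper's exact model:

    ```latex
    % Generic two-stage stochastic facility location sketch (illustrative, not the paper's model):
    % y_i : open DC i (binary), s_i : pre-positioned stock at DC i  (first stage)
    % x_{ij}^{\omega} : supplies shipped from DC i to demand point j in scenario \omega (second stage)
    \begin{align*}
    \min_{y,s,x} \quad & \sum_i \big( f_i y_i + h_i s_i \big)
      + \sum_{\omega} p_{\omega} \sum_{i,j} c_{ij} x_{ij}^{\omega} \\
    \text{s.t.} \quad & \sum_i x_{ij}^{\omega} \ge d_j^{\omega} && \forall j,\ \omega \\
    & \sum_j x_{ij}^{\omega} \le s_i \le M\, y_i && \forall i,\ \omega \\
    & y_i \in \{0,1\}, \quad s_i,\ x_{ij}^{\omega} \ge 0 .
    \end{align*}
    ```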

  20. Symmetry in stochasticity: Random walk models of large-scale structure

    Indian Academy of Sciences (India)

    Ravi K Sheth

    2011-07-01

    This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is summarized in R K Sheth, AIP Conf. Proc. 1132, 158 (2009). So only a summary is given here, and instead a few new excursion set related ideas and results which are not published elsewhere are presented. One is a generalization of the formation time distribution to the case in which formation corresponds to the time when half the mass was first assembled in pieces, each of which was at least 1/ times the final mass, and where ≥ 2; another is an analysis of the first crossing distribution of the Ornstein–Uhlenbeck process. The first derives from the mirror-image symmetry argument for random walks which Chandrasekhar described so elegantly in 1943; the second corrects a misuse of this argument. Finally, some discussion of the correlated steps and correlated walks assumptions associated with the excursion set approach, and the relation between these and peaks theory are also included. These are problems in which Chandra’s mirror-image symmetry is broken.
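
    The first-crossing construction at the heart of the excursion set approach can be illustrated with a short Monte Carlo sketch (constant barrier, uncorrelated steps; illustrative only):

    ```python
    # Monte Carlo estimate of the first-crossing distribution of a constant barrier
    # delta_c by random walks with uncorrelated Gaussian steps (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    n_walks, n_steps, dS, delta_c = 20000, 400, 0.01, 1.686

    steps = rng.normal(scale=np.sqrt(dS), size=(n_walks, n_steps))
    walks = np.cumsum(steps, axis=1)                 # delta as a function of S = n*dS
    crossed = walks >= delta_c
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), -1)

    S_first = (first[first >= 0] + 1) * dS           # first-crossing "time" S
    hist, edges = np.histogram(S_first, bins=40, density=True)
    # hist approximates f(S); for a constant barrier it should follow
    # f(S) = delta_c / sqrt(2*pi*S**3) * exp(-delta_c**2 / (2*S))
    ```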

  1. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  2. Dynamic Modeling and Analysis of the Large-Scale Rotary Machine with Multi-Supporting

    Directory of Open Access Journals (Sweden)

    Xuejun Li

    2011-01-01

    Full Text Available The large-scale rotary machine with multiple supports, such as the rotary kiln and the rope laying machine, is key equipment in the architectural, chemical, and agricultural industries. The body, rollers, wheels, and bearings constitute a chain multibody system. Axis line deflection is a vital parameter for determining the mechanical state of a rotary machine; thus the body axial vibration needs to be studied for dynamic monitoring and adjustment of the machine. By using the Riccati transfer matrix method, the body system of the rotary machine is divided into many subsystems composed of three elements, namely, a rigid disk, an elastic shaft, and a linear spring. Multiple wheel-bearing structures are simplified as springs. The transfer matrices of the body system and the overall transfer equation are developed, as well as the overall response motion equation. Taking a rotary kiln as an instance, natural frequencies, modal shapes, and the response vibration for a given exciting axis line deflection are obtained by numerical computation. The body vibration modal curves illustrate the cause of dynamical errors in the common axis line measurement methods. The displacement response can be used for further analysis and compensation of measurement dynamical errors. The overall response motion equation can also be applied to predict the body motion under abnormal mechanical conditions and to provide theoretical guidance for machine failure diagnosis.

  3. A low-dimensional model predicting geometry-dependent dynamics of large-scale coherent structures in turbulence

    CERN Document Server

    Bai, Kunlun; Brown, Eric

    2015-01-01

    We test the ability of a general low-dimensional model for turbulence to predict geometry-dependent dynamics of large-scale coherent structures, such as convection rolls. The model consists of stochastic ordinary differential equations, which are derived as a function of boundary geometry from the Navier-Stokes equations (Brown and Ahlers 2008). We test the model using Rayleigh-Bénard convection experiments in a cubic container. The model predicts a new mode in which the alignment of a convection roll switches between diagonals. We observe this mode with a measured switching rate within 30% of the prediction.

  4. Using SMOS for validation and parameter estimation of a large scale hydrological model in Paraná river basin

    Science.gov (United States)

    Colossi, Bibiana; Fleischmann, Ayan; Siqueira, Vinicius; Bitar, Ahmad Al; Paiva, Rodrigo; Fan, Fernando; Ruhoff, Anderson; Pontes, Paulo; Collischonn, Walter

    2017-04-01

    Large-scale representation of soil moisture conditions can be achieved through hydrological simulation and remote sensing techniques. However, both methodologies have several limitations, which suggests the potential benefit of using both sources of information together. This study therefore had two main objectives: to perform a cross-validation between the remotely sensed soil moisture from the SMOS (Soil Moisture and Ocean Salinity) L3 product and the soil moisture simulated with the large-scale hydrological model MGB-IPH; and to evaluate the potential benefits of including remotely sensed soil moisture in model parameter estimation. The study analyzed results for the South American continent, where hydrometeorological monitoring is usually scarce. The study was performed in the Paraná River Basin, an important South American basin whose extension and particular characteristics allow the representation of different climatic, geological, and, consequently, hydrological conditions. Soil moisture estimated with SMOS was transformed from water content to a Soil Water Index (SWI) so that it is comparable to the saturation degree simulated with the MGB-IPH model. The multi-objective complex evolution algorithm (MOCOM-UA) was applied for automatic model calibration considering only remotely sensed soil moisture, only discharge, and both sources of information together. Results show that this type of analysis can be very useful, because it allows limitations in the model structure to be recognized. In the case of hydrological model calibration, this approach can avoid the use of out-of-range parameters adopted to compensate for model limitations. It also indicates the aspects of the model where efforts should be concentrated in order to improve the representation of hydrological or hydraulic processes. Automatic calibration gives an estimate of how different sources of information can be applied and the quality of the results they might yield. We emphasize that these findings can be valuable for hydrological modeling in large scale South American

  5. Hydrogen production by steam reforming of DME in a large scale CFB reactor. Part I: computational model and predictions

    OpenAIRE

    Elewuwa, Francis A.; Makkawi, Yassir T.

    2015-01-01

    This study presents a computational fluid dynamic (CFD) study of Dimethyl Ether steam reforming (DME-SR) in a large scale Circulating Fluidized Bed (CFB) reactor. The CFD model is based on Eulerian-Eulerian dispersed flow and solved using commercial software (ANSYS FLUENT). The DME-SR reaction scheme and kinetics in the presence of a bifunctional catalyst of CuO/ZnO/Al2O3+ZSM-5 were incorporated in the model using an in-house developed user-defined function. The model was validated by comparing...

  6. UAS in the NAS Project: Large-Scale Communication Architecture Simulations with NASA GRC Gen5 Radio Model

    Science.gov (United States)

    Kubat, Gregory

    2016-01-01

    This report provides a description and performance characterization of the large-scale, relay-architecture UAS communications simulation capability developed for the NASA GRC UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC Flight-Test Radio. Contained in the report are a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling of results and performance data with observations.

  7. Can key vegetation parameters be retrieved at the large-scale using LAI satellite products and a generic modelling approach ?

    Science.gov (United States)

    Dewaele, Helene; Calvet, Jean-Christophe; Carrer, Dominique; Laanaia, Nabil

    2016-04-01

    In the context of climate change, the need to assess and predict the impact of droughts on vegetation and water resources increases. The generic approaches permitting the modelling of continental surfaces at the large scale have progressed in recent decades towards land surface models able to couple the cycles of water, energy and carbon. A major source of uncertainty in these generic models is the maximum available water content of the soil (MaxAWC) usable by plants, which is constrained by the rooting depth parameter and is unobservable at the large scale. In this study, vegetation products derived from the SPOT/VEGETATION satellite data available since 1999 are used to optimize the model rooting depth over rainfed croplands and permanent grasslands at 1 km x 1 km resolution. The inter-annual variability of the Leaf Area Index (LAI) is simulated over France using the Interactions between Soil, Biosphere and Atmosphere, CO2-reactive (ISBA-A-gs) generic land surface model and a two-layer force-restore (FR-2L) soil profile scheme. The leaf nitrogen concentration directly impacts the modelled value of the maximum annual LAI. In a first step this parameter is estimated for the last 15 years by using an iterative procedure that matches the maximum values of LAI modelled by ISBA-A-gs to the highest satellite-derived LAI values. The Root Mean Square Error (RMSE) is used as a cost function to be minimized. In a second step, the model rooting depth is optimized in order to reproduce the inter-annual variability resulting from the drought impact on the vegetation. The evaluation of the retrieved soil rooting depth is achieved using the French agricultural statistics of Agreste. Retrieved leaf nitrogen concentrations are compared with values from previous studies. The preliminary results show a good potential of this approach to estimate these two vegetation parameters (leaf nitrogen concentration, MaxAWC) at the large scale over grassland areas. Besides, a marked impact of the

  8. Influence of weathering and pre-existing large scale fractures on gravitational slope failure: insights from 3-D physical modelling

    Directory of Open Access Journals (Sweden)

    D. Bachmann

    2004-01-01

    Full Text Available Using a new 3-D physical modelling technique we investigated the initiation and evolution of large-scale landslides in the presence of pre-existing large-scale fractures, taking into account the weakening of the slope material due to alteration/weathering. The modelling technique is based on specially developed, properly scaled analogue materials, as well as on an original vertical accelerator device enabling increases in the 'gravity acceleration' up to a factor of 50. The weathering primarily affects the uppermost layers through water circulation. We simulated the effect of this process by making models of two parts. The shallower one represents the zone subject to homogeneous weathering and is made of low strength material of compressive strength σl. The deeper (core) part of the model is stronger and simulates intact rocks. Deformation of such a model subjected to the gravity force occurred only in its upper (low strength) layer. In another set of experiments, low strength (σw) narrow planar zones sub-parallel to the slope surface (σw < σl) were introduced into the model's superficial low strength layer to simulate localized highly weathered zones. In this configuration landslides were initiated much more easily (at lower 'gravity force'), were shallower and had a smaller horizontal size largely defined by the weak zone size. Pre-existing fractures were introduced into the model by cutting it along a given plane. They proved to have a small influence on the slope stability, except when they were associated with highly weathered zones. In this latter case the fractures laterally limited the slides. Deep-seated rockslide initiation is thus directly defined by the mechanical structure of the hillslope's uppermost levels and especially by the presence of the weak zones due to weathering. The large-scale fractures play a more passive role and can only influence the shape and the volume of the sliding units.

  9. Association of parameter, software, and hardware variation with large-scale behavior across 57,000 climate models.

    Science.gov (United States)

    Knight, Christopher G; Knight, Sylvia H E; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J; Kettleborough, Jamie A; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A; Allen, Myles R

    2007-07-24

    In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally.

  10. Large-scale features of Pliocene climate: results from the Pliocene Model Intercomparison Project

    OpenAIRE

    A. M. Haywood; D. J. Hill; Dolan, A. M.; B. L. Otto-Bliesner; F. Bragg; Chan, W.-L.; Chandler, M. A.; Contoux, C.; H. J. Dowsett; A. Jost; Y. Kamae; Lohmann, G.; Lunt, D. J.; Abe-Ouchi, A.; Pickering, S.J.

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-model/data intercomparison. Whilst commonalities in model outputs for the Pliocene are cle...

  11. The Effects of Uncertainty in Speed-Flow Curve Parameters on a Large-Scale Model

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    Uncertainty is inherent in transport models and prevents the use of a deterministic approach when traffic is modeled. Quantifying uncertainty thus becomes an indispensable step to produce a more informative and reliable output of transport models. In traffic assignment models, volume-delay functions...

  12. The Effects of Uncertainty in Speed-Flow Curve Parameters on a Large-Scale Model

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    Uncertainty is inherent in transport models and prevents the use of a deterministic approach when traffic is modeled. Quantifying uncertainty thus becomes an indispensable step to produce a more informative and reliable output of transport models. In traffic assignment models, volume-delay functions ... uncertainty. This aspect is evident particularly for stretches of the network with a high number of competing routes. Model sensitivity was also tested for BPR parameter uncertainty combined with link capacity uncertainty. The resultant increase in model sensitivity demonstrates even further the importance...
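
    For reference, the BPR volume-delay function whose parameter uncertainty is studied above has the standard form below; the default parameter values shown are the common textbook choices, used only for illustration:

    ```python
    def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
        """Standard BPR volume-delay function: congested travel time on a link.
        t0: free-flow travel time; alpha, beta: the parameters whose uncertainty
        is propagated in such studies (defaults are the common BPR values)."""
        return t0 * (1.0 + alpha * (volume / capacity) ** beta)

    # e.g. a link with 10 min free-flow time loaded at 90% of capacity:
    print(bpr_travel_time(10.0, 900, 1000))   # ~11.0 minutes
    ```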

  13. Development of Residential Prototype Building Models and Analysis System for Large-Scale Energy Efficiency Studies Using EnergyPlus

    Energy Technology Data Exchange (ETDEWEB)

    Mendon, Vrushali V.; Taylor, Zachary T.

    2014-09-10

    Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges of developing residential building models to characterize new residential building stock is to allow for flexibility to address variability in house features like geometry, configuration, HVAC systems, etc. Researchers solved this problem in a novel way by creating a simulation structure capable of creating fully functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent a majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.

  14. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: the Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  15. Investigation of Large Scale Cortical Models on Clustered Multi-Core Processors

    Science.gov (United States)

    2013-02-01

    [Table: acceleration platforms examined (x86, Cell, GPGPU) for the HTM [22], Dean [25], Izhikevich [26], Hodgkin-Huxley [27], and Morris-Lecar [28] models.] Four spiking neuron models are examined: the Hodgkin-Huxley [27], Izhikevich [26], Wilson [29], and Morris-Lecar [28] models. The Hodgkin-Huxley model is considered to be ... (activation and inactivation of Na currents). Table 2 compares the computation properties of the four models. The Hodgkin-Huxley model utilizes exponential ...
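
    For context, the Izhikevich model referenced in this record can be sketched in a few lines (standard regular-spiking parameters; not code from the report):

    ```python
    # Minimal Izhikevich neuron with regular-spiking parameters a, b, c, d (illustrative only).
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    dt, t_end, I = 0.5, 200.0, 10.0            # ms, ms, constant input current
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(t_end / dt)):
        # v' = 0.04 v^2 + 5 v + 140 - u + I ;  u' = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                          # spike: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d
    print(len(spikes), "spikes in", t_end, "ms")
    ```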

  16. Large-scale features of Pliocene climate: results from the Pliocene Model Intercomparison Project

    Directory of Open Access Journals (Sweden)

    A. M. Haywood

    2012-07-01

    Full Text Available Climate and environments of the mid-Pliocene Warm Period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a co-ordinated multi-model and multi-model/data intercomparison. Whilst commonalities in model outputs for the Pliocene are evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data/model comparison highlights the potential for models to underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Sensitivity tests exploring the "known unknowns" in modelling Pliocene climate specifically relevant to the high-latitudes are also essential (e.g. palaeogeography, gateways, orbital forcing and trace gasses). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) suggest that ESS is greater than Climate Sensitivity (CS), and that the ratio of ESS to CS is between 1 and 2, with a best estimate of 1.5.

  17. Large-scale features of Pliocene climate: results from the Pliocene Model Intercomparison Project

    Directory of Open Access Journals (Sweden)

    A. M. Haywood

    2013-01-01

    Full Text Available Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-model/data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data/model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gasses). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  18. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for the mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph model for the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks" that are presented in great detail and includes the source code of several of the techniques p...

  19. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    Science.gov (United States)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; Kamae, Y.; Lohmann, G.; Lunt, D. J.; Abe-Ouchi, A.; Pickering, S. J.; Ramstein, G.; Rosenbloom, N. A.; Salzmann, U.; Sohl, L.; Stepanek, C.; Ueda, H.; Yan, Q.; Zhang, Z.

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-model/data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data/model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gasses). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS), support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  20. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    Science.gov (United States)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
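
    As a point of reference for the "universal dependence" noted above, the genus curve of a Gaussian (random phase) field has a standard analytic form; the LaTeX sketch below states it with the amplitude A written as a spectrum-dependent normalisation (the symbols are generic, not quoted from this record).

        % Genus per unit volume of a Gaussian random field, as a function of the
        % threshold \nu (density threshold in units of the field's standard deviation):
        g(\nu) = A\,\bigl(1-\nu^{2}\bigr)\,e^{-\nu^{2}/2},
        \qquad
        A = \frac{1}{(2\pi)^{2}}\left(\frac{\langle k^{2}\rangle}{3}\right)^{3/2},
        % where \langle k^{2}\rangle is the second moment of the smoothed power spectrum.

    The symmetry of this curve about \nu = 0 and its zero crossings at \nu = \pm 1 are the random-phase signatures against which evolved simulations and the redshift-survey sample can be compared.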

  1. Assessment of climate change impacts on rainfall using large scale climate variables and downscaling models – A case study

    Indian Academy of Sciences (India)

    Azadeh Ahmadi; Ali Moridi; Elham Kakaei Lafdani; Ghasem Kianpisheh

    2014-10-01

    Many of the applied techniques in water resources management can be directly or indirectly influenced by hydro-climatology predictions. In recent decades, utilizing large scale climate variables as predictors of hydrological phenomena and downscaling numerical weather ensemble forecasts has revolutionized long-lead predictions. In this study, two types of rainfall prediction models are developed to predict the rainfall of the Zayandehrood dam basin located in the central part of Iran. The first, seasonal model is based on large scale climate signals data from around the world. In order to determine the inputs of the seasonal rainfall prediction model, correlation coefficient analysis and the new Gamma Test (GT) method are utilized. Comparison of modelling results shows that the Gamma Test method improves the Nash–Sutcliffe efficiency coefficient of the model by 8% and 10% for the dry and wet seasons, respectively. In this study, a Support Vector Machine (SVM) model has been used for predicting rainfall in the region, and its results are compared with benchmark models such as K-nearest neighbours (KNN) and Artificial Neural Networks (ANN). The results show better performance of the SVM model at the testing stage. In the second model, the statistical downscaling model (SDSM), a popular downscaling tool, has been used. In this model, using the outputs from a GCM, the rainfall of the Zayandehrood dam is projected under two climate change scenarios. The most effective variables have been identified among 26 predictor variables. Comparison of the results of the two models shows that the developed SVM model has smaller errors in monthly rainfall estimation. The results show that rainfall in future wet periods is higher than historical values, while it is lower than historical values in dry periods. The highest monthly uncertainty of future rainfall occurs in March and the lowest in July.
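
    As an illustration of the kind of seasonal prediction model described above (not the authors' code), the sketch below fits a Support Vector Machine regressor to a matrix of large-scale climate indices and scores it against a KNN benchmark with the Nash–Sutcliffe efficiency; the data, predictor choice and parameter values are placeholders.

        # Hypothetical sketch: SVM vs. KNN for seasonal rainfall prediction from climate indices.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVR
        from sklearn.neighbors import KNeighborsRegressor

        def nash_sutcliffe(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        rng = np.random.default_rng(42)
        X = rng.normal(size=(200, 6))                 # stand-in for lagged large-scale climate signals
        y = 30 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=3, size=200)   # synthetic seasonal rainfall (mm)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5)).fit(X_tr, y_tr)
        knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)).fit(X_tr, y_tr)

        for name, model in [("SVM", svm), ("KNN", knn)]:
            print(name, "NSE =", round(nash_sutcliffe(y_te, model.predict(X_te)), 3))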

  2. Development and application of a large scale river system model for National Water Accounting in Australia

    Science.gov (United States)

    Dutta, Dushmanta; Vaze, Jai; Kim, Shaun; Hughes, Justin; Yang, Ang; Teng, Jin; Lerat, Julien

    2017-04-01

    Existing global and continental scale river models, mainly designed for integration with global climate models, are of very coarse spatial resolution and lack many important hydrological processes, such as overbank flow, irrigation diversion and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing water accounts, which have become increasingly important for water resources planning and management at regional and national scales. A continental scale river system model called the Australian Water Resource Assessment River System model (AWRA-R) has been developed and implemented for national water accounting in Australia using a node-link architecture. The model includes major hydrological processes, anthropogenic water utilisation and storage routing that influence the streamflow in both regulated and unregulated river systems. Two key components of the model are an irrigation model to compute water diversion for irrigation use and associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and associated floodplain fluxes and stores. The results in the Murray-Darling Basin show highly satisfactory performance of the model, with a median daily Nash-Sutcliffe Efficiency (NSE) of 0.64 and a median annual bias of less than 1% for the calibration period (1970-1991), and a median daily NSE of 0.69 and a median annual bias of 12% for the validation period (1992-2014). The results have demonstrated that the performance of the model is less satisfactory when key processes such as overbank flow, groundwater seepage and irrigation diversion are switched off. The AWRA-R model, which has been operationalised by the Australian Bureau of Meteorology for continental scale water accounting, has contributed to improvements in the national water account by substantially reducing the unaccounted difference volume (gain/loss).

  3. An application of a large scale conceptual hydrological model over the Elbe region

    Directory of Open Access Journals (Sweden)

    M. Lobmeyr

    1999-01-01

    Full Text Available This paper investigates the ability of the VIC-2L model coupled to a routing model to reproduce streamflow in the catchment of the lower Elbe River, Germany. The VIC-2L model, a hydrologically-based land surface scheme (LSS) which has been tested extensively in the Project for Intercomparison of Land-surface Parameterization Schemes (PILPS), is set up on the rotated 1/6 degree grid of the atmospheric regional scale model (REMO) used in the Baltic Sea Experiment (BALTEX). For a 10 year period, the VIC-2L model is forced in daily time steps with measured daily means of precipitation, air temperature, pressure, wind speed, air humidity and daily sunshine duration. VIC-2L model output of surface runoff and baseflow is used as input for the routing model, which transforms modelled runoff into streamflow; this is compared to measured streamflow at selected gauge stations. The water balance of the basin is investigated and the model results on daily, monthly and annual time scales are discussed. Discrepancies appear in time periods where snow and ice processes are important. Extreme flood events are analyzed in more detail. The influence of calibration with respect to runoff is examined.

  4. Modeling oxygen isotopes in the Pliocene: Large-scale features over the land and ocean

    Science.gov (United States)

    Tindall, Julia C.; Haywood, Alan M.

    2015-09-01

    The first isotope-enabled general circulation model (GCM) simulations of the Pliocene are used to discuss the interpretation of δ18O measurements for a warm climate. The model suggests that spatial patterns of Pliocene ocean surface δ18O (δ18Osw) were similar to those of the preindustrial period; however, Arctic and coastal regions were relatively depleted, while South Atlantic and Mediterranean regions were relatively enriched. Modeled δ18Osw anomalies are closely related to modeled salinity anomalies, which supports using δ18Osw as a paleosalinity proxy. Modeled Pliocene precipitation δ18O (δ18Op) was enriched relative to preindustrial values (but with regions of relative depletion), and modeled δ18Op anomalies are broadly related to temperature anomalies; however, the relationship is neither linear nor spatially coincident: a large δ18Op signal does not always translate to a large temperature signal. These results suggest that isotope modeling can lead to enhanced synergy between climate models and climate proxy data. The model can relate proxy data to climate in a physically based way even when the relationship is complex and nonlocal. The δ18O-climate relationships, identified here from a GCM, could not be determined from transfer functions or simple models.

  5. On the assimilation of ice velocity and concentration data into large-scale sea ice models

    Directory of Open Access Journals (Sweden)

    V. Dulière

    2007-03-01

    Full Text Available Data assimilation into sea ice models designed for climate studies started about 15 years ago. In most of the studies conducted so far, it is assumed that the improvement brought by the assimilation is straightforward. However, some studies suggest this might not be true. In order to elucidate this question and to find an appropriate way to further assimilate sea ice concentration and velocity observations into a global sea ice-ocean model, we analyze here results from a number of twin experiments (i.e. experiments in which the assimilated data are model outputs) carried out with a simplified model of the Arctic sea ice pack. Our objective is to determine to what degree the assimilation of ice velocity and/or concentration data improves the global performance of the model and, more specifically, reduces the error in the computed ice thickness. A simple optimal interpolation scheme is used, and outputs from a control run and from perturbed experiments without and with data assimilation are thoroughly compared. Our results indicate that, under certain conditions depending on the assimilation weights and the type of model error, the assimilation of ice velocity data enhances the model performance. The assimilation of ice concentration data can also help in improving the model behavior, but it has to be handled with care because of the strong connection between ice concentration and ice thickness.

    This study is a preliminary step towards the assimilation of real observational data into NEMO-LIM, a global sea ice-ocean model.
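
    To make the "simple optimal interpolation scheme" mentioned above concrete, the following minimal sketch applies a single optimal-interpolation update to a toy state vector (e.g. gridded ice concentration); the background and observation error covariances and the observation operator are illustrative assumptions, not values from the study.

        # Hypothetical one-step optimal interpolation (OI) update, x_a = x_b + K (y - H x_b).
        import numpy as np

        def oi_update(x_b, B, y, R, H):
            """Return the analysis state given background x_b, covariances B and R, observations y, operator H."""
            innovation = y - H @ x_b
            S = H @ B @ H.T + R                      # innovation covariance
            K = B @ H.T @ np.linalg.inv(S)           # gain: weights observations against the background
            return x_b + K @ innovation

        n_state, n_obs = 10, 4
        rng = np.random.default_rng(1)
        x_b = rng.uniform(0.0, 1.0, n_state)         # background ice concentration on a toy grid
        B = 0.02 * np.exp(-np.abs(np.subtract.outer(np.arange(n_state), np.arange(n_state))) / 3.0)
        H = np.zeros((n_obs, n_state))
        H[np.arange(n_obs), [1, 4, 6, 8]] = 1.0      # observe four of the grid points directly
        R = 0.01 * np.eye(n_obs)
        y = H @ x_b + rng.normal(scale=0.1, size=n_obs)

        x_a = oi_update(x_b, B, y, R, H)
        print(np.round(x_a, 3))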

  6. Open source large-scale high-resolution environmental modelling with GEMS

    Science.gov (United States)

    Baarsma, Rein; Alberti, Koko; Marra, Wouter; Karssenberg, Derek

    2016-04-01

    Many environmental, topographic and climate data sets are freely available at a global scale, creating opportunities to run environmental models for every location on Earth. Collection of the data necessary to do this and the consequent conversion into a useful format is very demanding however, not to mention the computational demand of a model itself. We developed GEMS (Global Environmental Modelling System), an online application to run environmental models on various scales directly in your browser and share the results with other researchers. GEMS is open-source and uses open-source platforms including Flask, Leaflet, GDAL, MapServer and the PCRaster-Python modelling framework to process spatio-temporal models in real time. With GEMS, users can write, run, and visualize the results of dynamic PCRaster-Python models in a browser. GEMS uses freely available global data to feed the models, and automatically converts the data to the relevant model extent and data format. Currently available data include the SRTM elevation model, a selection of monthly vegetation data from MODIS, land use classifications from GlobCover, historical climate data from WorldClim, HWSD soil information from WorldGrids, population density from SEDAC and near real-time weather forecasts, most with a ±100 m resolution. Furthermore, users can add other or their own datasets using a web coverage service or a custom data provider script. With easy access to a wide range of base datasets and without the data preparation that is usually necessary to run environmental models, building and running a model becomes a matter of hours. Furthermore, it is easy to share the resulting maps, timeseries data or model scenarios with other researchers through a web mapping service (WMS). GEMS can be used to provide open access to model results. Additionally, environmental models in GEMS can be employed by users with no extensive experience with writing code, which is for example valuable for using models
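
    For readers unfamiliar with the PCRaster-Python framework mentioned above, a dynamic model of the kind GEMS runs typically consists of an initial() and a dynamic() section; the fragment below is a generic, minimal sketch in that style (map names, time steps and the runoff logic are placeholders, and the exact class interface should be checked against the PCRaster documentation rather than taken from this record).

        # Minimal PCRaster-Python style dynamic model (illustrative; not taken from GEMS itself).
        from pcraster import *
        from pcraster.framework import DynamicModel, DynamicFramework

        class SimpleRunoff(DynamicModel):
            def __init__(self, clone_map):
                DynamicModel.__init__(self)
                setclone(clone_map)                       # defines the raster extent and resolution

            def initial(self):
                self.dem = self.readmap("dem")            # elevation map, e.g. derived from SRTM
                self.ldd = lddcreate(self.dem, 1e31, 1e31, 1e31, 1e31)   # local drain directions

            def dynamic(self):
                rain = timeinputscalar("rain.tss", 1)     # per-time-step forcing from a time series
                runoff = accuflux(self.ldd, rain)         # accumulate rainfall along the drainage network
                self.report(runoff, "runoff")             # write one map per time step

        model = SimpleRunoff("clone.map")
        DynamicFramework(model, lastTimeStep=365, firstTimestep=1).run()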

  7. Modelling of a large scale reactor for plasma deposition of silicon

    NARCIS (Netherlands)

    Nienhuis, G. J.; Goedheer, W.

    1999-01-01

    A 2D fluid model for RF discharges in a mixture of silane and hydrogen is applied to a cylindrically symmetric reactor with an electrode radius large compared to the electrode separation. In the model the electron kinetics are included by solving the two-term Boltzmann equation to obtain the electro

  8. Estimation of leaf area for large scale phenotyping and modeling of rose genotypes

    NARCIS (Netherlands)

    Gao, M.; Heijden, van der G.W.A.M.; Vos, J.; Eveleens, B.A.; Marcelis, L.F.M.

    2012-01-01

    Leaf area is a major parameter in many physiological and plant modeling studies. When we want to use physiological models in plant breeding, we need to measure the leaf area for a large number of genotypes. This requires a fast and non-destructive method. In this study, we investigated whether for c

  9. Large-scale parameter extraction in electrocardiology models through Born approximation

    Science.gov (United States)

    He, Yuan; Keyes, David E.

    2013-01-01

    One of the main objectives in electrocardiology is to extract physical properties of cardiac tissues from measured information on electrical activity of the heart. Mathematically, this is an inverse problem for reconstructing coefficients in electrocardiology models from partial knowledge of the solutions of the models. In this work, we consider such parameter extraction problems for two well-studied electrocardiology models: the bidomain model and the FitzHugh-Nagumo model. We propose a systematic reconstruction method based on the Born approximation of the original nonlinear inverse problem. We describe a two-step procedure that allows us to reconstruct not only perturbations of the unknowns, but also the backgrounds around which the linearization is performed. We show some numerical simulations under various conditions to demonstrate the performance of our method. We also introduce a parameterization strategy using eigenfunctions of the Laplacian operator to reduce the number of unknowns in the parameter extraction problem.
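
    The Born-approximation step described above can be summarised schematically as a linearisation of the nonlinear parameter-to-data map; the notation below is generic (it is not taken from the paper) and is only meant to show where the two-step background/perturbation reconstruction enters.

        % Forward map F sends the tissue coefficient m to the measured data d:
        d = F(m), \qquad m = m_{0} + \delta m .
        % First-order (Born-type) expansion about the background m_0:
        F(m_{0}+\delta m) \approx F(m_{0}) + F'(m_{0})\,\delta m ,
        % so the perturbation solves a linear problem driven by the residual data:
        F'(m_{0})\,\delta m \approx d - F(m_{0}) .

    In a two-step procedure of this kind one alternates between solving the linear problem for the perturbation and updating the background, so that both quantities are eventually recovered.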

  10. Large-scale parameter extraction in electrocardiology models through Born approximation

    KAUST Repository

    He, Yuan

    2012-12-04

    One of the main objectives in electrocardiology is to extract physical properties of cardiac tissues from measured information on electrical activity of the heart. Mathematically, this is an inverse problem for reconstructing coefficients in electrocardiology models from partial knowledge of the solutions of the models. In this work, we consider such parameter extraction problems for two well-studied electrocardiology models: the bidomain model and the FitzHugh-Nagumo model. We propose a systematic reconstruction method based on the Born approximation of the original nonlinear inverse problem. We describe a two-step procedure that allows us to reconstruct not only perturbations of the unknowns, but also the backgrounds around which the linearization is performed. We show some numerical simulations under various conditions to demonstrate the performance of our method. We also introduce a parameterization strategy using eigenfunctions of the Laplacian operator to reduce the number of unknowns in the parameter extraction problem. © 2013 IOP Publishing Ltd.

  11. Exploring large-scale phenomena in composite membranes through an efficient implicit-solvent model

    Science.gov (United States)

    Laradji, Mohamed; Kumar, P. B. Sunil; Spangler, Eric J.

    2016-07-01

    Several microscopic and mesoscale models have been introduced in the past to investigate various phenomena in lipid membranes. Most of these models account for the solvent explicitly. Since in a typical molecular dynamics simulation the majority of particles belong to the solvent, much of the computational effort in these simulations is devoted to calculating forces between solvent particles. To overcome this problem, several implicit-solvent mesoscale models for lipid membranes have been proposed during the last few years. In the present article, we review an efficient coarse-grained implicit-solvent model we introduced earlier for studies of lipid membranes. In this model, lipid molecules are coarse-grained into short semi-flexible chains of beads with soft interactions. Through molecular dynamics simulations, the model is used to investigate the thermal, structural and elastic properties of lipid membranes. We also review a few studies, based on this model, of the phase behavior of nanoscale liposomes, cytoskeleton-induced blebbing in lipid membranes, as well as nanoparticle wrapping and endocytosis by tensionless lipid membranes.

  12. Large-scale hydrologic and hydrodynamic modeling of the Amazon River basin

    Science.gov (United States)

    de Paiva, Rodrigo Cauduro Dias; Buarque, Diogo Costa; Collischonn, Walter; Bonnet, Marie-Paule; Frappart, Frédéric; Calmant, Stephane; Bulhões Mendes, Carlos André

    2013-03-01

    In this paper, a hydrologic/hydrodynamic modeling of the Amazon River basin is presented using the MGB-IPH model with a validation using remotely sensed observations. Moreover, the sources of model errors are investigated by means of the validation and sensitivity tests, and the physical functioning of the Amazon basin is also explored. The MGB-IPH is a physically based model resolving all land hydrological processes and here using a full 1-D river hydrodynamic module with a simple floodplain storage model. River-floodplain geometry parameters were extracted from the SRTM digital elevation model, and the model was forced using satellite-derived rainfall from TRMM 3B42. Model results agree with observed in situ daily river discharges and water levels and with three complementary satellite-based products: (1) water levels derived from ENVISAT altimetry data; (2) a global data set of monthly inundation extent; and (3) monthly terrestrial water storage (TWS) anomalies derived from the Gravity Recovery and Climate Experiment (GRACE) mission. However, the model is sensitive to precipitation forcing and river-floodplain parameters. Most of the errors occur in western regions, possibly due to the poor quality of the TRMM 3B42 rainfall data set in these mountainous and/or poorly monitored areas. In addition, uncertainty in river-floodplain geometry causes errors in simulated water levels and inundation extent, suggesting the need for improvement of parameter estimation methods. Finally, analyses of Amazon hydrological processes demonstrate that surface waters govern most of the Amazon TWS changes (56%), followed by soil water (27%) and ground water (8%). Moreover, floodplains play a major role in stream flow routing, although backwater effects are also important to delay and attenuate flood waves.

  13. Large-scale neural model validation of partial correlation analysis for effective connectivity investigation in functional MRI.

    Science.gov (United States)

    Marrelec, G; Kim, J; Doyon, J; Horwitz, B

    2009-03-01

    Recent studies of functional connectivity based upon blood oxygen level dependent functional magnetic resonance imaging have shown that this technique allows one to investigate large-scale functional brain networks. In a previous study, we advocated that data-driven measures of effective connectivity should be developed to bridge the gap between functional and effective connectivity. To attain this goal, we proposed a novel approach based on the partial correlation matrix. In this study, we further validate the use of partial correlation analysis by employing a large-scale, neurobiologically realistic neural network model to generate simulated data that we analyze with both structural equation modeling (SEM) and the partial correlation approach. Unlike real experimental data, where the interregional anatomical links are not necessarily known, the links between the nodes of the network model are fully specified, and thus provide a standard against which to judge the results of SEM and partial correlation analyses. Our results show that partial correlation analysis from the data alone exhibits patterns of effective connectivity that are similar to those found using SEM, and both are in agreement with respect to the underlying neuroarchitecture. Our findings thus provide a strong validation for the partial correlation method.
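
    For concreteness, the partial correlation between two regions given all the others can be read off from the inverse of the covariance (precision) matrix of the regional time series; the sketch below uses synthetic data and is only meant to show the computation, not the authors' pipeline.

        # Partial correlation matrix from the precision matrix: pi_ij = -P_ij / sqrt(P_ii * P_jj).
        import numpy as np

        def partial_correlation(timeseries):
            """timeseries: (n_samples, n_regions) array of regional BOLD-like signals."""
            precision = np.linalg.inv(np.cov(timeseries, rowvar=False))
            d = np.sqrt(np.diag(precision))
            pcorr = -precision / np.outer(d, d)
            np.fill_diagonal(pcorr, 1.0)
            return pcorr

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(500, 1))                                    # shared driving signal
        data = np.hstack([latent + 0.5 * rng.normal(size=(500, 1)) for _ in range(4)])
        print(np.round(partial_correlation(data), 2))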

  14. Advances of Model Order Reduction Research in Large-scale System Simulation

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Model Order Reduction (MOR) plays an increasingly important role in complex system simulation, design and control. For example, for large-size space structures, VLSI and MEMS (Micro-ElectroMechanical Systems) etc., in order to shorten the development cost, increase the system controlling accuracy and reduce the complexity of controllers, a reduced order model must be constructed. Even in Virtual Reality (VR), the simulation and display must be in real-time, so the model order must be red...

  15. Large-scale Modeling of the Greenland Ice Sheet on Long Timescales

    DEFF Research Database (Denmark)

    Solgaard, Anne Munck

    the steady-state response of the Greenland ice sheet to a warmer climate. The threshold of irreversible decay was found to lie between a temperature increase of 4-5 K relative to present day when basal sliding was neglected in the ice-sheet model. Introducing basal sliding into the ice-sheet model shifted...... and climate model is included shows, however, that a Föhn effect is activated and hereby increasing temperatures inland and inhibiting further ice-sheet expansion into the interior. This indicates that colder than present temperatures are needed in order for the ice sheet to regrow to the current geometry...

  16. A stringent restriction from the growth of large-scale structure on apparent acceleration in inhomogeneous cosmological models

    CERN Document Server

    Ishak, Mustapha; Troxel, M A

    2013-01-01

    Probes of cosmic expansion constitute the main basis for arguments to support or refute a possible apparent acceleration due to uneven dynamics in the universe as described by inhomogeneous cosmological models. We present in this Letter a separate argument based on results from the study of the growth rate of large-scale structure in the universe as modeled by the Szekeres inhomogeneous cosmological models. We use the models in all generality with no assumptions of spherical or axial symmetries. We find that Szekeres inhomogeneous models that fit well the observed expansion history fail to explain the observed late-time suppression of the growth of structure unless a cosmological constant is added to the dynamics.

  17. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.

  18. Incremental learning of Bayesian sensorimotor models: from low-level behaviours to large-scale structure of the environment

    Science.gov (United States)

    Diard, Julien; Gilet, Estelle; Simonin, Éva; Bessière, Pierre

    2010-12-01

    This paper concerns the incremental learning of hierarchies of representations of space in artificial or natural cognitive systems. We propose a mathematical formalism for defining space representations (Bayesian Maps) and modelling their interaction in hierarchies of representations (sensorimotor interaction operator). We illustrate our formalism with a robotic experiment. Starting from a model based on the proximity to obstacles, we learn a new one related to the direction of the light source. It provides new behaviours, like phototaxis and photophobia. We then combine these two maps so as to identify parts of the environment where the way the two modalities interact is recognisable. This classification is a basis for learning a higher level of abstraction map that describes the large-scale structure of the environment. In the final model, the perception-action cycle is modelled by a hierarchy of sensorimotor models of increasing time and space scales, which provide navigation strategies of increasing complexities.

  19. A GIS Based Variable Source Area Model for Large-scale Basin Hydrology

    Directory of Open Access Journals (Sweden)

    Rajesh Vijaykumar Kherde

    2014-05-01

    Full Text Available A geographic information system-based rainfall-runoff model that simulates variable source area runoff using topographic features of the basin is presented. The model simulates the flow processes on a daily time step basis and has four non-linear stores, viz. an interception store, a soil moisture store, a channel store and a groundwater store. The source area fraction is modelled as a function of antecedent soil moisture, net rainfall and pore capacity raised to the power of the areal average topographic index. The source area fraction is used in conjunction with the topographic index to develop linear relations for runoff, infiltration and interflow. An exponential relation is developed for lower zone evapotranspiration, and non-linear exponential relations are proposed to model macropore flow and base flow.

  20. The Sum of the Parts: Large-scale Modeling in Systems Biology

    DEFF Research Database (Denmark)

    Fridolin, Gross; Green, Sara

    2017-01-01

    these questions, we distinguish between two types of reductionism, namely 'modular reductionism' and 'bottom-up reductionism'. Much knowledge in molecular biology has been gained by decomposing living systems into functional modules or through detailed studies of molecular processes. We ask whether systems...... biology provides novel ways to recompose these findings in the context of the system as a whole via computational simulations. As an example of computational integration of modules, we analyze the first whole-cell model of the bacterium M. genitalium. Secondly, we examine the attempt to recompose...... processes across different spatial scales via multi-scale cardiac models. Although these models also rely on a number of idealizations and simplifying assumptions, we argue that they provide insight into the limitations of reductionist approaches. Whole-cell models can be used to discover properties arising...

  1. A Single Nucleotide Resolution Model for Large-Scale Simulations of Double Stranded DNA

    CERN Document Server

    Fosado, Y A G; Allan, J; Brackley, C; Henrich, O; Marenduzzo, D

    2016-01-01

    The computational modelling of DNA is becoming crucial in light of new advances in DNA nanotechnology, single-molecule experiments and in vivo DNA tampering. Here we present a mesoscopic model for double stranded DNA (dsDNA) at the single nucleotide level which retains the characteristic helical structure, while being able to simulate large molecules -- up to a million base pairs -- for time-scales which are relevant to physiological processes. This is made possible by an efficient and highly-parallelised implementation of the model which we discuss here. We compare the behaviour of our model with single molecule experiments where dsDNA is manipulated by external forces or torques. We also present some results on the kinetics of denaturation of linear DNA.

  2. Topology of large-scale structure in seeded hot dark matter models

    Science.gov (United States)

    Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.

    1992-01-01

    The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.

  3. How large-scale energy-environment models represent technology and technological change

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-01-01

    When selecting measures against global warming, it is important to consider how technological innovation is introduced into the models, and studies were made in this connection. An induced technical change model has to be an economy-wide model that represents the various incentives behind innovation: profits derived from cost functions, research-and-development production functions, and abstract profits from empirical estimates, as well as the dimensions in which technological change is assumed to progress. Under study at the Stanford Energy Modeling Forum is how to represent the various technological assumptions and developments needed to predict the cost of dealing with global warming. At the February 2001 meeting, 10 preliminary model scenarios were discussed. In one case, for instance, a carbon tax of $25/ton in 2010 is raised by $25 every decade, reaching $100/ton in 2040. Three working groups are engaged in the study of long-run economy/technology baseline scenarios, characterization of current and potential future technologies, and ways of modeling technological change. (NEDO)

  4. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    Science.gov (United States)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Due to their increasing occurrence rate and associated economic costs, fluvial floods are large-scale and cross-border phenomena that need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaption measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and local detail that influences their temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation due to a lack of detail in channel and floodplain geometry, and the representation of hydrologic processes influencing the surface water balance such as open water evaporation from inundated water and re-infiltration of water in river banks. To overcome these restrictions and to obtain a better understanding of flood propagation including its spatio-temporal variations at the large scale, yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The first computes surface water volumes which are routed by the latter, solving the full Saint-Venant equations. With DELFT3D FM being capable of representing the model domain as a flexible mesh, model accuracy is only improved at relevant locations (river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is thereby schematized by 2D floodplains, being derived from global data sets (HydroSHEDS and G3WBM, respectively). Since a previous study with 1way-coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to 2way-coupling to fully represent evaporation

  5. Large-Scale Modelling of the Environmentally-Driven Population Dynamics of Temperate Aedes albopictus (Skuse).

    Directory of Open Access Journals (Sweden)

    Kamil Erguler

    Full Text Available The Asian tiger mosquito, Aedes albopictus, is a highly invasive vector species. It is a proven vector of dengue and chikungunya viruses, with the potential to host a further 24 arboviruses. It has recently expanded its geographical range, threatening many countries in the Middle East, Mediterranean, Europe and North America. Here, we investigate the theoretical limitations of its range expansion by developing an environmentally-driven mathematical model of its population dynamics. We focus on the temperate strain of Ae. albopictus and compile a comprehensive literature-based database of physiological parameters. As a novel approach, we link its population dynamics to globally-available environmental datasets by performing inference on all parameters. We adopt a Bayesian approach using experimental data as prior knowledge and the surveillance dataset of Emilia-Romagna, Italy, as evidence. The model accounts for temperature, precipitation, human population density and photoperiod as the main environmental drivers, and, in addition, incorporates the mechanism of diapause and a simple breeding site model. The model demonstrates high predictive skill over the reference region and beyond, confirming most of the current reports of vector presence in Europe. One of the main hypotheses derived from the model is the survival of Ae. albopictus populations through harsh winter conditions. The model, constrained by the environmental datasets, requires that either diapausing eggs or adult vectors have increased cold resistance. The model also suggests that temperature and photoperiod control diapause initiation and termination differentially. We demonstrate that it is possible to account for unobserved properties and constraints, such as differences between laboratory and field conditions, to derive reliable inferences on the environmental dependence of Ae. albopictus populations.

  6. Large scale dynamics of the Persistent Turning Walker model of fish behavior

    CERN Document Server

    Degond, Pierre

    2007-01-01

    This paper considers a new model of individual displacement, based on fish motion, the so-called Persistent Turning Walker (PTW) model, which involves an Ornstein-Uhlenbeck process on the curvature of the particle trajectory. The goal is to show that its large time and space scale dynamics is of diffusive type, and to provide an analytic expression of the diffusion coefficient. Two methods are investigated. In the first one, we compute the large time asymptotics of the variance of the individual stochastic trajectories. The second method is based on a diffusion approximation of the kinetic formulation of these stochastic trajectories. The kinetic model is a Fokker-Planck type equation posed in an extended phase-space involving the curvature among the kinetic variables. We show that both methods lead to the same value of the diffusion constant. We present some numerical simulations to illustrate the theoretical results.
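
    A direct way to see the diffusive large-scale behaviour claimed above is to integrate the PTW stochastic equations numerically (constant speed, Ornstein-Uhlenbeck curvature) and estimate the diffusion coefficient from the spread of many trajectories; the parameter values below are arbitrary illustrations, not those of the paper.

        # Euler-Maruyama simulation of a Persistent Turning Walker (illustrative parameters).
        import numpy as np

        rng = np.random.default_rng(0)
        c, tau, sigma = 1.0, 1.0, 0.5          # speed, curvature relaxation time, curvature noise
        dt, n_steps, n_walkers = 0.01, 50_000, 500

        kappa = np.zeros(n_walkers)            # curvature of each trajectory
        theta = np.zeros(n_walkers)            # heading angle
        pos = np.zeros((n_walkers, 2))

        for _ in range(n_steps):
            kappa += -kappa / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_walkers)
            theta += c * kappa * dt            # heading turns at rate (speed x curvature)
            pos[:, 0] += c * np.cos(theta) * dt
            pos[:, 1] += c * np.sin(theta) * dt

        t = n_steps * dt
        D_est = np.mean(np.sum(pos ** 2, axis=1)) / (4.0 * t)   # in 2D, <|x|^2> ~ 4 D t at large time
        print("estimated diffusion coefficient:", round(D_est, 3))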

  7. Augmenting a Large-Scale Hydrology Model to Reproduce Groundwater Variability

    Science.gov (United States)

    Stampoulis, D.; Reager, J. T., II; Andreadis, K.; Famiglietti, J. S.

    2016-12-01

    To understand the influence of groundwater on terrestrial ecosystems and society, global assessment of groundwater temporal fluctuations is required. A water table was initialized in the Variable Infiltration Capacity (VIC) hydrologic model in a semi-realistic approach to account for groundwater variability. Global water table depth data derived from observations at nearly 2 million well sites compiled from government archives and published literature, as well as groundwater model simulations, were used to create a new soil layer of varying depth for each model grid cell. The new 4-layer version of VIC, hereafter named VIC-4L, was run with and without assimilating NASA's Gravity Recovery and Climate Experiment (GRACE) observations. The results were compared with simulations using the original VIC version (named VIC-3L) with GRACE assimilation, while all runs were compared with well data.

  8. The flow structure of pyroclastic density currents: evidence from particle models and large-scale experiments

    Science.gov (United States)

    Dellino, Pierfrancesco; Büttner, Ralf; Dioguardi, Fabio; Doronzo, Domenico Maria; La Volpe, Luigi; Mele, Daniela; Sonder, Ingo; Sulpizio, Roberto; Zimanowski, Bernd

    2010-05-01

    Pyroclastic flows are ground hugging, hot, gas-particle flows. They represent the most hazardous events of explosive volcanism, one striking example being the famous historical eruption of Pompeii (AD 79) at Vesuvius. Much of our knowledge on the mechanics of pyroclastic flows comes from theoretical models and numerical simulations. Valuable data are also stored in the geological record of past eruptions, i.e. the particles contained in pyroclastic deposits, but they are rarely used for quantifying the destructive potential of pyroclastic flows. In this paper, by means of experiments, we validate a model that is based on data from pyroclastic deposits. It allows the reconstruction of the current's fluid-dynamic behaviour. We show that our model results in likely values of dynamic pressure and particle volumetric concentration, and allows quantifying the hazard potential of pyroclastic flows.
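
    The "dynamic pressure" used above as a measure of destructive potential is, in the simplest description, the kinetic pressure of the dusty gas; the expressions below are the standard textbook forms, given here as a plausible reading of the approach rather than equations quoted from the paper.

        % Dynamic pressure of a gas-particle mixture moving at speed u:
        P_{\mathrm{dyn}} = \tfrac{1}{2}\,\rho_{\mathrm{mix}}\,u^{2},
        \qquad
        \rho_{\mathrm{mix}} = (1 - C)\,\rho_{\mathrm{gas}} + C\,\rho_{\mathrm{s}},
        % where C is the particle volumetric concentration and rho_s the solid density.

    Because the mixture density grows rapidly with particle concentration, even modest concentrations at moderate speeds can produce dynamic pressures capable of structural damage, which is why both quantities appear in the reconstruction described above.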

  9. Norway's 2011 Terror Attacks: Alleviating National Trauma With a Large-Scale Proactive Intervention Model.

    Science.gov (United States)

    Kärki, Freja Ulvestad

    2015-09-01

    After the terror attacks of July 22, 2011, Norwegian health authorities piloted a new model for municipality-based psychosocial follow-up with victims. This column describes the development of a comprehensive follow-up intervention by health authorities and others that has been implemented at the municipality level across Norway. The model's principles emphasize proactivity by service providers; individually tailored help, with each victim being assigned a contact person in the residential municipality; continuity and long-term focus; effective intersectorial collaboration; and standardized screening of symptoms during the first year. Weekend reunions were also organized for the bereaved, and one-day reunions were organized for the survivors and their families at intervals over the first 18 months. Preliminary findings indicate a high level of success in model implementation. However, the overall effect of the interventions will be a subject for future evaluations.

  10. A Large-Scale Multibody Manipulator Soft Sensor Model and Experiment Validation

    Directory of Open Access Journals (Sweden)

    Wu Ren

    2014-01-01

    Full Text Available The stress signal is difficult to obtain in the health monitoring of a multibody manipulator. In order to solve this problem, a soft sensor method is presented. In the method, the stress signal is considered as the dominant variable and the angle signal as the auxiliary variable. By establishing the mathematical relationship between them, a soft sensor model is proposed. In the model, the stress information can be deduced from angle information, which can easily be measured for such structures by experiments. Finally, tests of ground and wall working conditions are done on a multibody manipulator test rig. The results show that the stress calculated by the proposed method is close to the measured one. Thus, the stress signal is easier to obtain than with the traditional method. All of these prove that the model is correct and the method is feasible.

  11. Segmented linear modeling of CHO fed‐batch culture and its application to large scale production

    Science.gov (United States)

    Ben Yahia, Bassem; Gourevitch, Boris; Malphettes, Laetitia

    2016-01-01

    ABSTRACT We describe a systematic approach to model CHO metabolism during biopharmaceutical production across a wide range of cell culture conditions. To this end, we applied the metabolic steady state concept. We analyzed and modeled the production rates of metabolites as a function of the specific growth rate. First, the total number of metabolic steady state phases and the location of the breakpoints were determined by recursive partitioning. For this, the smoothed derivative of the metabolic rates with respect to the growth rate was used, followed by hierarchical clustering of the obtained partition. We then applied a piecewise regression to the metabolic rates with the previously determined number of phases. This allowed identifying the growth rates at which the cells underwent a metabolic shift. The resulting model, with piecewise linear relationships between metabolic rates and the growth rate, described cellular metabolism in the fed-batch cultures well. Using the model structure and parameter values from a small-scale cell culture (2 L) training dataset, it was possible to predict metabolic rates of new fed-batch cultures just using the experimental specific growth rates. Such prediction was successful both at the laboratory scale with 2 L bioreactors and at the production scale of 2000 L. This type of modeling provides a flexible framework to set a solid foundation for metabolic flux analysis and mechanistic types of modeling. Biotechnol. Bioeng. 2017;114: 785–797. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:27869296

  12. Segmented linear modeling of CHO fed-batch culture and its application to large scale production.

    Science.gov (United States)

    Ben Yahia, Bassem; Gourevitch, Boris; Malphettes, Laetitia; Heinzle, Elmar

    2017-04-01

    We describe a systematic approach to model CHO metabolism during biopharmaceutical production across a wide range of cell culture conditions. To this end, we applied the metabolic steady state concept. We analyzed and modeled the production rates of metabolites as a function of the specific growth rate. First, the total number of metabolic steady state phases and the location of the breakpoints were determined by recursive partitioning. For this, the smoothed derivative of the metabolic rates with respect to the growth rate was used, followed by hierarchical clustering of the obtained partition. We then applied a piecewise regression to the metabolic rates with the previously determined number of phases. This allowed identifying the growth rates at which the cells underwent a metabolic shift. The resulting model, with piecewise linear relationships between metabolic rates and the growth rate, described cellular metabolism in the fed-batch cultures well. Using the model structure and parameter values from a small-scale cell culture (2 L) training dataset, it was possible to predict metabolic rates of new fed-batch cultures just using the experimental specific growth rates. Such prediction was successful both at the laboratory scale with 2 L bioreactors and at the production scale of 2000 L. This type of modeling provides a flexible framework to set a solid foundation for metabolic flux analysis and mechanistic types of modeling. Biotechnol. Bioeng. 2017;114: 785-797. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.
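
    The piecewise (segmented) regression step can be written compactly as an ordinary least-squares fit on a hinge basis once the breakpoints are known; the following sketch is a generic illustration with made-up growth rates and metabolic rates, not the authors' dataset or code.

        # Continuous piecewise-linear fit of a metabolic rate q against the specific growth rate mu.
        import numpy as np

        def hinge_design(mu, breakpoints):
            """Columns: intercept, mu, and one hinge term max(mu - b, 0) per breakpoint."""
            cols = [np.ones_like(mu), mu] + [np.maximum(mu - b, 0.0) for b in breakpoints]
            return np.column_stack(cols)

        rng = np.random.default_rng(3)
        mu = np.sort(rng.uniform(0.0, 0.04, 80))                                  # synthetic growth rates (1/h)
        true_q = np.where(mu < 0.02, 2.0 + 50 * mu, 3.0 + 100 * (mu - 0.02))      # slope change at mu = 0.02
        q = true_q + rng.normal(scale=0.1, size=mu.size)

        breakpoints = [0.02]                        # assumed known, e.g. from recursive partitioning
        beta, *_ = np.linalg.lstsq(hinge_design(mu, breakpoints), q, rcond=None)
        q_hat = hinge_design(mu, breakpoints) @ beta
        print("slope change at the breakpoint:", round(beta[2], 1))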

  13. The Sum of the Parts: Large-scale Modeling in Systems Biology

    DEFF Research Database (Denmark)

    Fridolin, Gross; Green, Sara

    2017-01-01

    Systems biologists often distance themselves from reductionist approaches and formulate their aim as understanding living systems “as a whole”. Yet, it is often unclear what kind of reductionism they have in mind, and in what sense their methodologies offer a more comprehensive approach. To address...... at the interface of dynamically coupled processes within a biological system, thereby making more apparent what is lost through decomposition. Similarly, multi-scale modeling highlights the importance of macroscale parameters and models and challenges the view that living systems can be understood “bottom...

  14. Modelling and operation strategies of DLR's large scale thermocline test facility (TESIS)

    Science.gov (United States)

    Odenthal, Christian; Breidenbach, Nils; Bauer, Thomas

    2017-06-01

    In this work an overview of the TESIS:store thermocline test facility and its current construction status is given. Based on this, the TESIS:store facility using sensible solid filler material is modelled with a fully transient model implemented in MATLAB®. Results in terms of the impact of filler size and operation strategies are presented. While low porosity and small particle diameters for the filler material are beneficial, the operation strategy is a key element with potential for optimization. It is shown that plant operators have to weigh utilization against exergetic efficiency. Different durations of the charging and discharging periods offer further potential for optimization.

  15. Large scale 2D numerical modelling of reservoirs sedimentation and flushing operations

    OpenAIRE

    Dewals, Benjamin; Erpicum, Sébastien; Archambeau, Pierre; Detrembleur, Sylvain; Fraikin, Catherine; Pirotton, Michel

    2004-01-01

    The quasi-3D flow solver WOLF has been developed at the University of Liege for almost a decade. It has been used to carry out the simulation of silting processes in large reservoirs and to predict the efficiency of flushing operations. Besides briefly depicting the mathematical and numerical model, the present paper demonstrates its applicability on the case of a large hydropower project in India. The silting process of the reservoir has been simulated by means of the quasi-3D flow model wit...

  16. Large Scale Tissue Morphogenesis Simulation on Heterogenous Systems Based on a Flexible Biomechanical Cell Model.

    Science.gov (United States)

    Jeannin-Girardon, Anne; Ballet, Pascal; Rodin, Vincent

    2015-01-01

    The complexity of biological tissue morphogenesis makes in silico simulations of such systems very interesting in order to gain a better understanding of the underlying mechanisms ruling the development of multicellular tissues. This complexity is mainly due to two elements: firstly, biological tissues comprise a large number of cells; secondly, these cells exhibit complex interactions and behaviors. To address these two issues, we propose two tools. The first one is a virtual cell model that comprises two main elements: firstly, a mechanical structure (membrane, cytoskeleton, and cortex) and secondly, the main behaviors exhibited by biological cells, i.e., mitosis, growth, differentiation, molecule consumption and production, as well as the physical constraints imposed by the environment. An artificial chemistry is also included in the model. This virtual cell model is coupled to an agent-based formalism. The second tool is a simulator that relies on the OpenCL framework. It allows efficient parallel simulations on heterogeneous devices such as micro-processors or graphics processors. We present two case studies validating the implementation of our model in our simulator: cellular proliferation controlled by cell signalling and limb growth in a virtual organism.

  17. fast_protein_cluster: parallel and optimized clustering of large-scale protein modeling data.

    Science.gov (United States)

    Hung, Ling-Hong; Samudrala, Ram

    2014-06-15

    fast_protein_cluster is a fast, parallel and memory efficient package used to cluster 60 000 sets of protein models (with up to 550 000 models per set) generated by the Nutritious Rice for the World project. fast_protein_cluster is an optimized and extensible toolkit that supports Root Mean Square Deviation after optimal superposition (RMSD) and Template Modeling score (TM-score) as metrics. RMSD calculations using a laptop CPU are 60× faster than qcprot and 3× faster than current graphics processing unit (GPU) implementations. New GPU code further increases the speed of RMSD and TM-score calculations. fast_protein_cluster provides novel k-means and hierarchical clustering methods that are up to 250× and 2000× faster, respectively, than Clusco, and identify significantly more accurate models than Spicker and Clusco. fast_protein_cluster is written in C++ using OpenMP for multi-threading support. Custom streaming Single Instruction Multiple Data (SIMD) extensions and advanced vector extension intrinsics code accelerate CPU calculations, and OpenCL kernels support AMD and Nvidia GPUs. fast_protein_cluster is available under the M.I.T. license. (http://software.compbio.washington.edu/fast_protein_cluster) © The Author 2014. Published by Oxford University Press.
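
    The core operation behind the RMSD metric mentioned above, superposition via the Kabsch algorithm followed by the root-mean-square deviation, can be written in a few lines of NumPy; this is a generic reference implementation and deliberately ignores the SIMD/OpenCL optimisations that make fast_protein_cluster fast.

        # RMSD after optimal superposition (Kabsch algorithm) for two matched coordinate sets.
        import numpy as np

        def rmsd_after_superposition(P, Q):
            """P, Q: (n_atoms, 3) arrays of corresponding coordinates (e.g. CA atoms)."""
            P = P - P.mean(axis=0)
            Q = Q - Q.mean(axis=0)
            U, _, Vt = np.linalg.svd(P.T @ Q)
            d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against improper rotations (reflections)
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            diff = P @ R.T - Q
            return np.sqrt((diff ** 2).sum() / len(P))

        rng = np.random.default_rng(7)
        model_a = rng.normal(size=(100, 3))
        angle = 0.3
        rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                        [np.sin(angle),  np.cos(angle), 0.0],
                        [0.0, 0.0, 1.0]])
        model_b = model_a @ rot.T + rng.normal(scale=0.2, size=(100, 3))   # rotated, noisy copy
        print("RMSD:", round(rmsd_after_superposition(model_a, model_b), 3))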

  18. Large-scale shell-model study of the Sn isotopes

    Directory of Open Access Journals (Sweden)

    Osnes Eivind

    2015-01-01

    Full Text Available We summarize the results of an extensive study of the structure of the Sn isotopes using a large shell-model space and effective interactions evaluated from realistic two-nucleon potentials. For a fuller account, see ref. [1].

  19. A balanced water layer concept for subglacial hydrology in large-scale ice sheet models

    Directory of Open Access Journals (Sweden)

    S. Goeller

    2013-07-01

    Full Text Available There is currently no doubt about the existence of a widespread hydrological network under the Antarctic Ice Sheet, which lubricates the ice base and thus leads to increased ice velocities. Consequently, ice models should incorporate basal hydrology to obtain meaningful results for future ice dynamics and their contribution to global sea level rise. Here, we introduce the balanced water layer concept, covering two prominent subglacial hydrological features for ice sheet modeling on a continental scale: the evolution of subglacial lakes and balance water fluxes. We couple it to the thermomechanical ice-flow model RIMBAY and apply it to a synthetic model domain. In our experiments we demonstrate the dynamic generation of subglacial lakes and their impact on the velocity field of the overlaying ice sheet, resulting in a negative ice mass balance. Furthermore, we introduce an elementary parametrization of the water flux–basal sliding coupling and reveal the predominance of the ice loss through the resulting ice streams against the stabilizing influence of less hydrologically active areas. We point out that established balance flux schemes quantify these effects only partially as their ability to store subglacial water is lacking.

  20. Breach modelling by overflow with TELEMAC 2D: Comparison with large-scale experiments

    Science.gov (United States)

    An erosion law has been implemented in TELEMAC 2D to represent the surface erosion process to model the breach formation of a levee. We focus on homogeneous and earth fill levee to simplify this first implementation. The first part of this study reveals the ability of this method to represent simu...

  1. A balanced water layer concept for subglacial hydrology in large scale ice sheet models

    Science.gov (United States)

    Goeller, S.; Thoma, M.; Grosfeld, K.; Miller, H.

    2012-12-01

    There is currently no doubt about the existence of a wide-spread hydrological network under the Antarctic ice sheet, which lubricates the ice base and thus leads to increased ice velocities. Consequently, ice models should incorporate basal hydrology to obtain meaningful results for future ice dynamics and their contribution to global sea level rise. Here, we introduce the balanced water layer concept, covering two prominent subglacial hydrological features for ice sheet modeling on a continental scale: the evolution of subglacial lakes and balance water fluxes. We couple it to the thermomechanical ice-flow model RIMBAY and apply it to a synthetic model domain inspired by the Gamburtsev Mountains, Antarctica. In our experiments we demonstrate the dynamic generation of subglacial lakes and their impact on the velocity field of the overlaying ice sheet, resulting in a negative ice mass balance. Furthermore, we introduce an elementary parametrization of the water flux-basal sliding coupling and reveal the predominance of the ice loss through the resulting ice streams against the stabilizing influence of less hydrologically active areas. We point out, that established balance flux schemes quantify these effects only partially as their ability to store subglacial water is lacking.

  2. Open source large-scale high-resolution environmental modelling with GEMS

    NARCIS (Netherlands)

    Baarsma, R.J.; Alberti, K.; Marra, W.A.; Karssenberg, D.J.

    2016-01-01

    Many environmental, topographic and climate data sets are freely available at a global scale, creating the opportunities to run environmental models for every location on Earth. Collection of the data necessary to do this and the consequent conversion into a useful format is very demanding however,

  4. Mathematical modelling of stability of closing slopes in large-scale surface coal mines

    Energy Technology Data Exchange (ETDEWEB)

    Kloss, K. (Stavebni Geologie, Prague (Czechoslovakia))

    1990-05-01

    Describes methods of modelling stability of slopes of the Krusne Hory mountains in North Bohemian brown coal mines using the finite element method and a large IBM computer, with output on a Digigraph plotter. Briefly discusses results for the Merkur, Jansky and Jiretin mines, illustrating their geological profiles with diagrams of finite element networks. 4 refs.

  5. Modeling resilience, friability, and cost of an airport affected by the large-scale disruptive event

    NARCIS (Netherlands)

    Janic, M.

    2013-01-01

    This paper deals with modeling the resilience, friability, and cost of an airport affected by a large-scale disruptive event. These events affecting the airport's operations individually or in combination can be bad weather, failures of particular crucial airport and ATC (Air Traffic Control) component

  6. A large-scale multi-species spatial depletion model for overwintering waterfowl

    NARCIS (Netherlands)

    Baveco, J.M.; Kuipers, H.; Nolet, B.A.

    2011-01-01

    In this paper, we develop a model to evaluate the capacity of accommodation areas for overwintering waterfowl, at a large spatial scale. Each day geese are distributed over roosting sites. Based on the energy minimization principle, the birds daily decide which surrounding fields to exploit within t

  7. Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets

    Science.gov (United States)

    Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad

    2017-01-01

    Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…

  8. Toward an Aspirational Learning Model Gleaned from Large-Scale Assessment

    Science.gov (United States)

    Diket, Read M.; Xu, Lihua; Brewer, Thomas M.

    2014-01-01

    The aspirational model resulted from the authors' secondary analysis of the Mother/Child (M/C) test block from the 2008 National Assessment of Educational Progress restricted data that examined the responses of the national sample of 8th-grade students (n = 1648). This test block presented no artmaking task and consisted of the same 13 questions…

  9. A large-scale multi-species spatial depletion model for overwintering waterfowl

    NARCIS (Netherlands)

    Baveco, J.M.; Kuipers, H.; Nolet, B.A.

    2011-01-01

    In this paper, we develop a model to evaluate the capacity of accommodation areas for overwintering waterfowl, at a large spatial scale. Each day geese are distributed over roosting sites. Based on the energy minimization principle, the birds daily decide which surrounding fields to exploit within

  10. Contribution Of The SWOT Mission To Large-Scale Hydrological Modeling Using Data Assimilation

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Rochoux, M. C.; Garambois, P. A.; Paris, A.; Calmant, S.

    2016-12-01

    The purpose of this work is to improve the estimation of water fluxes on continental surfaces, at interannual and interseasonal scales (from a few years to a decennial time period). More specifically, it studies the contribution of the upcoming SWOT satellite mission to improving hydrological modeling at the global scale, using the land surface model ISBA-TRIP. This model corresponds to the continental component of the climate model of the CNRM (the French meteorological research center). This study explores the potential of satellite data to correct either input parameters of the river routing scheme TRIP or its state variables. To do so, a data assimilation platform (using an Ensemble Kalman Filter, EnKF) has been implemented to assimilate SWOT virtual observations as well as discharges estimated from real nadir altimetry data. A series of twin experiments is used to test and validate the parameter estimation module of the platform. SWOT virtual observations of water heights along SWOT tracks (with a 10 cm white noise model error) are assimilated to correct the river routing model parameters. To begin with, we chose to focus exclusively on the river Manning coefficient, with the possibility of easily extending to other parameters such as river widths. First results show that the platform is able to recover the "true" Manning distribution by assimilating SWOT-like water heights. The error on the coefficients goes from 35 % before assimilation to 9 % after four SWOT orbit repeat periods of 21 days. In the state estimation mode, daily assimilation cycles are performed to correct the initial state of the TRIP river water storage by assimilating ENVISAT-based discharge. Those observations are derived from ENVISAT water elevation measurements, using rating curves from the MGB-IPH hydrological model (calibrated over the Amazon using in situ gauge discharges). Using this kind of observation allows going beyond idealized twin experiments and also testing the contribution of a remotely sensed discharge product, which could
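
    As a minimal illustration of the parameter-estimation step described above, the sketch below applies one ensemble Kalman update to an ensemble of Manning coefficients using a synthetic water-height observation. The linear observation operator, noise levels and numbers are invented stand-ins, not ISBA-TRIP or real SWOT data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def water_height(manning):
        # Hypothetical observation operator: higher roughness -> higher stage.
        return 2.0 + 8.0 * manning

    n_true = 0.035                                          # "truth" used to generate synthetic data
    obs = water_height(n_true) + rng.normal(0.0, 0.10)      # 10 cm white-noise observation
    obs_err = 0.10

    ens = rng.normal(0.05, 0.015, size=50)                  # prior ensemble of Manning coefficients
    hx = water_height(ens)                                  # predicted observations per member

    # Ensemble Kalman update with perturbed observations
    cov_xy = np.cov(ens, hx)[0, 1]
    var_y = np.var(hx, ddof=1) + obs_err**2
    gain = cov_xy / var_y
    perturbed_obs = obs + rng.normal(0.0, obs_err, size=ens.size)
    analysis = ens + gain * (perturbed_obs - hx)

    print(f"prior mean n = {ens.mean():.4f}, analysis mean n = {analysis.mean():.4f}")
    ```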

  11. Constructing Model of Relationship among Behaviors and Injuries to Products Based on Large Scale Text Data on Injuries

    Science.gov (United States)

    Nomori, Koji; Kitamura, Koji; Motomura, Yoichi; Nishida, Yoshifumi; Yamanaka, Tatsuhiro; Komatsubara, Akinori

    In Japan, childhood injury prevention is an urgent issue. Safety measures based on knowledge created from injury data are essential for preventing childhood injuries. The injury prevention approach of product modification is especially important. Risk assessment is one of the most fundamental methods for designing safe products. Conventional risk assessment has been carried out subjectively because product makers have poor data on injuries. This paper deals with evidence-based risk assessment, in which artificial intelligence technologies are strongly needed. This paper describes a new method of foreseeing the usage of products, which is the first step of evidence-based risk assessment, and presents a retrieval system for injury data. The system enables a product designer to foresee how children use a product and which types of injuries occur due to the product in daily environments. The developed system consists of large-scale injury data, text mining technology and probabilistic modeling technology. Large-scale text data on childhood injuries was collected from medical institutions by an injury surveillance system. Types of behaviors toward a product were derived from the injury text data using text mining technology. The relationship among products, types of behaviors, types of injuries and characteristics of children was modeled by a Bayesian network. The fundamental functions of the developed system and examples of new findings obtained with the system are reported in this paper.
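
    The kind of conditional relationship the Bayesian network encodes can be illustrated by estimating P(injury type | product, behavior) from counted records. The three records below are invented; the actual system also conditions on child characteristics and uses proper Bayesian network inference rather than raw counts.

    ```python
    from collections import Counter, defaultdict

    records = [
        {"product": "chair", "behaviour": "climbing", "injury": "fall"},
        {"product": "chair", "behaviour": "climbing", "injury": "fall"},
        {"product": "chair", "behaviour": "sitting",  "injury": "pinch"},
    ]

    # Count injuries per (product, behaviour) pair.
    counts = defaultdict(Counter)
    for r in records:
        counts[(r["product"], r["behaviour"])][r["injury"]] += 1

    def p_injury(injury, product, behaviour):
        # Empirical conditional probability estimated from the counts.
        c = counts[(product, behaviour)]
        return c[injury] / sum(c.values()) if c else 0.0

    print(p_injury("fall", "chair", "climbing"))   # -> 1.0 for this toy data
    ```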

  12. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular but simple approaches to model network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is combined with the Recurrent Neural Network. Cuckoo Search is used to find the best combination of regulators, while the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. The proposed method does, however, sacrifice computational time in both cases due to the hybrid optimization process.
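
    A sketch of the recurrent-neural-network formalism commonly used for gene regulatory networks is given below: each gene's next expression level depends on a sigmoid of the weighted expression of its regulators, and a metaheuristic (Cuckoo Search / Flower Pollination in the paper) would search the weights that minimise the prediction error. All parameter values here are illustrative.

    ```python
    import numpy as np

    def rnn_step(x, w, beta, tau, dt=1.0):
        # x: expression levels (n_genes,), w: regulatory weights (n_genes, n_genes),
        # beta: biases, tau: decay constants.
        drive = 1.0 / (1.0 + np.exp(-(w @ x + beta)))
        return x + dt * (drive - x) / tau

    def fitness(w, beta, tau, series):
        # Mean squared one-step-ahead prediction error over a time series,
        # the quantity a metaheuristic would minimise.
        err = 0.0
        for t in range(len(series) - 1):
            err += np.sum((rnn_step(series[t], w, beta, tau) - series[t + 1]) ** 2)
        return err / (len(series) - 1)

    rng = np.random.default_rng(0)
    n = 4
    w_true = rng.normal(scale=0.5, size=(n, n))
    beta, tau = np.zeros(n), np.ones(n)
    series = [rng.random(n)]
    for _ in range(20):
        series.append(rnn_step(series[-1], w_true, beta, tau))
    print(f"fitness of true weights: {fitness(w_true, beta, tau, series):.3e}")  # ~0
    ```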

  13. HYPERstream: a multi-scale framework for streamflow routing in large-scale hydrological model

    Science.gov (United States)

    Piccolroaz, Sebastiano; Di Lazzaro, Michele; Zarlenga, Antonio; Majone, Bruno; Bellin, Alberto; Fiori, Aldo

    2016-05-01

    We present HYPERstream, an innovative streamflow routing scheme based on the width function instantaneous unit hydrograph (WFIUH) theory, which is specifically designed to facilitate coupling with weather forecasting and climate models. The proposed routing scheme preserves geomorphological dispersion of the river network when dealing with horizontal hydrological fluxes, irrespective of the computational grid size inherited from the overlaying climate model providing the meteorological forcing. This is achieved by simulating routing within the river network through suitable transfer functions obtained by applying the WFIUH theory to the desired level of detail. The underlying principle is similar to the block-effective dispersion employed in groundwater hydrology, with the transfer functions used to represent the effect on streamflow of morphological heterogeneity at scales smaller than the computational grid. Transfer functions are constructed for each grid cell with respect to the nodes of the network where streamflow is simulated, by taking advantage of the detailed morphological information contained in the digital elevation model (DEM) of the zone of interest. These characteristics make HYPERstream well suited for multi-scale applications, ranging from catchment up to continental scale, and to investigate extreme events (e.g., floods) that require an accurate description of routing through the river network. The routing scheme enjoys parsimony in the adopted parametrization and computational efficiency, leading to a dramatic reduction of the computational effort with respect to full-gridded models at comparable level of accuracy. HYPERstream is designed with a simple and flexible modular structure that allows for the selection of any rainfall-runoff model to be coupled with the routing scheme and the choice of different hillslope processes to be represented, and it makes the framework particularly suitable to massive parallelization, customization according to
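
    The transfer-function idea can be sketched as a convolution: each grid cell's runoff is convolved with a cell-to-node unit hydrograph and the contributions are summed at the network node. The gamma-shaped kernels below merely stand in for WFIUH transfer functions derived from the DEM; they are not part of HYPERstream.

    ```python
    import numpy as np

    def gamma_kernel(n_steps, mean_lag, shape=3.0):
        # Synthetic unit hydrograph with unit volume.
        t = np.arange(1, n_steps + 1, dtype=float)
        k = t ** (shape - 1) * np.exp(-shape * t / mean_lag)
        return k / k.sum()

    def route_to_node(runoff_by_cell, kernels):
        # runoff_by_cell: dict cell -> (T,) runoff series
        # kernels: dict cell -> transfer function toward the node
        T = len(next(iter(runoff_by_cell.values())))
        q = np.zeros(T)
        for cell, r in runoff_by_cell.items():
            q += np.convolve(r, kernels[cell])[:T]
        return q

    runoff = {"cell_a": np.r_[np.zeros(2), 5.0, np.zeros(21)],
              "cell_b": np.r_[np.zeros(4), 3.0, np.zeros(19)]}
    kern = {"cell_a": gamma_kernel(24, mean_lag=4.0),
            "cell_b": gamma_kernel(24, mean_lag=8.0)}
    print(route_to_node(runoff, kern).round(3))
    ```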

  14. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.Graphical abstract.

  15. A Nonlinear Multiobjective Bilevel Model for Minimum Cost Network Flow Problem in a Large-Scale Construction Project

    Directory of Open Access Journals (Sweden)

    Jiuping Xu

    2012-01-01

    The aim of this study is to deal with a minimum cost network flow problem (MCNFP) in a large-scale construction project using a nonlinear multiobjective bilevel model with birandom variables. The main target of the upper level is to minimize both direct and transportation time costs. The target of the lower level is to minimize transportation costs. After an analysis of the birandom variables, an expectation multiobjective bilevel programming model with chance constraints is formulated to incorporate decision makers’ preferences. To solve the identified special conditions, an equivalent crisp model is proposed, with an additional multiobjective bilevel particle swarm optimization (MOBLPSO) developed to solve the model. The Shuibuya Hydropower Project is used as a real-world example to verify the proposed approach. Results and analysis are presented to highlight the performances of the MOBLPSO, which is very effective and efficient compared to a genetic algorithm and a simulated annealing algorithm.
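
    For readers unfamiliar with particle swarm optimization, the bare-bones single-objective update rule is shown below on a placeholder quadratic objective; the MOBLPSO of the paper extends this basic scheme to a multiobjective, bilevel setting.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def objective(x):
        # Placeholder for e.g. a lower-level transportation-cost function.
        return np.sum((x - 3.0) ** 2, axis=-1)

    n_particles, dim, iters = 30, 4, 200
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients

    x = rng.uniform(-10, 10, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), objective(x)
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = objective(x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print(gbest.round(3))   # should approach [3, 3, 3, 3]
    ```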

  16. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Guentner, A.

    2002-09-01

    Semi-arid areas are characterized by small water resources. An increasing water demand due to population growth and economic development as well as a possible decreasing water availability in the course of climate change may aggravate water scarcity in future in these areas. The quantitative assessment of the water resources is a prerequisite for the development of sustainable measures of water management. For this task, hydrological models within a dynamic integrated framework are indispensable tools. The main objective of this study is to develop a hydrological model for the quantification of water availability over a large geographic domain of semi-arid environments. The study area is the Federal State of Ceara in the semi-arid north-east of Brazil. Surface water from reservoirs provides the largest part of water supply. The area has recurrently been affected by droughts which caused serious economic losses and social impacts like migration from the rural regions. (orig.)

  17. The replication domain model: regulating replicon firing in the context of large-scale chromosome architecture.

    Science.gov (United States)

    Pope, Benjamin D; Gilbert, David M

    2013-11-29

    The "Replicon Theory" of Jacob, Brenner, and Cuzin has reliably served as the paradigm for regulating the sites where individual replicons initiate replication. Concurrent with the replicon model was Taylor's demonstration that plant and animal chromosomes replicate segmentally in a defined temporal sequence, via cytologically defined units too large to be accounted for by a single replicon. Instead, there seemed to be a program to choreograph when chromosome units replicate during S phase, executed by initiation at clusters of individual replicons within each segment. Here, we summarize recent molecular evidence for the existence of such units, now known as "replication domains", and discuss how the organization of large chromosomes into structural units has added additional layers of regulation to the original replicon model.

  18. Large-scale modeling provides insights into Arabidopsis's acclimation to changing light and temperature conditions.

    Science.gov (United States)

    Töpfer, Nadine; Niokoloski, Zoran

    2013-09-01

    Classical flux balance analysis predicts steady-state flux distributions that maximize a given objective function. A recent study, Schuetz et al., (1) demonstrated that competing objectives constrain the metabolic fluxes in E. coli. For plants, with multiple cell types, fulfilling different functions, the objectives remain elusive and, therefore, hinder the prediction of actual fluxes, particularly for changing environments. In our study, we presented a novel approach to predict flux capacities for a large collection of metabolic pathways under eight different temperature and light conditions. (2) By integrating time-series transcriptomics data to constrain the flux boundaries of the metabolic model, we captured the time- and condition-specific state of the network. Although based on a single time-series experiment, the comparison of these capacities to a novel null model for transcript distribution allowed us to define a measure for differential behavior that accounts for the underlying network structure and the complex interplay of metabolic pathways.

  19. Interacting dark energy models in Cosmology and large-scale structure observational tests

    CERN Document Server

    Marcondes, Rafael J F

    2016-01-01

    Modern Cosmology offers us a great understanding of the universe with striking precision, made possible by the modern technologies of the newest generations of telescopes. The standard cosmological model, however, is not absent of theoretical problems and open questions. One possibility that has been put forward is the existence of a coupling between dark sectors. The idea of an interaction between the dark components could help physicists understand why we live in an epoch of the universe where dark matter and dark energy are comparable in terms of energy density, which can be regarded as a coincidence given that their time evolutions are completely different. We introduce the interaction phenomenologically and proceed to test models of interaction with observations of redshift-space distortions. In a flat universe composed only of those two fluids, we consider separately two forms of interaction, through terms proportional to the densities of both dark energy and dark matter. An analytic expression for the ...

  20. Modeling and Flocking Consensus Analysis for Large-Scale UAV Swarms

    Directory of Open Access Journals (Sweden)

    Li Bing

    2013-01-01

    Recently, distributed coordination control of unmanned aerial vehicle (UAV) swarms has been a particularly active topic in the intelligent systems field. In this paper, through understanding the emergent mechanism of the complex system, further research on the flocking behaviour and dynamic characteristics of UAV swarms is presented. Firstly, this paper analyzes current research on and open problems of UAV swarms. Afterwards, using the theory of stochastic processes and supplementary variables, a differential-integral model is established, converting the system model into a Volterra integral equation. The existence and uniqueness of the solution of the system are discussed. A flocking control law is then given based on an artificial potential with system consensus. At last, we analyze the stability of the proposed flocking control algorithm based on the Lyapunov approach and prove that the system converges in finite time to a consensus velocity direction. Simulation results are provided to verify the conclusion.
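
    A toy flocking update in the spirit of the control law discussed above is sketched below, combining a velocity-consensus term with an artificial-potential term. Gains, potential shape and neighbourhood radius are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, dt, radius = 20, 0.1, 5.0
    k_align, k_sep, d0 = 0.5, 1.0, 1.5

    pos = rng.uniform(0, 10, (N, 2))
    vel = rng.uniform(-1, 1, (N, 2))

    def step(pos, vel):
        acc = np.zeros_like(vel)
        for i in range(N):
            diff = pos - pos[i]
            dist = np.linalg.norm(diff, axis=1)
            nbr = (dist < radius) & (dist > 0)
            if not np.any(nbr):
                continue
            # consensus: relax toward the neighbours' mean velocity
            acc[i] += k_align * (vel[nbr].mean(axis=0) - vel[i])
            # potential: repel below the desired spacing d0, attract above it
            acc[i] += k_sep * np.sum((dist[nbr, None] - d0) * diff[nbr] / dist[nbr, None]**2,
                                     axis=0)
        return pos + dt * vel, vel + dt * acc

    for _ in range(500):
        pos, vel = step(pos, vel)
    print("velocity spread:", np.linalg.norm(vel - vel.mean(axis=0), axis=1).max().round(3))
    ```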

  1. Model-Constrained Optimization Methods for Reduction of Parameterized Large-Scale Systems

    Science.gov (United States)

    2007-05-01

    …expensive to solve, e.g. for applications such as optimal design or probabilistic analyses. Model order reduction is a powerful tool that permits the

  2. Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta

    Science.gov (United States)

    Nienhuis, Jaap H.; Ashton, Andrew D.; Kettner, Albert J.; Giosan, Liviu

    2017-09-01

    The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal) dynamics or allogenic (external) forcing remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology, with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.

  3. Modeling Large Scale Circuits Using Massively Parallel Descrete-Event Simulation

    Science.gov (United States)

    2013-06-01

    …in VHDL and Verilog. Using the Synopsys Design Compiler and scripts provided by the OpenSPARC code base, we were able to generate gate-level netlists that can be efficiently used in a simulation model. The OpenSPARC T2 design is provided in Verilog Register Transfer Level code and is flattened into one flat netlist; this file format is still completely valid Verilog code, with the module defined with connection arguments and its netlist.

  4. Interacting dark energy models in Cosmology and large-scale structure observational tests

    OpenAIRE

    Rafael José França Marcondes

    2016-01-01

    Modern Cosmology offers us a great understanding of the universe with striking precision, made possible by the modern technologies of the newest generations of telescopes. The standard cosmological model, however, is not absent of theoretical problems and open questions. One possibility that has been put forward is the existence of a coupling between dark sectors. The idea of an interaction between the dark components could help physicists understand why we live in an epoch of the universe wh...

  5. Large-scale multimodal transport modelling. Part 2: Implementation and validation

    CSIR Research Space (South Africa)

    Van Heerden, Q

    2013-07-01

    …activity chains across a 24-hour period. That is, we do not model only the morning or afternoon peak, and activity types include home, work, education, shopping, leisure and other activities. Providing a network is the second requirement. … The paper is structured as follows: in the next section we provide details on configuring the simulation, which entails addressing specific components in the configuration file, elaborating on activity types, and describing how the different simulation…

  6. Burnout of pulverized biomass particles in large scale boiler – Single particle model approach

    DEFF Research Database (Denmark)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero

    2010-01-01

    …The particle combustion model, coupled with a one-dimensional equation of motion of the particle, is applied to calculate the burnout in the boiler. The particle size of biomass can be much larger than that of coal and still reach complete burnout, due to its lower density and greater reactivity. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout.

  7. Diet Activity Characteristic of Large-scale Sports Events Based on HACCP Management Model

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Su

    2015-01-01

    The study proposes a dietary management approach for major sports events based on the HACCP management model, according to the characteristics of catering activities at major sports events. Major sports events are no longer merely showcases of high-level competitive sport; they have become comprehensive, complex special events involving social, political, economic, cultural and other factors. Accordingly, sporting events are expected to reach increasingly diverse goals and objectives and to exert economic, political, cultural, technological and other influence and impact.

  8. A coordination model for ultra-large scale systems of systems

    Directory of Open Access Journals (Sweden)

    Manuela L. Bujorianu

    2013-11-01

    Ultra-large multi-agent systems are becoming increasingly popular due to the rapid decline of individual production costs and the potential of speeding up the solving of complex problems. Examples include nano-robots, or systems of nano-satellites for dangerous meteorite detection, or cultures of stem cells for organ regeneration or nerve repair. The topics associated with these systems are usually dealt with within the theories of intelligent swarms or biologically inspired computation systems. Stochastic models play an important role and they are based on various formulations of statistical mechanics. In these cases, the main assumption is that the swarm elements have a simple behaviour and that some average properties can be deduced for the entire swarm. In contrast, complex systems in areas like aeronautics are formed by elements with sophisticated behaviour, which are even autonomous. In situations like this, a new approach to swarm coordination is necessary. We present a stochastic model where the swarm elements are communicating autonomous systems, the coordination is separated from the component autonomous activity and the entire swarm can be abstracted away as a piecewise deterministic Markov process, which constitutes one of the most popular models in stochastic control. Keywords: ultra large multi-agent systems, system of systems, autonomous systems, stochastic hybrid systems.

  9. Time series modeling and large scale global solar radiation forecasting from geostationary satellites data

    CERN Document Server

    Voyant, Cyril; Muselli, Marc; Paoli, Christophe; Nivet, Marie Laure

    2014-01-01

    When a territory is poorly instrumented, geostationary satellite data can be useful to predict global solar radiation. In this paper, we use geostationary satellite data to generate 2-D time series of solar radiation for the next hour. The results presented in this paper relate to a particular territory, the Corsica Island, but as the data used are available for the entire surface of the globe, our method can easily be applied to other locations. Indeed, 2-D hourly time series are extracted from the HelioClim-3 surface solar irradiation database treated by the Heliosat-2 model. Each point of the map has been used as training data and inputs of artificial neural networks (ANN) and as inputs for two persistence models (scaled or not). Comparisons between these models and clear sky estimations were carried out to evaluate the performances. We found a normalized root mean square error (nRMSE) close to 16.5% for the two best predictors (scaled persistence and ANN), equivalent to 35-45% relative to ground measurements. F...
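
    The two simplest elements mentioned in the abstract, a scaled persistence forecast and the normalised RMSE used to score it, can be illustrated as follows on a synthetic hourly irradiance profile (the clear-sky curve and noise are invented).

    ```python
    import numpy as np

    def nrmse(obs, pred):
        # Normalised root mean square error.
        return np.sqrt(np.mean((obs - pred) ** 2)) / np.mean(obs)

    t = np.arange(24)
    clear_sky = np.clip(np.sin((t - 6) / 12 * np.pi), 0, None) * 900   # W/m2, toy profile
    obs = clear_sky * 0.8 + np.random.default_rng(3).normal(0, 20, 24).clip(-50, 50)
    obs = obs.clip(0, None)

    # Scaled persistence: carry forward the previous hour's clear-sky index.
    kt = np.divide(obs, clear_sky, out=np.zeros_like(obs), where=clear_sky > 0)
    pred = np.r_[obs[0], kt[:-1] * clear_sky[1:]]

    print(f"nRMSE = {100 * nrmse(obs[1:], pred[1:]):.1f} %")
    ```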

  10. Large-scale application of the flood damage model RAilway Infrastructure Loss (RAIL)

    Science.gov (United States)

    Kellermann, Patric; Schönberger, Christine; Thieken, Annegret H.

    2016-11-01

    Experience has shown that river floods can significantly hamper the reliability of railway networks and cause extensive structural damage and disruption. As a result, the national railway operator in Austria had to cope with financial losses of more than EUR 100 million due to flooding in recent years. Comprehensive information on potential flood risk hot spots as well as on expected flood damage in Austria is therefore needed for strategic flood risk management. In view of this, the flood damage model RAIL (RAilway Infrastructure Loss) was applied to estimate (1) the expected structural flood damage and (2) the resulting repair costs of railway infrastructure due to a 30-, 100- and 300-year flood in the Austrian Mur River catchment. The results were then used to calculate the expected annual damage of the railway subnetwork and subsequently analysed in terms of their sensitivity to key model assumptions. Additionally, the impact of risk aversion on the estimates was investigated, and the overall results were briefly discussed against the background of climate change and possibly resulting changes in flood risk. The findings indicate that the RAIL model is capable of supporting decision-making in risk management by providing comprehensive risk information on the catchment level. It is furthermore demonstrated that an increased risk aversion of the railway operator has a marked influence on flood damage estimates for the study area and, hence, should be considered with regard to the development of risk management strategies.
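
    Expected annual damage figures of the kind reported above are typically obtained by integrating damage over annual exceedance probability across the modelled return periods; the sketch below shows that calculation with invented damage values (ignoring contributions outside the 30-300 year range), not RAIL results.

    ```python
    import numpy as np

    return_periods = np.array([30.0, 100.0, 300.0])
    damage = np.array([12.0e6, 35.0e6, 60.0e6])        # EUR, hypothetical damages

    prob = 1.0 / return_periods                        # annual exceedance probabilities
    order = np.argsort(prob)                           # integrate from small to large p
    ead = np.trapz(damage[order], prob[order])         # trapezoidal rule over probability

    print(f"expected annual damage ≈ EUR {ead / 1e6:.1f} million")
    ```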

  11. Congestion management in power systems. Long-term modeling framework and large-scale application

    Energy Technology Data Exchange (ETDEWEB)

    Bertsch, Joachim; Hagspiel, Simeon; Just, Lisa

    2015-06-15

    In liberalized power systems, generation and transmission services are unbundled, but remain tightly interlinked. Congestion management in the transmission network is of crucial importance for the efficiency of these inter-linkages. Different regulatory designs have been suggested, analyzed and followed, such as uniform zonal pricing with redispatch or nodal pricing. However, the literature has either focused on the short-term efficiency of congestion management or specific issues of timing investments. In contrast, this paper presents a generalized and flexible economic modeling framework based on a decomposed inter-temporal equilibrium model including generation, transmission, as well as their inter-linkages. Short and long-term effects of different congestion management designs can hence be analyzed. Specifically, we are able to identify and isolate implicit frictions and sources of inefficiencies in the different regulatory designs, and to provide a comparative analysis including a benchmark against a first-best welfare-optimal result. To demonstrate the applicability of our framework, we calibrate and numerically solve our model for a detailed representation of the Central Western European (CWE) region, consisting of 70 nodes and 174 power lines. Analyzing six different congestion management designs until 2030, we show that compared to the first-best benchmark, i.e., nodal pricing, inefficiencies of up to 4.6% arise. Inefficiencies are mainly driven by the approach of determining cross-border capacities as well as the coordination of transmission system operators' activities.

  12. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I. (University of California, San Diego); Winey, J. Michael (Washington State University); Gupta, Yogendra Mohan (Washington State University); Lane, J. Matthew D.; Ditmire, Todd (University of Texas at Austin); Quevedo, Hernan J. (University of Texas at Austin)

    2011-10-01

    Molecular dynamics simulation (MD) is an invaluable tool for studying problems sensitive to atomscale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3 year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall in to three categories (i) molecular dynamics methods (ii) interatomic potentials (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock-melting of Beryllium, a scaling technique for modeling slow ramp compression experiments using fast ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  13. A Review on Large Scale Graph Processing Using Big Data Based Parallel Programming Models

    Directory of Open Access Journals (Sweden)

    Anuraj Mohan

    2017-02-01

    Processing big graphs has become an increasingly essential activity in various fields like engineering, business intelligence and computer science. Social networks and search engines usually generate large graphs which demand sophisticated techniques for social network analysis and web structure mining. Latest trends in graph processing tend towards using Big Data platforms for parallel graph analytics. MapReduce has emerged as a Big Data based programming model for the processing of massively large datasets. Apache Giraph, an open source implementation of Google Pregel which is based on the Bulk Synchronous Parallel (BSP) model, is used for graph analytics in social networks like Facebook. The proposed work investigates the algorithmic effects of the MapReduce and BSP models on graph problems. The triangle counting problem in graphs is considered as a benchmark and evaluations are made on the basis of time of computation on the same cluster, scalability in relation to graph and cluster size, resource utilization and the structure of the graph.
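
    The benchmark itself is simple to state: count the triangles in a graph. The serial node-iterator version below is the computation that the MapReduce and Giraph/BSP implementations distribute across a cluster.

    ```python
    from itertools import combinations

    def count_triangles(adj):
        # adj: dict node -> set of neighbours (undirected graph)
        triangles = 0
        for v, nbrs in adj.items():
            for u, w in combinations(nbrs, 2):
                if w in adj[u]:
                    triangles += 1
        return triangles // 3            # each triangle is seen once per corner

    graph = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
    print(count_triangles(graph))        # -> 2
    ```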

  14. A Comprehensive and Adaptive Trust Model for Large-Scale P2P Networks

    Institute of Scientific and Technical Information of China (English)

    Xiao-Yong Li; Xiao-Lin Gui

    2009-01-01

    Based on human psychological cognitive behavior, a Comprehensive and Adaptive Trust (CAT) model for large-scale P2P networks is proposed. Firstly, an adaptive trusted decision-making method based on HEW (Historical Evidences Window) is proposed, which can not only reduce the risk and improve system efficiency, but also solve the trust forecasting problem when the direct evidences are insufficient. Then, a direct trust computing method based on the IOWA (Induced Ordered Weighted Averaging) operator and a feedback trust converging mechanism based on DTT (Direct Trust Tree) are set up, which gives the model better scalability than previous studies. At the same time, two new parameters, the confidence factor and the feedback factor, are introduced to assign the weights to direct trust and feedback trust adaptively, which overcomes the shortcoming of traditional methods, in which the weights are assigned subjectively. Simulation results show that, compared to the existing approaches, the proposed model has remarkable enhancements in the accuracy of trust decision-making and a better dynamic adaptation capability in handling various dynamic behaviors of peers.
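
    The adaptive weighting of direct versus feedback trust can be illustrated with a simple blending rule in which the weight grows with the amount of first-hand evidence. The rule below is a stand-in for the confidence/feedback factors of the CAT model, not its exact formulation.

    ```python
    def overall_trust(direct_trust, feedback_trust, n_direct_evidences, k=10):
        # The more first-hand evidence we hold, the more weight direct trust gets.
        confidence = n_direct_evidences / (n_direct_evidences + k)
        return confidence * direct_trust + (1 - confidence) * feedback_trust

    print(overall_trust(0.9, 0.4, n_direct_evidences=2))    # little evidence -> lean on feedback
    print(overall_trust(0.9, 0.4, n_direct_evidences=50))   # ample evidence -> lean on direct
    ```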

  15. Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale simulations

    Science.gov (United States)

    Heng, Yi; Hoffmann, Lars; Griessbach, Sabine; Rößler, Thomas; Stein, Olaf

    2016-05-01

    An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC) is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system MPTRAC is used to perform two types of simulations, i.e., unit simulations for the reconstruction of volcanic emissions and final forward simulations. Both types of transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric InfraRed Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final forward simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. By using the critical success index (CSI), the simulation results are evaluated with the AIRS observations. Compared to the results with an assumption of a constant flux of SO2 emissions, our inversion approach leads to an improvement
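
    A generic sequential importance resampling step of the kind underlying the emission reconstruction is sketched below: candidate emission scenarios are weighted by how well a forward model reproduces an observation and then resampled. The forward model and numbers are placeholders, not MPTRAC or AIRS data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate(emission_rate):
        # Hypothetical forward model: emission rate -> observed column amount.
        return 0.8 * emission_rate

    obs, obs_sigma = 4.0, 0.5
    particles = rng.uniform(0.0, 10.0, size=1000)        # candidate emission rates

    # Importance weights from a Gaussian observation likelihood
    w = np.exp(-0.5 * ((simulate(particles) - obs) / obs_sigma) ** 2)
    w /= w.sum()

    # Resample in proportion to the weights
    idx = rng.choice(particles.size, size=particles.size, p=w)
    posterior = particles[idx]
    print(f"posterior mean emission rate ≈ {posterior.mean():.2f}")
    ```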

  16. Modelling large-scale ice-sheet–climate interactions following glacial inception

    Directory of Open Access Journals (Sweden)

    J. M. Gregory

    2012-10-01

    We have coupled the FAMOUS global AOGCM (atmosphere-ocean general circulation model) to the Glimmer thermomechanical ice-sheet model in order to study the development of ice-sheets in north-east America (Laurentia) and north-west Europe (Fennoscandia) following glacial inception. This first use of a coupled AOGCM–ice-sheet model for a study of change on long palæoclimate timescales is made possible by the low computational cost of FAMOUS, despite its inclusion of physical parameterisations similar in complexity to higher-resolution AOGCMs. With the orbital forcing of 115 ka BP, FAMOUS–Glimmer produces ice caps on the Canadian Arctic islands, on the north-west coast of Hudson Bay and in southern Scandinavia, which grow to occupy the Keewatin region of the Canadian mainland and all of Fennoscandia over 50 ka. Their growth is eventually halted by increasing coastal ice discharge. The expansion of the ice-sheets influences the regional climate, which becomes cooler, reducing the ablation, and ice accumulates in places that initially do not have positive surface mass balance. The results suggest the possibility that the glaciation of north-east America could have begun on the Canadian Arctic islands, producing a regional climate change that caused or enhanced the growth of ice on the mainland. The increase in albedo (due to snow and ice cover) is the dominant feedback on the area of the ice-sheets and acts rapidly, whereas the feedback of topography on SMB does not become significant for several centuries, but eventually has a large effect on the thickening of the ice-sheets. These two positive feedbacks are mutually reinforcing. In addition, the change in topography perturbs the tropospheric circulation, producing some reduction of cloud, and mitigating the local cooling along the margin of the Laurentide ice-sheet. Our experiments demonstrate the importance and complexity of the interactions between ice-sheets and local climate.

  17. Modeling change from large-scale high-dimensional spatio-temporal array data

    Science.gov (United States)

    Lu, Meng; Pebesma, Edzer

    2014-05-01

    The massive data that come from Earth observation satellites and other sensors provide significant information for modeling global change. At the same time, the high dimensionality of the data has brought challenges in data acquisition, management, effective querying and processing. In addition, the output of earth system modeling tends to be data intensive and needs methodologies for storing, validation, analyzing and visualization, e.g. as maps. An important proportion of earth system observations and simulated data can be represented as multi-dimensional array data, which has received increasing attention in big data management and spatial-temporal analysis. Study cases will be developed in natural sciences such as climate change, hydrological modeling and sediment dynamics, for which the addressing of big data problems is necessary. Multi-dimensional array-based database management and analytics systems such as Rasdaman, SciDB, and R will be applied to these cases. From these studies we hope to learn the strengths and weaknesses of these systems, how they might work together or how semantics of array operations differ, through addressing the problems associated with big data. Research questions include: • How can we reduce dimensions spatially and temporally, or thematically? • How can we extend existing GIS functions to work on multidimensional arrays? • How can we combine data sets of different dimensionality or different resolutions? • Can map algebra be extended to an intelligible array algebra? • What are effective semantics for array programming of dynamic data driven applications? • In which sense are space and time special, as dimensions, compared to other properties? • How can we make the analysis of multi-spectral, multi-temporal and multi-sensor earth observation data easy?

  18. Modelling large-scale ice-sheet–climate interactions following glacial inception

    Directory of Open Access Journals (Sweden)

    J. M. Gregory

    2012-01-01

    We have coupled the FAMOUS global AOGCM (atmosphere–ocean general circulation model) to the Glimmer thermomechanical ice-sheet model in order to study the development of ice-sheets in North-East America (Laurentia) and North-West Europe (Fennoscandia) following glacial inception. This first use of a coupled AOGCM-ice-sheet model for a study of change on long palæoclimate timescales is made possible by the low computational cost of FAMOUS, despite its inclusion of physical parameterisations of a similar complexity to those of higher-resolution AOGCMs. With the orbital forcing of 115 ka BP, FAMOUS-Glimmer produces ice-caps on the Canadian Arctic islands, on the north-west coast of Hudson Bay, and in Southern Scandinavia, which over 50 ka grow to occupy the Keewatin region of the Canadian mainland and all of Fennoscandia. Their growth is eventually halted by increasing coastal ice discharge. The expansion of the ice-sheets influences the regional climate, which becomes cooler, reducing the ablation, while precipitation increases. Ice accumulates in places that initially do not have positive surface mass balance. The results suggest the possibility that the Laurentide glaciation could have begun on the Canadian Arctic islands, producing a regional climate change that caused or enhanced the growth of ice on the mainland. The increase in albedo due to snow and ice cover is the dominant feedback on the area of the ice-sheets, and acts rapidly, whereas the feedback of topography on SMB does not become significant for several centuries, but eventually has a large effect on the thickening of the ice-sheets. These two positive feedbacks are mutually reinforcing. In addition the change in topography perturbs the tropospheric circulation, producing some reduction of cloud and mitigating the local cooling along the margin of the Laurentide ice-sheet. Our experiments demonstrate the importance and complexity of the interactions between ice-sheets and local

  19. Developing a Massively Parallel Forward Projection Radiography Model for Large-Scale Industrial Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bauerle, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-08-01

    This project utilizes Graphics Processing Units (GPUs) to compute radiograph simulations for arbitrary objects. The generation of radiographs, also known as the forward projection imaging model, is computationally intensive and not widely utilized. The goal of this research is to develop a massively parallel algorithm that can compute forward projections for objects with a trillion voxels (3D pixels). To achieve this end, the data are divided into blocks that can each fit into GPU memory. The forward projected image is also divided into segments to allow for future parallelization and to avoid needless computations.

  20. Ising model formulation of large scale dynamics universality in the universe

    CERN Document Server

    Goldman, T; Laflamme, R

    1995-01-01

    The partition function of a system of galaxies in gravitational interaction can be cast in an Ising Model form, and this can be reformulated via a Hubbard-Stratonovich transformation into a three-dimensional stochastic and classical scalar field theory, whose critical exponents are calculable and known. This allows one to compute the galaxy-to-galaxy correlation function, whose non-integer exponent is predicted to be between 1.530 and 1.862, to be compared with the phenomenological value of 1.6 to 1.8.
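
    The quantity referred to is the standard power-law two-point correlation function; written out, the abstract's claim is that the Ising-model mapping pins its exponent to the quoted range:

    ```latex
    % Power-law galaxy two-point correlation function; the exponent bounds are
    % those quoted in the abstract, compared with the observed range.
    \xi(r) = \left(\frac{r}{r_0}\right)^{-\gamma},
    \qquad 1.530 \lesssim \gamma \lesssim 1.862
    \quad (\text{observed: } \gamma \approx 1.6\text{--}1.8)
    ```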

  1. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out … A model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for the application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors …). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSDHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers to and breakthrough of the technology is given.

  2. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, in Cultural Heritage research quite different actors are involved (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, by incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, to localize surfaces within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  3. Incremental N-mode SVD for large-scale multilinear generative models.

    Science.gov (United States)

    Lee, Minsik; Choi, Chong-Ho

    2014-10-01

    Tensor decomposition is frequently used in image processing and machine learning for its ability to express higher order characteristics of data. Among tensor decomposition methods, N-mode singular value decomposition (SVD) is widely used owing to its simplicity. However, the data dimension often becomes too large to perform N-mode SVD directly due to memory limitation. An incremental method to N-mode SVD can be used to resolve this issue, but existing approaches only provide a result, which is just enough to solve discriminative problems, not the full factorization result. In this paper, we present a complete derivation of the incremental N-mode SVD, which can be applied to generative models, accompanied by a technique that can reduce the computational cost by reordering calculations. The proposed incremental N-mode SVD can also be used effectively to update the current result of N-mode SVD when new training data is received. The proposed method provides a very good approximation of N-mode SVD for the experimental data, and requires much less computation in updating a multilinear model.
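
    For reference, the plain (non-incremental) N-mode SVD that the paper's algorithm updates incrementally can be written in a few lines: mode factors come from the SVD of each unfolding, and the core from projecting the tensor onto them.

    ```python
    import numpy as np

    def unfold(T, mode):
        # Mode-n unfolding of a tensor into a matrix.
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def n_mode_svd(T, ranks):
        factors = []
        for mode, r in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
            factors.append(U[:, :r])
        core = T
        for mode, U in enumerate(factors):
            # Contract each mode of the tensor with the corresponding factor.
            core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors

    T = np.random.default_rng(5).normal(size=(6, 5, 4))
    core, factors = n_mode_svd(T, ranks=(3, 3, 2))
    print(core.shape, [U.shape for U in factors])   # (3, 3, 2) [(6, 3), (5, 3), (4, 2)]
    ```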

  4. Nonlinear Model-Based Predictive Control applied to Large Scale Cryogenic Facilities

    CERN Document Server

    Blanco Vinuela, Enrique; de Prada Moraga, Cesar

    2001-01-01

    The thesis addresses the study, analysis, development, and finally the real implementation of an advanced control system for the 1.8 K Cooling Loop of the LHC (Large Hadron Collider) accelerator. The LHC is the next accelerator being built at CERN (European Center for Nuclear Research); it will use superconducting magnets operating below a temperature of 1.9 K along a circumference of 27 kilometers. The temperature of these magnets is a control parameter with strict operating constraints. The first control implementations applied a procedure that included linear identification, modelling and regulation using a linear predictive controller. It largely improved the overall performance of the plant with respect to a classical PID regulator, but the nature of the cryogenic processes pointed out the need for a more adequate technique, such as a nonlinear methodology. This thesis is a first step toward developing a global regulation strategy for the overall control of the LHC cells when they will operate simultaneously.

  5. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. Energy systems often involve stochastic variables due to the share of fluctuating RESs, and the related control problems are multivariable and hard, or impossible, to split into single-input-single-output control systems. The MPC strategy can handle multiple variables … We apply the Economic Model Predictive Control (EMPC

  6. The Large-Scale Debris Avalanche From The Tancitaro Volcano (Mexico): Characterization And Modeling

    Science.gov (United States)

    Morelli, S.; Gigli, G.; Falorni, G.; Garduno Monroy, V. H.; Arreygue, E.

    2008-12-01

    …until they disappear entirely in the most distal reaches. The granulometric analysis and the comparison between the debris avalanche of the Tancitaro and other collapses with similar morphometric features (vertical relief during runout, travel distance, volume and area of the deposit) indicate that the collapse was most likely not primed by any type of eruption, but rather triggered by a strong seismic shock that could have induced the failure of a portion of the edifice, already deeply altered by intense hydrothermal fluid circulation. It is also possible to hypothesize that mechanical fluidization may have been the mechanism controlling the long runout of the avalanche, as has been determined for other well-known events. The behavior of the Tancitaro debris avalanche was numerically modeled using the DAN-W code. By suitably adjusting the rheological parameters of the different models selectable within DAN, it was determined that the two-parameter 'Voellmy model' provides the best approximation of the avalanche movement. The Voellmy model produces the most realistic results in terms of runout distance, velocity and spatial distribution of the failed mass. Since the Tancitaro event was not witnessed directly, it is possible to infer approximate velocities only from comparisons with similar and documented events, namely the Mt. St. Helens debris avalanche that occurred on May 18, 1980.
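
    The two-parameter Voellmy basal resistance commonly used in DAN-type runout models is, in its usual form, a frictional term plus a velocity-squared turbulence term, with the friction coefficient f and the turbulence coefficient ξ as the two fitted parameters; the values calibrated for the Tancitaro case are not given here.

    ```latex
    % Voellmy basal resistance: friction plus a velocity-squared "turbulence" term.
    \tau = f\,\sigma + \frac{\rho\, g\, v^{2}}{\xi}
    ```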

  7. A model for large-scale, interprofessional, compulsory cross-cultural education with an indigenous focus.

    Science.gov (United States)

    Kickett, Marion; Hoffman, Julie; Flavell, Helen

    2014-01-01

    Cultural competency training for health professionals is now a recognised strategy to address health disparities between minority and white populations in Western nations. In Australia, urgent action is required to "Close the Gap" between the health outcomes of Indigenous Australians and the dominant European population, and significantly, cultural competency development for health professionals has been identified as an important element to providing culturally safe care. This paper describes a compulsory interprofessional first-year unit in a large health sciences faculty in Australia, which aims to begin students on their journey to becoming culturally competent health professionals. Reporting primarily on qualitative student feedback from the unit's first year of implementation as well as the structure, learning objects, assessment, and approach to coordinating the unit, this paper provides a model for implementing quality wide-scale, interprofessional cultural competence education within a postcolonial context. Critical factors for the unit's implementation and ongoing success are also discussed.

  8. A simple model for large-scale simulations of fcc metals with explicit treatment of electrons

    Science.gov (United States)

    Mason, D. R.; Foulkes, W. M. C.; Sutton, A. P.

    2010-01-01

    The continuing advance in computational power is beginning to make accurate electronic structure calculations routine. Yet, where physics emerges through the dynamics of tens of thousands of atoms in metals, simplifications must be made to the electronic Hamiltonian. We present the simplest extension to a single s-band model [A.P. Sutton, T.N. Todorov, M.J. Cawkwell and J. Hoekstra, Phil. Mag. A 81 (2001) p.1833.] of metallic bonding, namely, the addition of a second s-band. We show that this addition yields a reasonable description of the density of states at the Fermi level, the cohesive energy, formation energies of point defects and elastic constants of some face-centred cubic (fcc) metals.

  9. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Kalashnikova, Irina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computational Mathematics; Arunajatesan, Srinivasan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Aerosciences Dept.; Barone, Matthew Franklin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Aerosciences Dept.; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Uncertainty Quantification and Optimization Dept.; Fike, Jeffrey A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Component Science and Mechanics Dept.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier
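
    The starting point of a POD/Galerkin ROM, computing a POD basis from a snapshot matrix by SVD, can be sketched as follows; the random snapshots below stand in for high-fidelity solution states, and the Galerkin projection of the governing equations is only indicated.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_dof, n_snapshots, r = 500, 40, 5

    snapshots = rng.normal(size=(n_dof, n_snapshots))          # columns = solution states
    mean_state = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean_state, full_matrices=False)

    basis = U[:, :r]                                            # POD modes
    energy = np.cumsum(s**2) / np.sum(s**2)
    print(f"{r} modes capture {100 * energy[r - 1]:.1f}% of snapshot energy")

    # A reduced state is then evolved in the span of `basis` via Galerkin projection
    # of the governing equations; here we only show the projection of one state.
    reduced = basis.T @ (snapshots[:, [0]] - mean_state)
    print(reduced.shape)   # (r, 1)
    ```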

  10. Doming at large scale on Europa: a model of formation of Thera Macula

    Science.gov (United States)

    Mével, L.; Tobie, G.; Mercier, E.; Sotin, C.

    2003-04-01

    Thera Macula is an approximately 140 by 80 km elliptical feature of the southern hemisphere of Europa. Our morphological analysis shows two types of terrain. The north-west part is weakly disturbed and only some cuesta-like structures are recognized. In contrast, the south-east part resembles a chaotic area similar to Conamara Chaos, with ice overflowing on the southern margin. The chaotic terrains have a lower elevation than the weakly disturbed terrains. Both units are separated by a steep scarp cutting across the middle of Thera Macula. This dichotomy may reflect the processes by which Thera was built. Detailed observation of the chaotic area reveals the presence of small sinuous scarps bounding terraces lying at different elevations. We have calculated the cumulative height along a N-S profile and deduced a mean regional slope ranging from 0.2% to 0.8% along the entire profile. On the basis of these morphological arguments, we propose an original model for the emplacement of Thera Macula. The rise of ductile or liquid material beneath an inclined brittle icy crust may induce vertical uplift, doming, and a median fracture. Then, the soft material may overflow along the regional slope and the dome may collapse as the reservoir empties out. In order to constrain this emplacement model, we are currently performing numerical experiments of thermal convection for a fluid with a strongly temperature-dependent viscosity, including tidal heating and damage rheology. Preliminary results suggest that, although a thick stagnant lid forms at the top of a convective ice layer, damaged icy material in this rigid lid permits the rise of warm ductile ice at shallow depth. This could explain both doming and softening of the crustal material.

  11. Enhanced Geometric Map:a 2D & 3D Hybrid City Model of Large Scale Urban Environment for Robot Navigation

    Institute of Scientific and Technical Information of China (English)

    LI Haifeng; HU Zunhe; LIU Jingtai

    2016-01-01

    To facilitate scene understanding and robot navigation in large-scale urban environments, a two-layer enhanced geometric map (EGMap) is designed using videos from a monocular onboard camera. The 2D layer of EGMap consists of a 2D building boundary map from a top-down view and a 2D road map, which can support localization and advanced map-matching when compared with standard polyline-based maps. The 3D layer includes features such as a 3D road model and building facades with coplanar 3D vertical and horizontal line segments, which can provide the 3D metric features to localize vehicles and flying robots in 3D space. Starting from the 2D building boundary and road map, EGMap is initially constructed using feature fusion with geometric constraints under a line feature-based simultaneous localization and mapping (SLAM) framework, iteratively and progressively. Then, a local bundle adjustment algorithm is proposed to jointly refine the camera localizations and EGMap features. Furthermore, the issues of uncertainty, memory use, time efficiency and obstacle effect in EGMap construction are discussed and analyzed. Physical experiments show that EGMap can be successfully constructed in large-scale urban environments and the construction method is demonstrated to be very accurate and robust.

  12. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    Science.gov (United States)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological or radiological material. With the growing fear of natural, accidental or deliberate release of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined to perform the task of source characterization and forecasting. These components include Uncertainty Quantification, Optimal Information Collection, and Data Assimilation. Precise approximation of prior statistics is crucial to ensure the performance of the source characterization process. In this work, an efficient quadrature-based method has been utilized for quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach is utilized for the approximation of probabilistic hazard maps, based on a combination of polynomial chaos theory and the method of quadrature points. Besides precise quantification of uncertainty, having useful measurement data is also highly important to warrant accurate source parameter estimation. The performance of source characterization is strongly affected by the sensor configuration used for data observation. Hence, a general framework has been developed for the optimal allocation of data observation sensors, to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that the measurement of better data is guaranteed. This is achieved by maximizing the mutual information between model predictions
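
    A hedged illustration of quadrature-based uncertainty propagation in the spirit described above, using a hypothetical one-parameter plume response rather than the dissertation's dispersion model; the Gauss-Hermite rule, the toy observation operator and all numbers are assumptions made for the sketch.

```python
# Illustrative quadrature-based propagation of an uncertain source parameter
# through a hypothetical plume response (not the dissertation's dispersion
# model). A Gaussian-uncertain source strength is pushed through the toy model
# with a Gauss-Hermite rule to estimate the mean and variance of the predicted
# concentration; all names and numbers are assumptions for the sketch.
import numpy as np

def plume_concentration(source_strength, distance=500.0, sigma=80.0):
    """Toy receptor concentration, linear in the source strength."""
    return source_strength / (2.0 * np.pi * sigma**2) * np.exp(-distance**2 / (2.0 * sigma**2))

mu_q, std_q = 10.0, 2.0                                  # source strength ~ N(mu, std^2)
nodes, weights = np.polynomial.hermite_e.hermegauss(7)   # probabilists' Hermite rule
w = weights / np.sqrt(2.0 * np.pi)                       # normalise for the standard normal

samples = np.array([plume_concentration(mu_q + std_q * z) for z in nodes])
mean_c = np.sum(w * samples)
var_c = np.sum(w * (samples - mean_c) ** 2)
print("predicted concentration: mean %.3e, std %.3e" % (mean_c, np.sqrt(var_c)))
```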

  13. Believe it or not? The challenge of validating large scale probabilistic risk models

    Directory of Open Access Journals (Sweden)

    Sayers Paul

    2016-01-01

    Full Text Available The National Flood Risk Assessment (NaFRA) for England and Wales was initially undertaken in 2002 with frequent updates since. NaFRA has become a key source of information on flood risk, informing policy and investment decisions as well as communicating risk to the public and insurers. To make well informed decisions based on these data, users rightfully demand to know the confidence they can place in them. The probability of inundation and associated damage however cannot be validated in the traditional sense, due to the rare and random nature of damaging floods and the lack of a long (and widespread) stationary observational record (reflecting not only changes in climate but also the significant changes in land use and flood defence infrastructure that are likely to have occurred). To explore the validity of NaFRA this paper therefore provides a bottom-up qualitative exploration of the potential errors within the supporting methods and data. The paper concludes by underlining the need for further research to understand how to robustly validate probabilistic risk models.

  14. Large scale flow visualization and anemometry applied to lab on chip models of porous media

    CERN Document Server

    Paiola, Johan; Bodiguel, Hugues

    2016-01-01

    The following is a report on an experimental technique that allows the velocity field to be quantified and mapped at very high resolution with simple equipment in large 2D devices. A simple Schlieren technique is proposed to reinforce the contrast in the images and to allow the detection of seeded particles that are pixel-sized or even smaller. The velocimetry technique reported on is based on auto-correlation functions of the pixel intensity, which we have shown are directly related to the magnitude of the local average velocity. The characteristic time involved in the decorrelation of the signal is proportional to the tracer size and inversely proportional to the average velocity. We report a detailed discussion of the optimization of the relevant parameters, the spatial resolution and the accuracy of the method. The technique is then applied to a model porous medium made of a random channel network. We show that it is highly efficient to determine the magnitude of the flow in each o...
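
    A sketch of the decorrelation-time idea behind the reported velocimetry, under the assumption of a synthetic single-pixel intensity series with a known correlation time; the tracer size and the implied calibration factor are illustrative.

```python
# Sketch of the decorrelation-time idea behind the velocimetry described above
# (illustrative, using a synthetic pixel-intensity series rather than real
# images): the autocorrelation of the intensity at one pixel decays over a time
# tau that scales like tracer_size / velocity, so the local velocity magnitude
# can be estimated from tau once the proportionality constant is calibrated.
import numpy as np

def autocorrelation(signal):
    s = signal - signal.mean()
    acf = np.correlate(s, s, mode="full")[s.size - 1:]
    return acf / acf[0]

def decorrelation_time(acf, dt):
    below = np.nonzero(acf < np.exp(-1.0))[0]        # first lag dropping below 1/e
    return below[0] * dt if below.size else np.nan

# Synthetic intensity series with a known correlation time of 0.05 s.
dt, tau_true, n = 1e-3, 0.05, 5000
rho = np.exp(-dt / tau_true)
rng = np.random.default_rng(1)
intensity = np.empty(n)
intensity[0] = rng.standard_normal()
for i in range(1, n):
    intensity[i] = rho * intensity[i - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()

tau = decorrelation_time(autocorrelation(intensity), dt)
tracer_size_um = 5.0                                  # hypothetical tracer size
print("decorrelation time: %.3f s" % tau)
print("velocity scale ~ tracer_size / tau = %.0f um/s (up to a calibration factor)"
      % (tracer_size_um / tau))
```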

  15. Real-time Photorealistic Visualisation of Large-scale Multiresolution Terrain Models

    Directory of Open Access Journals (Sweden)

    Anupam Agrawal

    2007-01-01

    Full Text Available Height field terrain rendering is an important aspect of GIS, outdoor virtual reality applications such as flight simulation, 3-D games, etc. A polygonal model of very large terrain data requires a large number of triangles, so even most high-performance graphics workstations have great difficulty displaying even moderately sized height fields at interactive frame rates. To bring photorealism to visualisation, it is required to drape corresponding high-resolution satellite or aerial phototexture over the 3-D digital terrain, to place multiple collections of point-location-based static objects such as buildings, trees, etc., and to overlay polyline vector objects such as roads on top of the terrain surface. This further complicates the requirement of interactive frame rates while navigating over the terrain. This paper describes a novel approach for object and terrain visualisation by combining two algorithms, one for terrain data and the other for objects. The terrain rendering is accomplished by an efficient dynamic multiresolution view-dependent level-of-detail mesh simplification algorithm. It is augmented with out-of-core visualisation of large-height geometry and phototexture terrain data populated with 3-D/2-D static objects as well as vector overlays, without extensive memory load. The proposed methodology provides interactive frame rates on a general-purpose desktop PC with OpenGL-enabled graphics hardware. The software TREND has been successfully tested on different real-world height maps and satellite phototextures of sizes up to 16K*16K, coupled with thousands of static objects and polyline vector overlays.
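
    A minimal sketch of view-dependent level-of-detail tile selection of the kind the abstract describes, not the TREND implementation; the quadtree refinement criterion, the error-halving assumption and all thresholds are hypothetical.

```python
# Minimal sketch of view-dependent level-of-detail selection for terrain tiles
# (illustrative; not the TREND algorithm). A quadtree tile is kept when its
# geometric error divided by the distance to the viewer falls below a budget
# tau, otherwise it is split into four children with (assumed) halved error.
from dataclasses import dataclass
import math

@dataclass
class Tile:
    x: float
    y: float
    size: float
    level: int
    geom_error: float

def select_tiles(tile, camera, tau=0.01, max_level=6):
    """Return the tiles to render for the current viewpoint."""
    cx, cy = tile.x + tile.size / 2.0, tile.y + tile.size / 2.0
    distance = max(math.hypot(cx - camera[0], cy - camera[1]), 1e-6)
    if tile.level >= max_level or tile.geom_error / distance < tau:
        return [tile]
    half, selected = tile.size / 2.0, []
    for dx in (0.0, half):
        for dy in (0.0, half):
            child = Tile(tile.x + dx, tile.y + dy, half, tile.level + 1,
                         tile.geom_error / 2.0)
            selected += select_tiles(child, camera, tau, max_level)
    return selected

root = Tile(0.0, 0.0, 16384.0, 0, geom_error=500.0)
tiles = select_tiles(root, camera=(1000.0, 1200.0))
print("%d tiles selected, finest level %d" % (len(tiles), max(t.level for t in tiles)))
```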

  16. Large-scale screening of disease model through ENU mutagenesis in mice

    Institute of Scientific and Technical Information of China (English)

    HE Fang; WANG Zixing; ZHAO Jing; BAO Jie; DING Jun; RUAN Haibin; XIE Qing; ZHANG Zuoming; GAO Xiang

    2003-01-01

    Manipulation of the mouse genome has emerged as one of the most important approaches for studying gene function and establishing disease models, because of the high homology between the human and mouse genomes. In this study, the chemical mutagen ethylnitrosourea (ENU) was employed for inducing germ cell mutations in male C57BL/6J mice. The first generation (G1) of the backcross of these mutated mice, totalling 3172, was screened for abnormal phenotypes on gross morphology, behavior, learning and memory, auditory brainstem response (ABR), electrocardiogram (ECG), electroretinogram (ERG), flash-visual evoked potential (F-VEP), bone mineral density, and blood sugar level. 595 mice have been identified with specific dominant abnormalities. Fur color changes, eye defects and hearing loss occurred at the highest frequency. Abnormalities related to metabolism alteration are the least frequent. Interestingly, eye defects displayed significant left-right asymmetry and sex preference. Sex preference is also observed in mice with abnormal bone mineral density. Among 104 G1 mutant mice examined for heritability, 14 have been confirmed to pass abnormal phenotypes to their progeny. However, we did not observe the behavioral abnormalities of G1 mice to be heritable, suggesting multi-gene control of these complex functions in mice. In conclusion, the generation of these mutants paves the way for understanding the molecular and cellular mechanisms of these abnormal phenotypes, and accelerates the cloning of disease-related genes.

  17. The periglacial engine of mountain erosion - Part 2: Modelling large-scale landscape evolution

    Science.gov (United States)

    Egholm, D. L.; Andersen, J. L.; Knudsen, M. F.; Jansen, J. D.; Nielsen, S. B.

    2015-10-01

    There is growing recognition of strong periglacial control on bedrock erosion in mountain landscapes, including the shaping of low-relief surfaces at high elevations (summit flats). But, as yet, the hypothesis that frost action was crucial to the assumed Late Cenozoic rise in erosion rates remains compelling and untested. Here we present a landscape evolution model incorporating two key periglacial processes - regolith production via frost cracking and sediment transport via frost creep - which together are harnessed to variations in temperature and the evolving thickness of sediment cover. Our computational experiments time-integrate the contribution of frost action to shaping mountain topography over million-year timescales, with the primary and highly reproducible outcome being the development of flattish or gently convex summit flats. A simple scaling of temperature to marine δ18O records spanning the past 14 Myr indicates that the highest summit flats in mid- to high-latitude mountains may have formed via frost action prior to the Quaternary. We suggest that deep cooling in the Quaternary accelerated mechanical weathering globally by significantly expanding the area subject to frost. Further, the inclusion of subglacial erosion alongside periglacial processes in our computational experiments points to alpine glaciers increasing the long-term efficiency of frost-driven erosion by steepening hillslopes.
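
    A highly simplified one-dimensional sketch of the coupled processes named above (regolith production by frost cracking, damped by sediment cover, and downslope transport by frost creep), not the authors' landscape evolution model; all rate parameters are illustrative.

```python
# Highly simplified 1-D sketch of the coupled processes above (illustrative,
# not the authors' model): regolith is produced from bedrock at a rate damped
# by the sediment cover, and is moved downslope by a frost-creep "diffusion"
# whose efficiency also depends on that cover. All parameters are illustrative.
import numpy as np

nx, dx = 200, 10.0                       # grid cells, cell size (m)
dt, n_steps = 50.0, 20000                # time step (yr), number of steps (1 Myr total)
z_bed = np.linspace(1000.0, 800.0, nx)   # bedrock surface elevation (m)
h = np.zeros(nx)                         # regolith thickness (m)

def frost_cracking_rate(h, p0=5e-4, h_star=2.0):
    """Regolith production (m/yr), damped as the sediment cover thickens."""
    return p0 * np.exp(-h / h_star)

def frost_creep_flux(z, h, k0=1e-2, h_star=2.0):
    """Downslope sediment flux (m^2/yr), limited by the available regolith."""
    slope = np.gradient(z, dx)
    k = k0 * (1.0 - np.exp(-h / h_star))
    return -k * slope

for _ in range(n_steps):
    z = z_bed + h                         # topographic surface
    prod = frost_cracking_rate(h)
    q = frost_creep_flux(z, h)
    h = np.maximum(h + dt * (prod - np.gradient(q, dx)), 0.0)
    z_bed -= dt * prod                    # bedrock lowered as regolith is produced

print("mean regolith thickness after %.0f kyr: %.2f m" % (n_steps * dt / 1e3, h.mean()))
```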

  18. Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.

    Science.gov (United States)

    Guo, Tianjiao; Englehardt, James D; Fallon, Howard J

      While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83 / 1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.
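
    A toy sketch of the scale trade-off underlying the plant-sizing question, assuming hypothetical cost coefficients rather than the study's cost model: per-gallon treatment cost falls with plant capacity while conveyance cost grows with the area each plant must serve, and the number of plants is swept to find the minimum total unit cost.

```python
# Toy sketch of the plant-scale trade-off discussed above (all coefficients and
# demand figures are hypothetical, not the study's cost model): treatment cost
# per unit volume falls with plant capacity (economies of scale) while
# conveyance cost rises with the service radius of each plant.
import numpy as np

population = 671_823
demand_gpd = 100.0 * population            # assumed 100 gal per person per day
service_area_km2 = 1000.0                  # assumed total service area

def unit_cost(n_plants):
    """Total unit cost ($ per 1000 gal) for a given number of identical plants."""
    plant_capacity = demand_gpd / n_plants
    treatment = 3.0 * (plant_capacity / 1e6) ** (-0.3)    # economies of scale
    radius_km = np.sqrt(service_area_km2 / (np.pi * n_plants))
    conveyance = 0.8 * radius_km                          # grows with service radius
    return treatment + conveyance

candidates = np.arange(1, 201)
costs = np.array([unit_cost(n) for n in candidates])
best = int(candidates[costs.argmin()])
print("cost-minimizing number of plants: %d ($%.2f per 1000 gal)" % (best, costs.min()))
```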

  19. The role of soil hydrologic heterogeneity for modeling large-scale bioremediation protocols.

    Science.gov (United States)

    Romano, N.; Palladino, M.; Speranza, G.; Di Fiore, P.; Sica, B.; Nasta, P.

    2014-12-01

    The major aim of the EU-Life+ project EcoRemed (Implementation of eco-compatible protocols for agricultural soil remediation in Litorale Domizio-Agro Aversano NIPS) is the implementation of operating protocols for agriculture-based bioremediation of contaminated croplands, which also involves plants extracting pollutants that are then used as biomass for renewable energy production. The study area is the National Interest Priority Site (NIPS) called Litorale Domizio-Agro Aversano, which is located in the Campania Region (Southern Italy) and has an extent of about 200,000 hectares. In this area, high levels of patchy soil contamination are mostly due to legal or illegal industrial and municipal waste disposal, with hazardous consequences also for the quality of the groundwater. An accurate determination of the soil hydraulic properties to characterize the landscape heterogeneity of the study area plays a key role within the general framework of this project, especially in view of the use of various modeling tools for water flow and solute transport simulations and to predict the effectiveness of the adopted bioremediation protocols. The present contribution is part of an ongoing study in which we are investigating the following research questions: a) Which spatial aggregation schemes are more suitable for upscaling from point to block support? b) Which effective soil hydrologic characterization schemes better simulate the average behavior of larger-scale phytoremediation processes? c) Allowing also for questions a) and b), how does the spatial variability of soil hydraulic properties affect the variability of plant responses to hydro-meteorological forcing?

  20. Experiments on vertical transverse mixing in a large-scale heterogeneous model aquifer

    Science.gov (United States)

    Rahman, Md. Arifur; Jose, Surabhin C.; Nowak, Wolfgang; Cirpka, Olaf A.

    2005-11-01

    Vertical transverse mixing is known to be a controlling factor in natural attenuation of extended biodegradable plumes originating from continuously emitting sources. We perform conservative and reactive tracer tests in a quasi-two-dimensional 14 m long sandbox in order to quantify vertical mixing in heterogeneous media. The filling mimics natural sediments including a distribution of different hydro-facies, made of different sand mixtures, and micro-structures within the sand lenses. We quantify the concentration distribution of the conservative tracer by the analysis of digital images taken at steady state during the tracer-dye experiment. Heterogeneity causes plume meandering, leading to distorted concentration profiles. Without knowledge about the velocity distribution, it is not possible to determine meaningful vertical dispersion coefficients from the concentration profiles. Using the stream-line pattern resulting from an inverse model of previous experiments in the sandbox, we can correct for the plume meandering. The resulting vertical dispersion coefficient is approximately 4 × 10⁻⁹ m²/s. We observe no distinct increase in the vertical dispersion coefficient with increasing travel distance, indicating that heterogeneity has hardly any impact on vertical transverse mixing. In the reactive tracer test, we continuously inject an alkaline solution over a certain height into the domain that is otherwise occupied by an acidic solution. The outline of the alkaline plume is visualized by adding a pH indicator into both solutions. From the height and length of the reactive plume, we estimate a transverse dispersion coefficient of approximately 3 × 10⁻⁹ m²/s. Overall, the vertical transverse dispersion coefficients are less than an order of magnitude larger than pore diffusion coefficients and hardly increase due to heterogeneity. Thus, we conclude for the assessment of natural attenuation that reactive plumes might become very large if they are controlled by

  1. Trade-off between cost and accuracy in large-scale surface water dynamic modeling

    Science.gov (United States)

    Getirana, Augusto; Peters-Lidard, Christa; Rodell, Matthew; Bates, Paul D.

    2017-06-01

    Recent efforts have led to the development of the local inertia formulation (INER) for an accurate but still cost-efficient representation of surface water dynamics, compared to the widely used kinematic wave equation (KINE). In this study, both formulations are evaluated over the Amazon basin in terms of computational costs and accuracy in simulating streamflows and water levels through synthetic experiments and comparisons against ground-based observations. Varying time steps are considered as part of the evaluation and INER at 60 s time step is adopted as the reference for synthetic experiments. Five hybrid (HYBR) realizations are performed based on maps representing the spatial distribution of the two formulations that physically represent river reach flow dynamics within the domain. Maps have fractions of KINE varying from 35.6 to 82.8%. KINE runs show clear deterioration along the Amazon river and main tributaries, with maximum RMSE values for streamflow and water level reaching 7827 m³ s⁻¹ and 1379 cm near the basin's outlet. However, KINE is at least 25% more efficient than INER with low model sensitivity to longer time steps. A significant improvement is achieved with HYBR, resulting in maximum RMSE values of 3.9-292 m³ s⁻¹ for streamflows and 1.1-28.5 cm for water levels, and cost reduction of 6-16%, depending on the map used. Optimal results using HYBR are obtained when the local inertia formulation is used in about one third of the Amazon basin, reducing computational costs in simulations while preserving accuracy. However, that threshold may vary when applied to different regions, according to their hydrodynamics and geomorphological characteristics.
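
    A sketch of a single explicit update of the local inertia formulation in the commonly used form of Bates et al. (2010), shown next to the Manning discharge a kinematic-wave option would use; the flow depth, slope and roughness values are illustrative and this is not the authors' code.

```python
# Sketch of one explicit update of the local inertia formulation, written in
# the widely used form of Bates et al. (2010), next to the Manning discharge a
# kinematic-wave option would use. Depth, slopes and roughness are illustrative
# values; this is not the authors' model code.
import numpy as np

g, n_mann, dt = 9.81, 0.03, 60.0           # gravity, Manning n, time step (s)

def local_inertia_step(q, h_flow, surface_slope):
    """Update unit-width discharge q (m^2/s); friction is treated semi-implicitly."""
    num = q - g * h_flow * dt * surface_slope
    den = 1.0 + g * h_flow * dt * n_mann**2 * np.abs(q) / h_flow**(10.0 / 3.0)
    return num / den

def kinematic_wave_q(h_flow, bed_slope):
    """Manning discharge per unit width used by the kinematic-wave formulation."""
    return h_flow**(5.0 / 3.0) * np.sqrt(bed_slope) / n_mann

q, h_flow = 2.0, 3.0                       # m^2/s, m
surface_slope = bed_slope = 1e-4
print("local inertia q after one step: %.3f m^2/s" % local_inertia_step(q, h_flow, surface_slope))
print("kinematic-wave q              : %.3f m^2/s" % kinematic_wave_q(h_flow, bed_slope))
```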

  2. Non-local bias and the problem of large-scale power in the Standard Cold Dark Matter model

    CERN Document Server

    Popolo, A D; Kiuchi, H; Gambera, M

    1999-01-01

    We study the effect of non-radial motions, originating from the gravitational interaction of the quadrupole moment of a protogalaxy with the tidal field of the matter of the neighboring protostructures, on the angular correlation function of galaxies. We calculate the angular correlation function using a Standard Cold Dark Matter (hereafter SCDM) model (Omega=1, h=0.5, n=1) and we compare it with the angular correlation function of the APM galaxy survey (Maddox et al. 1990; Maddox et al. 1996). We find that taking account of non-radial motions in the calculation of the angular correlation function gives better agreement of the theoretical prediction of the SCDM model with the observed estimates of large-scale power in the galaxy distribution.

  3. A Preliminary Model Study of the Large-Scale Seasonal Cycle in Bottom Pressure Over the Global Ocean

    Science.gov (United States)

    Ponte, Rui M.

    1998-01-01

    Output from the primitive equation model of Semtner and Chervin is used to examine the seasonal cycle in bottom pressure (Pb) over the global ocean. Effects of the volume-conserving formulation of the model on the calculation of Pb are considered. The estimated seasonal, large-scale Pb signals have amplitudes ranging from less than 1 cm over most of the deep ocean to several centimeters over shallow, boundary regions. Variability generally increases toward the western sides of the basins, and is also larger in some Southern Ocean regions. An oscillation between subtropical and higher latitudes in the North Pacific is clear. Comparison with barotropic simulations indicates that, on basin scales, seasonal Pb variability is related to barotropic dynamics and the seasonal cycle in Ekman pumping, and results from a small, net residual in mass divergence from the balance between Ekman and Sverdrup flows.

  4. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply “JIT modeling” to large databases online, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future values of the manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
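
    A sketch of the Just-In-Time idea underlying LOM, assuming a synthetic database and a plain k-nearest-neighbour search with a locally weighted linear fit; the stepwise selection and quantization that make LOM efficient are not reproduced here.

```python
# Sketch of the "Just-In-Time" local-modelling idea behind LOM (illustrative;
# the stepwise variable selection and quantization used by LOM itself are not
# reproduced). For each query, the nearest records are retrieved from a large
# synthetic database and a small locally weighted linear model is fitted.
import numpy as np

rng = np.random.default_rng(2)
X_db = rng.uniform(-1.0, 1.0, size=(50_000, 3))            # stored plant states
y_db = np.sin(X_db[:, 0]) + 0.5 * X_db[:, 1] ** 2 + 0.1 * rng.standard_normal(50_000)

def jit_predict(x_query, k=50):
    """Predict the output at x_query from its k nearest database neighbours."""
    d = np.linalg.norm(X_db - x_query, axis=1)
    idx = np.argpartition(d, k)[:k]                         # k nearest neighbours
    A = np.hstack([np.ones((k, 1)), X_db[idx]])             # local linear model
    w = 1.0 / (d[idx] + 1e-6)                               # distance weighting
    coef, *_ = np.linalg.lstsq(A * w[:, None], y_db[idx] * w, rcond=None)
    return float(np.concatenate([[1.0], x_query]) @ coef)

x_q = np.array([0.2, -0.4, 0.7])
truth = np.sin(0.2) + 0.5 * (-0.4) ** 2
print("JIT prediction: %.3f (noise-free truth %.3f)" % (jit_predict(x_q), truth))
```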

  5. Self-consistent three-dimensional modeling and simulation of large-scale rectangular surface-wave plasma source

    Institute of Scientific and Technical Information of China (English)

    Lan Chao-Hui; Lan Chao-Zhen; Hu Xi-Wei; Chen Zhao-Quan; Liu Ming-Hai

    2009-01-01

    A self-consistent and three-dimensional (3D) model of argon discharge in a large-scale rectangular surface-wave plasma (SWP) source is presented in this paper, which is based on the finite-difference time-domain (FDTD) approximation to Maxwell's equations self-consistently coupled with a fluid model for plasma evolution. The discharge characteristics at an input microwave power of 1200 W and a filling gas pressure of 50 Pa in the SWP source are analyzed. The simulation shows the time evolution of deposited power density at different stages, and the 3D distributions of electron density and temperature in the chamber at steady state. In addition, the results show that there is a peak of plasma density approximately at a vertical distance of 3 cm from the quartz window.

  6. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    Science.gov (United States)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick

    2014-01-01

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are submitted to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (the controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  7. Physics-based animation of large-scale splashing liquids, elastoplastic solids, and model-reduced flow

    Science.gov (United States)

    Gerszewski, Daniel James

    Physical simulation has become an essential tool in computer animation. As the use of visual effects increases, the need for simulating real-world materials increases. In this dissertation, we consider three problems in physics-based animation: large-scale splashing liquids, elastoplastic material simulation, and dimensionality reduction techniques for fluid simulation. Fluid simulation has been one of the greatest successes of physics-based animation, generating hundreds of research papers and a great many special effects over the last fifteen years. However, the animation of large-scale, splashing liquids remains challenging. We show that a novel combination of unilateral incompressibility, mass-full FLIP, and blurred boundaries is extremely well-suited to the animation of large-scale, violent, splashing liquids. Materials that incorporate both plastic and elastic deformations, also referred to as elastoplastic materials, are frequently encountered in everyday life. Methods for animating such common real-world materials are useful for effects practitioners and have been successfully employed in films. We describe a point-based method for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. Given the deformation gradient, we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare to an initial rest configuration and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. One of the most significant drawbacks of physics-based animation is that ever-higher fidelity leads to an explosion in the number of degrees of freedom. This problem leads us to the consideration of dimensionality reduction techniques. We present

  8. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model

  9. Large-scale Flood Simulation with Rainfall-Runoff-Inundation Model in the Chao Phraya River Basin

    Science.gov (United States)

    Sayama, Takahiro; Tatebe, Yuya; Tanaka, Shigenobu

    2013-04-01

    A large amount of rainfall during the 2011 monsoonal season caused an unprecedented flood disaster in the Chao Phraya River basin in Thailand. When a large-scale flood occurs, it is very important to take appropriate emergency measures by holistically understanding the characteristics of the flooding based on available information and by predicting its possible development. This paper proposes quick response-type flood simulation that can be conducted during a severe flooding event. The hydrologic simulation model used in this study is designed to simulate river discharges and flood inundation simultaneously for an entire river basin with satellite based rainfall and topographic information. The model is based on two-dimensional diffusive wave equations for rainfall-runoff and inundation calculations. The model takes into account the effects of lateral subsurface flow and vertical infiltration flow since these two types of flow are also important processes. This paper presents prediction results obtained in mid-October 2011, when the flooding in Thailand was approaching to its peak. Our scientific question is how well we can predict the possible development of a large-scale flooding event with limited information and how much we can improve the prediction with more local information. In comparison with a satellite based flood inundation map, the study found that the quick response-type simulation (Lv1) was capable of capturing the peak flood inundation extent reasonably as compared to the estimation based on satellite remote sensing. Our interpretation of the prediction was that the flooding might continue even until the end of November, which was also positively confirmed to some extent by the actual flooding status in late November. Nevertheless, the Lv1 simulation generally overestimated the peak water level. To address this overestimation, the input data was updated with additional local information (Lv2). Consequently, the simulation accuracy improved in the

  10. Calibration of a large-scale hydrological model using satellite-based soil moisture and evapotranspiration products

    Directory of Open Access Journals (Sweden)

    P. López López

    2017-06-01

    Full Text Available A considerable number of river basins around the world lack sufficient ground observations of hydro-meteorological data for effective water resources assessment and management. Several approaches can be developed to increase the quality and availability of data in these poorly gauged or ungauged river basins; among them, the use of Earth observations products has recently become promising. Earth observations of various environmental variables can be used potentially to increase knowledge about the hydrological processes in the basin and to improve streamflow model estimates, via assimilation or calibration. The present study aims to calibrate the large-scale hydrological model PCRaster GLOBal Water Balance (PCR-GLOBWB) using satellite-based products of evapotranspiration and soil moisture for the Moroccan Oum er Rbia River basin. Daily simulations at a spatial resolution of 5 × 5 arcmin are performed with varying parameters values for the 32-year period 1979–2010. Five different calibration scenarios are inter-compared: (i) reference scenario using the hydrological model with the standard parameterization, (ii) calibration using in situ-observed discharge time series, (iii) calibration using the Global Land Evaporation Amsterdam Model (GLEAM) actual evapotranspiration time series, (iv) calibration using ESA Climate Change Initiative (CCI) surface soil moisture time series and (v) step-wise calibration using GLEAM actual evapotranspiration and ESA CCI surface soil moisture time series. The impact on discharge estimates of precipitation in comparison with model parameters calibration is investigated using three global precipitation products, including ERA-Interim (EI), WATCH Forcing methodology applied to ERA-Interim reanalysis data (WFDEI) and Multi-Source Weighted-Ensemble Precipitation data by merging gauge, satellite and reanalysis data (MSWEP). Results show that GLEAM evapotranspiration and ESA CCI soil moisture may be used for model

  11. Calibration of a large-scale hydrological model using satellite-based soil moisture and evapotranspiration products

    Science.gov (United States)

    López López, Patricia; Sutanudjaja, Edwin H.; Schellekens, Jaap; Sterk, Geert; Bierkens, Marc F. P.

    2017-06-01

    A considerable number of river basins around the world lack sufficient ground observations of hydro-meteorological data for effective water resources assessment and management. Several approaches can be developed to increase the quality and availability of data in these poorly gauged or ungauged river basins; among them, the use of Earth observations products has recently become promising. Earth observations of various environmental variables can be used potentially to increase knowledge about the hydrological processes in the basin and to improve streamflow model estimates, via assimilation or calibration. The present study aims to calibrate the large-scale hydrological model PCRaster GLOBal Water Balance (PCR-GLOBWB) using satellite-based products of evapotranspiration and soil moisture for the Moroccan Oum er Rbia River basin. Daily simulations at a spatial resolution of 5 × 5 arcmin are performed with varying parameters values for the 32-year period 1979-2010. Five different calibration scenarios are inter-compared: (i) reference scenario using the hydrological model with the standard parameterization, (ii) calibration using in situ-observed discharge time series, (iii) calibration using the Global Land Evaporation Amsterdam Model (GLEAM) actual evapotranspiration time series, (iv) calibration using ESA Climate Change Initiative (CCI) surface soil moisture time series and (v) step-wise calibration using GLEAM actual evapotranspiration and ESA CCI surface soil moisture time series. The impact on discharge estimates of precipitation in comparison with model parameters calibration is investigated using three global precipitation products, including ERA-Interim (EI), WATCH Forcing methodology applied to ERA-Interim reanalysis data (WFDEI) and Multi-Source Weighted-Ensemble Precipitation data by merging gauge, satellite and reanalysis data (MSWEP). Results show that GLEAM evapotranspiration and ESA CCI soil moisture may be used for model calibration resulting in
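
    A generic illustration of the satellite-based calibration idea in scenario (iii), assuming a toy bucket model and a synthetic evapotranspiration series in place of PCR-GLOBWB and GLEAM; a single parameter is tuned by grid search to minimize the RMSE against the satellite-like series.

```python
# Generic illustration of scenario (iii)-style calibration (toy bucket model
# and synthetic data, standing in for PCR-GLOBWB and GLEAM): a single parameter
# is tuned by grid search to minimize the RMSE between simulated and
# satellite-like actual evapotranspiration.
import numpy as np

rng = np.random.default_rng(3)
days = 365
precip = rng.gamma(0.5, 4.0, size=days)                           # mm/day, synthetic forcing
pet = 3.0 + 2.0 * np.sin(2.0 * np.pi * np.arange(days) / 365.0)   # mm/day

def simulate_et(crop_factor, s_max=100.0):
    """Toy bucket model: actual ET limited by storage and a crop factor."""
    s, et = 50.0, np.empty(days)
    for t in range(days):
        et[t] = min(crop_factor * pet[t], s)
        s = min(max(s + precip[t] - et[t], 0.0), s_max)
    return et

# Synthetic "satellite" series generated with a known crop factor plus noise.
et_satellite = simulate_et(0.7) + 0.3 * rng.standard_normal(days)

candidates = np.linspace(0.3, 1.2, 46)
rmse = [float(np.sqrt(np.mean((simulate_et(c) - et_satellite) ** 2))) for c in candidates]
print("calibrated crop factor: %.2f (known value 0.70)" % candidates[int(np.argmin(rmse))])
```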

  12. Large-scale shell-model analysis of the neutrinoless $\beta\beta$ decay of $^{48}$Ca

    CERN Document Server

    Iwata, Y; Otsuka, T; Utsuno, Y; Menendez, J; Honma, M; Abe, T

    2016-01-01

    We present the nuclear matrix element for the neutrinoless double-beta decay of $^{48}$Ca based on large-scale shell-model calculations including two harmonic oscillator shells ($sd$ and $pf$ shells). The excitation spectra of $^{48}$Ca and $^{48}$Ti, and the two-neutrino double-beta decay of $^{48}$Ca are reproduced in good agreement with experiment. We find that the neutrinoless double-beta decay nuclear matrix element is enhanced by about 30% compared to $pf$-shell calculations. This reduces the decay lifetime by almost a factor of two. The matrix-element increase is mostly due to pairing correlations associated with cross-shell $sd$-$pf$ excitations. We also investigate possible implications for heavier neutrinoless double-beta decay candidates.

  13. Co-evolution of intelligent socio-technical systems modelling and applications in large scale emergency and transport domains

    CERN Document Server

    2013-01-01

    As the interconnectivity between humans through technical devices is becoming ubiquitous, the next step is already in the making: ambient intelligence, i.e. smart (technical) environments, which will eventually play the same active role in communication as the human players, leading to a co-evolution in all domains where real-time communication is essential. This topical volume, based on the findings of the Socionical European research project, gives equal attention to two highly relevant application domains: transport, specifically traffic dynamics viewed as a socio-technical interaction, and evacuation scenarios for large-scale emergency situations. Care was taken to investigate as much as possible the limits of scalability and to combine the modeling using complex systems science approaches with relevant data analysis.

  14. Long-term modelling of Carbon Capture and Storage, Nuclear Fusion, and large-scale District Heating

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik; Korsholm, Søren Bang; Lüthje, Mikael

    2011-01-01

    Among the technologies for mitigating greenhouse gasses, carbon capture and storage (CCS) and nuclear fusion are interesting in the long term. In several studies with time horizon 2050 CCS has been identified as an important technology, while nuclear fusion cannot become commercially available...... on nuclear fusion and the Pan European TIMES model, respectively. In the next decades CCS can be a driver for the development and expansion of large-scale district heating systems, which are currently widespread in Europe, Korea and China, and with large potentials in North America. If fusion will replace...... fossil fuel power plants with CCS in the second half of the century, the same infrastructure for heat distribution can be used which will support the penetration of both technologies. This paper will address the issue of infrastructure development and the use of CCS and fusion technologies using...

  15. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  16. Large Scale Earth’s Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    Indian Academy of Sciences (India)

    Suleiman Baraka

    2016-06-01

    In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large scale Earth’s bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth’s bow shock position is found to be $\approx 14.8 R_{\rm E}$ along the Sun–Earth line, and $\approx 29 R_{\rm E}$ on the dusk side. Those findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured $\approx 2 c/\omega_{pi}$ for $\Theta_{Bn}=90^{\circ}$ and $M_{\rm MS} = 4.7$) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be $1.7 c/\omega_{pi}$. In the foreshock region, the thermal velocity is found to be 213 km s$^{-1}$ at $15 R_{\rm E}$, and 63 km s$^{-1}$ at $12 R_{\rm E}$ (magnetosheath region). Despite the large cell size of the current version of the PIC code, it retains the macrostructure of planetary magnetospheres in a very short computation time, so it can be used for pedagogical test purposes. It is also complementary with MHD models and can deepen our understanding of the large scale magnetosphere.

  17. Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    Science.gov (United States)

    Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Malone, Adam M.; Paul, Benard P., Jr.; Woodard, Brian S.

    2016-01-01

    Icing simulation tools and computational fluid dynamics codes are reaching levels of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing conditions with increasingly less reliance on natural-icing flight testing and icing-wind-tunnel testing. Sufficient high-quality data to evaluate the performance of these tools is not currently available. The objective of this work was to generate a database of ice-accretion geometry that can be used for development and validation of icing simulation tools as well as for aerodynamic testing. Three large-scale swept wing models were built and tested at the NASA Glenn Icing Research Tunnel (IRT). The models represented the Inboard (20% semispan), Midspan (64% semispan) and Outboard stations (83% semispan) of a wing based upon a 65% scale version of the Common Research Model (CRM). The IRT models utilized a hybrid design that maintained the full-scale leading-edge geometry with a truncated afterbody and flap. The models were instrumented with surface pressure taps in order to acquire sufficient aerodynamic data to verify the hybrid model design capability to simulate the full-scale wing section. A series of ice-accretion tests were conducted over a range of total temperatures from -23.8 deg C to -1.4 deg C with all other conditions held constant. The results showed the changing ice-accretion morphology from rime ice at the colder temperatures to highly 3-D scallop ice in the range of -11.2 deg C to -6.3 deg C. Warmer temperatures generated highly 3-D ice accretion with glaze ice characteristics. The results indicated that the general scallop ice morphology was similar for all three models. Icing results were documented for limited parametric variations in angle of attack, drop size and cloud liquid-water content (LWC). The effect of velocity on ice accretion was documented for the Midspan and Outboard models for a limited number of test cases. The data suggest that

  18. The effects of physiologically plausible connectivity structure on local and global dynamics in large scale brain models.

    NARCIS (Netherlands)

    Knock, S.A.; McIntosh, A.R.; Sporns, O.; Kotter, R.; Hagmann, P.; Jirsa, V.K.

    2009-01-01

    Functionally relevant large scale brain dynamics operates within the framework imposed by anatomical connectivity and time delays due to finite transmission speeds. To gain insight on the reliability and comparability of large scale brain network simulations, we investigate the effects of variations

  19. The Large-Scale Ocean Dynamical Effect on uncertainty in the Tropical Pacific SST Warming Pattern in CMIP5 Models

    Science.gov (United States)

    Ying, Jun; Huang, Ping

    2017-04-01

    This study investigates how intermodel differences in large-scale ocean dynamics affect the tropical Pacific sea surface temperature (SST) warming (TPSW) pattern under global warming, as projected by 32 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). The largest cause of intermodel TPSW pattern differences is related to the cloud-radiation feedback. After removing the effect of cloud-radiation feedback, we find that differences in ocean advection play the next largest role, explaining around 14% of the total intermodel variance in the TPSW pattern. Of particular importance are differences in climatological zonal overturning circulation among the models. With the robust enhancement of ocean stratification across models, models with relatively strong climatological upwelling tend to have relatively weak SST warming in the eastern Pacific. Meanwhile, the pronounced intermodel differences in ocean overturning changes under global warming contribute little to uncertainty in the TPSW pattern. The intermodel differences in climatological zonal overturning are found to be associated with the intermodel spread in climatological SST. In most CMIP5 models, there is a common cold tongue bias associated with an overly strong overturning in the simulated climatology, implying a La Niña-like bias in the TPSW pattern projected by the MME of the CMIP5 models. This provides further evidence for the projection that the TPSW pattern should be closer to an El Niño-like pattern than the MME projection suggests.

  20. Fast 3-D large-scale gravity and magnetic modeling using unstructured grids and an adaptive multilevel fast multipole method

    Science.gov (United States)

    Ren, Zhengyong; Tang, Jingtian; Kalscheuer, Thomas; Maurer, Hansruedi

    2017-01-01

    A novel fast and accurate algorithm is developed for large-scale 3-D gravity and magnetic modeling problems. An unstructured grid discretization is used to approximate sources with arbitrary mass and magnetization distributions. A novel adaptive multilevel fast multipole (AMFM) method is developed to reduce the modeling time. An observation octree is constructed on a set of arbitrarily distributed observation sites, while a source octree is constructed on a source tetrahedral grid. A novel characteristic is the independence between the observation octree and the source octree, which simplifies the implementation of different survey configurations such as airborne and ground surveys. Two synthetic models, a cubic model and a half-space model with mountain-valley topography, are tested. As compared to analytical solutions of gravity and magnetic signals, excellent agreements of the solutions verify the accuracy of our AMFM algorithm. Finally, our AMFM method is used to calculate the terrain effect on an airborne gravity data set for a realistic topography model represented by a triangular surface retrieved from a digital elevation model. Using 16 threads, more than 5800 billion interactions between 1,002,001 observation points and 5,839,830 tetrahedral elements are computed in 453.6 s. A traditional first-order Gaussian quadrature approach requires 3.77 days. Hence, our new AMFM algorithm not only can quickly compute the gravity and magnetic signals for complicated problems but also can substantially accelerate the solution of 3-D inversion problems.

  1. A stochastic mathematical model to locate field hospitals under disruption uncertainty for large-scale disaster preparedness

    Directory of Open Access Journals (Sweden)

    Nezir Aydin

    2016-03-01

    Full Text Available In this study, we consider field hospital location decisions for emergency treatment points in response to large scale disasters. Specifically, we developed a two-stage stochastic model that determines the number and locations of field hospitals and the allocation of injured victims to these field hospitals. Our model considers the locations as well as the failures of the existing public hospitals while deciding on the location of field hospitals that are anticipated to be opened. The model that we developed is a variant of the P-median location model and it integrates capacity restrictions on the field hospitals that are planned to be opened as well as the disruptions that occur in existing public hospitals. We conducted experiments to demonstrate how the proposed model can be utilized in practice in a real-life case scenario. Results show the effects of the failures of existing hospitals, the level of failure probability and the capacity of the projected field hospitals on the performance of any given emergency treatment system. Crucially, it also provides an assessment of the average distance over which a victim needs to be transferred in order to be treated properly, from which the proportion of total satisfied demand is then calculated.

  2. Parallel Solvers for Finite-Difference Modeling of Large-Scale, High-Resolution Electromagnetic Problems in MRI

    Directory of Open Access Journals (Sweden)

    Hua Wang

    2008-01-01

    Full Text Available With the movement of magnetic resonance imaging (MRI) technology towards higher field (and therefore frequency) systems, the interaction of the fields generated by the system with patients, healthcare workers, and internally within the system is attracting more attention. Due to the complexity of the interactions, computational modeling plays an essential role in the analysis, design, and development of modern MRI systems. As a result of the large computational scale associated with most of the MRI models, numerical schemes that rely on a single computer processing unit often require a significant amount of memory and long computational times, which makes modeling of these problems quite inefficient. This paper presents dedicated message passing interface (MPI) and OpenMP parallel computing solvers for finite-difference time-domain (FDTD) and quasistatic finite-difference (QSFD) schemes. The FDTD and QSFD methods have been widely used to model/analyze the induction of electric fields/currents in voxel phantoms and MRI system components at high and low frequencies, respectively. The power of the optimized parallel computing architectures is illustrated by distinct, large-scale field calculation problems and shows significant computational advantages over conventional single processing platforms.

  3. Power spectrum of large-scale structure cosmological models in the framework of scalar-tensor theories

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Meza, M A, E-mail: marioalberto.rodriguez@inin.gob.m [Instituto Avanzado de Cosmologia, IAC, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F. (Mexico)

    2010-05-01

    We study large-scale structure formation in the Universe in the framework of scalar-tensor theories as an alternative to general relativity. We review briefly the Newtonian limit of non-minimally coupled scalar-tensor theories and the evolution equations of the N-body system that are appropriate for studying large-scale structure formation in the Universe. We compute the power spectrum of the Universe at the present epoch and show how the large-scale structure depends on the scalar field contribution.

  4. Conundrum of the Large Scale Streaming

    CERN Document Server

    Malm, T M

    1999-01-01

    The etiology of the large scale peculiar velocity (large scale streaming motion) of clusters appears increasingly tenuous within the context of the gravitational instability hypothesis. Are there any alternative testable models that could account for such large scale streaming of clusters?

  5. Assimilation of satellite data to optimize large-scale hydrological model parameters: a case study for the SWOT mission

    Science.gov (United States)

    Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.

    2014-11-01

    During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large-scale river routing models. The method consists of applying a data assimilation approach, the extended Kalman filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA (Interactions between Soil, Biosphere, and Atmosphere)-TRIP (Total Runoff Integrating Pathways) continental hydrologic system. Parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which leads to significant errors at reach and large scales. The current study focuses on the Niger Basin, a transboundary river. Since the SWOT observations are not available yet and also to assess the proposed assimilation method, the study is carried out under the framework of an observing system simulation experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning coefficients are then supposed to be known and are used to generate synthetic SWOT observations over the period 2002-2003. The impact of the assimilation system on the Niger Basin hydrological cycle is then quantified. The optimization of the Manning coefficient using the EKF algorithm over an 18-month period led to a significant improvement of the river water levels. The relative bias of the water level is globally improved (a 30
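
    A conceptual sketch of EKF-style correction of a Manning coefficient, assuming a toy normal-depth observation operator and synthetic water-surface-elevation observations in place of ISBA-TRIP and SWOT data; all reach properties and error variances are illustrative.

```python
# Conceptual sketch of EKF-style correction of a Manning coefficient, with a
# toy normal-depth observation operator standing in for ISBA-TRIP and synthetic
# water surface elevations standing in for SWOT data. Reach properties, error
# variances and the number of overpasses are all illustrative.
import numpy as np

def wse_from_manning(n, q=1500.0, width=300.0, slope=1e-4, z_bed=10.0):
    """Toy observation operator: normal-depth water surface elevation (m)."""
    depth = (n * q / (width * np.sqrt(slope))) ** (3.0 / 5.0)
    return z_bed + depth

n_true, n_est, p_est = 0.035, 0.060, 0.02**2     # truth, first guess, guess variance
r_obs = 0.10**2                                  # SWOT-like WSE error variance (m^2)
rng = np.random.default_rng(4)

for _ in range(18):                              # synthetic overpasses
    y_obs = wse_from_manning(n_true) + np.sqrt(r_obs) * rng.standard_normal()
    # Jacobian of the observation operator by finite differences (EKF linearization).
    jac = (wse_from_manning(n_est + 1e-4) - wse_from_manning(n_est)) / 1e-4
    gain = p_est * jac / (jac * p_est * jac + r_obs)
    n_est += gain * (y_obs - wse_from_manning(n_est))
    p_est = (1.0 - gain * jac) * p_est

print("estimated Manning n: %.4f (synthetic truth %.3f)" % (n_est, n_true))
```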

  6. A statistical model for Windstorm Variability over the British Isles based on Large-scale Atmospheric and Oceanic Mechanisms

    Science.gov (United States)

    Kirchner-Bossi, Nicolas; Befort, Daniel J.; Wild, Simon B.; Ulbrich, Uwe; Leckebusch, Gregor C.

    2016-04-01

    Time-clustered winter storms are responsible for a majority of the wind-induced losses in Europe. Over recent years, different atmospheric and oceanic large-scale mechanisms such as the North Atlantic Oscillation (NAO) or the Meridional Overturning Circulation (MOC) have been shown to drive a significant portion of the windstorm variability over Europe. In this work we systematically investigate the influence of different large-scale natural variability modes: more than 20 indices related to those mechanisms with proven or potential influence on windstorm frequency variability over Europe - mostly SST- or pressure-based - are derived from the ECMWF ERA-20C reanalysis over the last century (1902-2009) and compared to the windstorm variability for the European winter (DJF). Windstorms are defined and tracked as in Leckebusch et al. (2008). The derived indices are then employed to develop a statistical procedure including a stepwise Multiple Linear Regression (MLR) and an Artificial Neural Network (ANN), aiming to hindcast the inter-annual (DJF) regional windstorm frequency variability in a case study for the British Isles. This case study reveals 13 indices with a statistically significant coupling with seasonal windstorm counts. The Scandinavian Pattern (SCA) showed the strongest correlation (0.61), followed by the NAO (0.48) and the Polar/Eurasia Pattern (0.46). The obtained indices (standard-normalised) are selected as predictors for a windstorm variability hindcast model applied to the British Isles. First, a stepwise linear regression is performed to identify which mechanisms can best explain windstorm variability. Finally, the indices retained by the stepwise regression are used to develop a multilayer perceptron-based ANN that hindcasts seasonal windstorm frequency and clustering. Eight indices (SCA, NAO, EA, PDO, W.NAtl.SST, AMO (unsmoothed), EA/WR and Trop.N.Atl SST) are retained by the stepwise regression. Among them, SCA showed the highest linear
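
    The following sketch illustrates the two-stage statistical procedure described above: forward stepwise selection of candidate large-scale indices with a linear regression, followed by a multilayer-perceptron fit on the retained predictors. The synthetic indices, the storm-count series and the selection threshold are placeholders, not the ERA-20C-derived data.

```python
# Sketch of the two-stage hindcast: forward stepwise selection of large-scale
# indices with linear regression, then a multilayer perceptron fit on the
# retained predictors. Data are synthetic stand-ins for standardized indices
# (e.g. SCA, NAO) and seasonal windstorm counts; thresholds are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_years, n_idx = 108, 20                       # ~a century of winters, 20 candidate indices
X = rng.normal(size=(n_years, n_idx))          # standardized predictor indices
y = 2.0 * X[:, 0] + 1.2 * X[:, 3] + rng.normal(scale=1.0, size=n_years)  # storm-count anomalies

def stepwise_select(X, y, min_gain=0.01):
    """Greedy forward selection: add the index that most improves R^2."""
    selected, best_r2 = [], 0.0
    while True:
        gains = []
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            r2 = LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)
            gains.append((r2, j))
        if not gains:
            return selected
        r2, j = max(gains)
        if r2 - best_r2 < min_gain:
            return selected
        selected.append(j)
        best_r2 = r2

predictors = stepwise_select(X, y)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X[:, predictors], y)
print("retained indices:", predictors,
      " in-sample R^2:", round(ann.score(X[:, predictors], y), 2))
```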

  7. New Techniques Used in Modeling the 2017 Total Solar Eclipse: Energizing and Heating the Large-Scale Corona

    Science.gov (United States)

    Downs, Cooper; Mikic, Zoran; Linker, Jon A.; Caplan, Ronald M.; Lionello, Roberto; Torok, Tibor; Titov, Viacheslav; Riley, Pete; Mackay, Duncan; Upton, Lisa

    2017-08-01

    Over the past two decades, our group has used a magnetohydrodynamic (MHD) model of the corona to predict the appearance of total solar eclipses. In this presentation we detail recent innovations and new techniques applied to our prediction model for the August 21, 2017 total solar eclipse. First, we have developed a method for capturing the large-scale energized fields typical of the corona, namely the sheared/twisted fields built up through long-term processes of differential rotation and flux-emergence/cancellation. Using inferences of the location and chirality of filament channels (deduced from a magnetofrictional model driven by the evolving photospheric field produced by the Advective Flux Transport model), we tailor a customized boundary electric field profile that will emerge shear along the desired portions of polarity inversion lines (PILs) and cancel flux to create long twisted flux systems low in the corona. This method has the potential to improve the morphological shape of streamers in the low solar corona. Second, we apply, for the first time in our eclipse prediction simulations, a new wave-turbulence-dissipation (WTD) based model for coronal heating. This model has substantially fewer free parameters than previous empirical heating models, but is inherently sensitive to the 3D geometry and connectivity of the coronal field - a key property for modeling/predicting the thermal-magnetic structure of the solar corona. Overall, we will examine the effect of these considerations on white-light and EUV observables from the simulations, and present them in the context of our final 2017 eclipse prediction model. Research supported by NASA's Heliophysics Supporting Research and Living With a Star Programs.

  8. Thin power law film flow down an inclined plane: consistent shallow water models and stability under large scale perturbations

    CERN Document Server

    Noble, Pascal

    2012-01-01

    In this paper we derive consistent shallow water equations for thin films of power-law fluids flowing down an incline. These models account for the streamwise diffusion of momentum, which is important to describe accurately the full dynamics of thin film flows when instabilities like roll-waves arise. The models are validated through a comparison with the Orr-Sommerfeld equations for large-scale perturbations. We only consider laminar flow, for which the boundary layer issued from the interaction of the flow with the bottom surface has an influence across the whole transverse direction of the flow. In this case the very concept of a thin film and its relation with the long-wave asymptotics leads naturally to flow conditions around a uniform free-surface Poiseuille flow. The apparent viscosity diverges at the free surface which, in turn, introduces a singularity in the formulation of the Orr-Sommerfeld equations and in the derivation of shallow water models. We remove this singularity by introducing a weaker formulation of Cauc...

  9. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    Full Text Available The paper presents three time warning distances for the safe driving of multiple groups of vehicles, treated as a large-scale system, in a highway tunnel environment based on a distributed model predictive control approach. Generally speaking, the system includes two parts. First, multiple vehicles are divided into multiple groups, and the distributed model predictive control approach is proposed to calculate the information framework of each group. The optimization of each group considers both its local performance and that of the neighboring subgroups, which ensures the global optimization performance. Second, the three time warning distances are studied based on the basic principles used for highway intelligent space (HIS), and the information framework concept is proposed for the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles in fog, rain, or snow.

  10. Systems Perturbation Analysis of a Large-Scale Signal Transduction Model Reveals Potentially Influential Candidates for Cancer Therapeutics

    Science.gov (United States)

    Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš

    2016-01-01

    Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found as most influential under inactivating perturbations, whereas the kinase and small lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified in influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis
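
    A toy version of the perturbation screen described above can be sketched on a small logical (Boolean) network: each node is clamped OFF (loss of function) or ON (gain of function), the network is re-simulated, and the node is ranked by the mean change it induces in the rest of the system. The three-node network and its update rules below are invented for illustration and are not part of the published model.

```python
# Sketch of loss-/gain-of-function perturbation screening on a toy Boolean
# signalling network. Influence is scored as the total change induced in all
# other nodes, loosely following the idea described in the abstract.
import numpy as np

rules = {
    "GF":   lambda s: s["GF"],                     # extracellular growth-factor input
    "PTEN": lambda s: s["PTEN"],                   # context node, held fixed
    "PI3K": lambda s: s["GF"],
    "AKT":  lambda s: s["PI3K"] and not s["PTEN"],
    "ERK":  lambda s: s["GF"] or s["AKT"],
}

def simulate(clamp=None, steps=20, gf=1, pten=0):
    s = {n: 0 for n in rules}
    s["GF"], s["PTEN"] = gf, pten
    if clamp:
        s[clamp[0]] = clamp[1]
    history = []
    for _ in range(steps):
        new = {n: int(f(s)) for n, f in rules.items()}
        if clamp:
            new[clamp[0]] = clamp[1]               # keep the perturbed node fixed
        s = new
        history.append(dict(s))
    return {n: np.mean([h[n] for h in history]) for n in rules}

base = simulate()
influence = {}
for node in ("PI3K", "AKT", "ERK"):
    for value, kind in ((0, "loss"), (1, "gain")):
        pert = simulate(clamp=(node, value))
        influence[f"{node} {kind}"] = sum(abs(pert[m] - base[m]) for m in rules if m != node)

for name, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} influence = {score:.2f}")
```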

  11. A theoretical model for the evolution of two-dimensional large-scale coherent structures in a mixing layer

    Institute of Scientific and Technical Information of China (English)

    周恒; 马良

    1995-01-01

    By a proper combination of the modified weakly nonlinear theory of hydrodynamic stability and the energy method, the spatial evolution of the large-scale coherent structures in a mixing layer has been calculated. The results are satisfactory.

  12. Study of materials and machines for 3D printed large-scale, flexible electronic structures using fused deposition modeling

    Science.gov (United States)

    Hwang, Seyeon

    Three-dimensional printing (3DP), also called additive manufacturing (AM) or rapid prototyping (RP), has emerged to revolutionize manufacturing and completely transform how products are designed and fabricated. A great deal of research activity has been carried out to apply this new technology to a variety of fields. In spite of many endeavors, much more research is still required to perfect the processes of 3D printing techniques, especially in the areas of large-scale additive manufacturing and flexible printed electronics. The principles of various 3D printing processes are briefly outlined in the Introduction section. New types of thermoplastic polymer composites aimed at specific functional applications are also introduced in this section. Chapter 2 presents studies of metal/polymer composite filaments for the fused deposition modeling (FDM) process. Various metal particles, namely copper and iron particles, are added into thermoplastic polymer matrices as the reinforcement filler. The thermo-mechanical properties of the composites, such as thermal conductivity, hardness, tensile strength, and fracture mechanism, are tested to determine the effects of metal fillers on 3D printed composite structures for the large-scale printing process. In Chapter 3, carbon/polymer composite filaments are developed by a simple mechanical blending process with the aim of fabricating flexible 3D printed electronics as a single structure. Various types of carbon particles, consisting of multi-wall carbon nanotubes (MWCNT), conductive carbon black (CCB), and graphite, are used as conductive fillers to provide the thermoplastic polyurethane (TPU) with improved electrical conductivity. The mechanical behavior and conduction mechanisms of the developed composite materials are examined in terms of the loading amount of carbon fillers in this section. Finally, the prototype flexible electronics are modeled and manufactured by the FDM process using Carbon/TPU composite filaments and

  13. Context-dependent encoding of fear and extinction memories in a large-scale network model of the basal amygdala.

    Directory of Open Access Journals (Sweden)

    Ioannis Vlachos

    2011-03-01

    Full Text Available The basal nucleus of the amygdala (BA) is involved in the formation of context-dependent conditioned fear and extinction memories. To understand the underlying neural mechanisms we developed a large-scale neuron network model of the BA, composed of excitatory and inhibitory leaky-integrate-and-fire neurons. Excitatory BA neurons received conditioned stimulus (CS)-related input from the adjacent lateral nucleus (LA) and contextual input from the hippocampus or medial prefrontal cortex (mPFC). We implemented a plasticity mechanism according to which CS and contextual synapses were potentiated if CS and contextual inputs temporally coincided on the afferents of the excitatory neurons. Our simulations revealed a differential recruitment of two distinct subpopulations of BA neurons during conditioning and extinction, mimicking the activation of experimentally observed cell populations. We propose that these two subgroups encode contextual specificity of fear and extinction memories, respectively. Mutual competition between them, mediated by feedback inhibition and driven by contextual inputs, regulates the activity in the central amygdala (CEA), thereby controlling amygdala output and fear behavior. The model makes multiple testable predictions that may advance our understanding of fear and extinction memories.
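
    A minimal sketch of the model's building block is given below, assuming a single excitatory leaky integrate-and-fire unit that receives a CS-like and a context-like input and potentiates both synapses when the inputs coincide. The constants and the plasticity rule are illustrative assumptions, not the published network parameters.

```python
# Minimal leaky integrate-and-fire (LIF) sketch: one excitatory "BA" unit
# driven by a CS-like and a context-like input, with coincidence-based
# potentiation of both synapses. All constants are illustrative.
import numpy as np

dt, T = 1e-3, 2.0                        # time step (s), simulated duration (s)
tau_m, v_rest, v_thr, v_reset = 20e-3, -70e-3, -50e-3, -65e-3
t = np.arange(0.0, T, dt)

cs_on  = (t % 0.5) < 0.1                 # CS input pulses of 100 ms every 500 ms
ctx_on = t > 1.0                         # contextual input switches on at t = 1 s
w_cs, w_ctx, lr = 1.0e-9, 1.0e-9, 5e-12  # synaptic weights (A) and learning rate

v, spikes = v_rest, []
for i, ti in enumerate(t):
    if cs_on[i] and ctx_on[i]:           # coincidence-based potentiation
        w_cs += lr
        w_ctx += lr
    I = w_cs * cs_on[i] + w_ctx * ctx_on[i]           # input current (A)
    v += (-(v - v_rest) + I * 200e6) * dt / tau_m     # 200 MOhm membrane resistance
    if v >= v_thr:                                    # spike and reset
        spikes.append(ti)
        v = v_reset

print(f"spikes: {len(spikes)}, final weights: w_cs={w_cs:.2e} A, w_ctx={w_ctx:.2e} A")
```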

  14. Large Scale Earth's Bow Shock with Northern IMF as simulated by PIC code in parallel with MHD model

    CERN Document Server

    Baraka, Suleiman M

    2016-01-01

    In this paper, we propose a 3D kinetic (Particle-in-Cell, PIC) model for the description of the large-scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ~14.8 RE along the Sun-Earth line, and ~29 RE on the dusk side. Those findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted...

  15. (Studies of ocean predictability at decade to century time scales using a global ocean general circulation model in a parallel computing environment). [Large Scale Geostrophic Model

    Energy Technology Data Exchange (ETDEWEB)

    1992-03-10

    The first phase of the proposed work is largely completed on schedule. Scientists at the San Diego Supercomputer Center (SDSC) succeeded in putting a version of the Hamburg isopycnal coordinate ocean model (OPYC) onto the INTEL parallel computer. Due to the slow run speeds of the OPYC on the parallel machine, another ocean model is being used during the first part of phase 2. The model chosen is the Large Scale Geostrophic (LSG) model from the Max Planck Institute.

  16. Application of Large-Scale, Multi-Resolution Watershed Modeling Framework Using the Hydrologic and Water Quality System (HAWQS)

    Directory of Open Access Journals (Sweden)

    Haw Yen

    2016-04-01

    Full Text Available In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources allocation, sediment transport, and pollution control. Among commonly adopted models, the Soil and Water Assessment Tool (SWAT) has been demonstrated to provide superior performance with a large amount of referencing databases. However, it is cumbersome to perform tedious initialization steps such as preparing inputs and developing a model with each changing targeted study area. In this study, the Hydrologic and Water Quality System (HAWQS) is introduced to serve as a national-scale Decision Support System (DSS) to conduct challenging watershed modeling tasks. HAWQS is a web-based DSS developed and maintained by Texas A & M University, and supported by the U.S. Environmental Protection Agency. Three different spatial resolutions of Hydrologic Unit Code (HUC8, HUC10, and HUC12) and three temporal scales (time steps in daily/monthly/annual) are available as alternatives for general users. In addition, users can specify preferred values of model parameters instead of using the pre-defined sets. With the aid of HAWQS, users can generate a preliminarily calibrated SWAT project within a few minutes by only providing the ending HUC number of the targeted watershed and the simulation period. In the case study, HAWQS was implemented on the Illinois River Basin, USA, with graphical demonstrations and associated analytical results. Scientists and/or decision-makers can take advantage of the HAWQS framework while conducting relevant topics or policies in the future.

  17. Evaluation of large-scale meteorological patterns associated with temperature extremes in the NARCCAP regional climate model simulations

    Science.gov (United States)

    Loikith, Paul C.; Waliser, Duane E.; Lee, Huikyo; Neelin, J. David; Lintner, Benjamin R.; McGinnis, Seth; Mearns, Linda O.; Kim, Jinwon

    2015-12-01

    Large-scale meteorological patterns (LSMPs) associated with temperature extremes are evaluated in a suite of regional climate model (RCM) simulations contributing to the North American Regional Climate Change Assessment Program. LSMPs are characterized through composites of surface air temperature, sea level pressure, and 500 hPa geopotential height anomalies concurrent with extreme temperature days. Six of the seventeen RCM simulations are driven by boundary conditions from reanalysis while the other eleven are driven by one of four global climate models (GCMs). Four illustrative case studies are analyzed in detail. Model fidelity in LSMP spatial representation is high for cold winter extremes near Chicago. Winter warm extremes are captured by most RCMs in northern California, with some notable exceptions. Model fidelity is lower for cool summer days near Houston and extreme summer heat events in the Ohio Valley. Physical interpretation of these patterns and identification of well-simulated cases, such as for Chicago, boosts confidence in the ability of these models to simulate days in the tails of the temperature distribution. Results appear consistent with the expectation that the ability of an RCM to reproduce a realistically shaped frequency distribution for temperature, especially at the tails, is related to its fidelity in simulating LMSPs. Each ensemble member is ranked for its ability to reproduce LSMPs associated with observed warm and cold extremes, identifying systematically high performing RCMs and the GCMs that provide superior boundary forcing. The methodology developed here provides a framework for identifying regions where further process-based evaluation would improve the understanding of simulation error and help guide future model improvement and downscaling efforts.
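
    The composite construction behind an LSMP analysis can be sketched in a few lines: average a gridded field over the days on which a local temperature series exceeds a high percentile. The synthetic data, grid size and the 95th-percentile threshold below are assumptions for illustration only.

```python
# Sketch of an LSMP-style composite: average a synthetic 500 hPa height
# anomaly field over the days on which a local temperature series exceeds
# its 95th percentile. Data shapes and the threshold are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_days, ny, nx = 3650, 40, 60                      # ~10 years of daily fields on a 40x60 grid
z500 = rng.normal(size=(n_days, ny, nx))           # geopotential height anomalies (synthetic)
t2m_local = 0.5 * z500[:, 20, 30] + rng.normal(size=n_days)   # co-varying local temperature

hot_days = t2m_local > np.percentile(t2m_local, 95)
composite = z500[hot_days].mean(axis=0)            # LSMP composite for warm extremes
print(f"{hot_days.sum()} extreme days; composite anomaly at the station point: "
      f"{composite[20, 30]:.2f} (domain mean {composite.mean():.2f})")
```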

  18. The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models

    Science.gov (United States)

    Karssenberg, D.; Wanders, N.; de Roo, A.; de Jong, S.; Bierkens, M. F.

    2013-12-01

    Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system that is not directly linked to discharge, in particular the unsaturated zone, remains uncalibrated, or might be modified unrealistically. Soil moisture observations from satellites have the potential to fill this gap, as these provide the closest thing to a direct measurement of the state of the unsaturated zone, and thus are potentially useful in calibrating unsaturated zone model parameters. This is expected to result in a better identification of the complete hydrological system, potentially leading to improved forecasts of the hydrograph as well. Here we evaluate this added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: 1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? 2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to approaches that calibrate only with discharge, such that this leads to improved forecasts of soil moisture content and discharge as well? To answer these questions we use a dual state and parameter ensemble Kalman filter to calibrate the hydrological model LISFLOOD for the Upper Danube area. Calibration is done with discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS and ASCAT. Four scenarios are studied: no calibration (expert knowledge), calibration on discharge, calibration on remote sensing data (three satellites) and calibration on both discharge and remote sensing data. Using a split-sample approach, the model is calibrated for a period of 2 years and validated for the calibrated model parameters on a validation period of 10 years. Results show that calibration with discharge data improves the estimation of groundwater parameters (e.g., groundwater reservoir constant) and

  19. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    Science.gov (United States)

    Defourny, P.

    2013-12-01

    such the Green Area Index (GAI), fAPAR and fcover usually retrieved from MODIS, MERIS, SPOT-Vegetation described the quality of the green vegetation development. The GLOBAM (Belgium) and EU FP-7 MOCCCASIN projects (Russia) improved the standard products and were demonstrated over large scale. The GAI retrieved from MODIS time series using a purity index criterion depicted successfully the inter-annual variability. Furthermore, the quantitative assimilation of these GAI time series into a crop growth model improved the yield estimate over years. These results showed that the GAI assimilation works best at the district or provincial level. In the context of the GEO Ag., the Joint Experiment of Crop Assessment and Monitoring (JECAM) was designed to enable the global agricultural monitoring community to compare such methods and results over a variety of regional cropping systems. For a network of test sites around the world, satellite and field measurements are currently collected and will be made available for collaborative effort. This experiment should facilitate international standards for data products and reporting, eventually supporting the development of a global system of systems for agricultural crop assessment and monitoring.

  20. Model Research of Gas Emissions From Lignite and Biomass Co-Combustion in a Large Scale CFB Boiler

    Directory of Open Access Journals (Sweden)

    Krzywański Jarosław

    2014-06-01

    Full Text Available The paper is focused on combustion modelling of a large-scale circulating fluidised bed (CFB) boiler during coal and biomass co-combustion. Numerical computation results for the co-combustion of three solid biomass fuels with lignite are presented in the paper. The results of the calculations showed that some reactions in previously established kinetic equations for coal combustion had to be modified as the combustion conditions changed with the fuel blend composition. The obtained CO2, CO, SO2 and NOx emissions are within ±20% of the experimental data. Experimental data were obtained from forest biomass, sunflower husk, willow and lignite co-combustion tests carried out on the atmospheric 261 MWe COMPACT CFB boiler operated in the PGE Turow Power Station in Poland. The energy fraction of biomass in the fuel blend was 7%wt, 10%wt and 15%wt. The measured emissions of CO, SO2 and NOx (i.e. NO + NO2) are also shown in the paper. For all types of biomass added to the fuel blends, the emission of the gaseous pollutants was lower than that for coal combustion.

  1. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...
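
    A stripped-down discrete event simulation of the kind described above might look as follows: event fragments arrive in a buffer, a fixed pool of processing units drains it, and buffer occupancy is tracked over time. The arrival rate, service time and number of units are invented numbers, not ATLAS DAQ parameters.

```python
# Sketch of a discrete-event model: fragments arrive in a buffer and are
# drained by a fixed pool of processing units; buffer occupancy is tracked
# over time. Rates and sizes are illustrative only.
import heapq
import random

random.seed(0)
ARRIVAL_RATE = 90_000.0      # events per second entering the buffer
SERVICE_TIME = 100e-6        # seconds per event and processing unit
N_UNITS      = 10            # processing units draining the buffer
T_END        = 1.0           # simulated seconds

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
buffer_len, busy, occupancy_trace = 0, 0, []

while events:
    t, kind = heapq.heappop(events)
    if t > T_END:
        break
    if kind == "arrival":
        buffer_len += 1
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
    else:                    # a processing unit finished its event
        busy -= 1
    # start service on queued events whenever a unit is free
    while buffer_len > 0 and busy < N_UNITS:
        buffer_len -= 1
        busy += 1
        heapq.heappush(events, (t + SERVICE_TIME, "done"))
    occupancy_trace.append((t, buffer_len))

peak = max(n for _, n in occupancy_trace)
mean = sum(n for _, n in occupancy_trace) / len(occupancy_trace)
print(f"peak buffer occupancy: {peak}, mean occupancy: {mean:.1f}")
```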

  2. Large-scale dynamical influence of a gravity wave generated over the Antarctic Peninsula – regional modelling and budget analysis

    Directory of Open Access Journals (Sweden)

    JOEL Arnault

    2013-03-01

    Full Text Available The case study of a mountain wave triggered by the Antarctic Peninsula on 6 October 2005, which has already been documented in the literature, is chosen here to quantify the associated gravity wave forcing on the large-scale flow, with a budget analysis of the horizontal wind components and horizontal kinetic energy. In particular, a numerical simulation using the Weather Research and Forecasting (WRF) model is compared to a control simulation with flat orography to separate the contribution of the mountain wave from that of other synoptic processes of non-orographic origin. The so-called differential budgets of horizontal wind components and horizontal kinetic energy (after subtracting the results from the simulation without orography) are then averaged horizontally and vertically in the inner domain of the simulation to quantify the mountain wave dynamical influence at this scale. This allows for a quantitative analysis of the simulated mountain wave's dynamical influence, including the orographically induced pressure drag, the counterbalancing wave-induced vertical transport of momentum from the flow aloft, the momentum and energy exchanges with the outer flow at the lateral and upper boundaries, the effect of turbulent mixing, the dynamics associated with geostrophic re-adjustment of the inner flow, the deceleration of the inner flow, the secondary generation of an inertia–gravity wave and the so-called baroclinic conversion of energy between potential energy and kinetic energy.

  3. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...

  4. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    World-famous architects today challenge the exposure of concrete in their architecture. It is my hope to be able to complement these efforts. I try to develop new aesthetic potentials for concrete and ceramics, at large scales that have not been seen before in the ceramic area. It is expected to result...

  5. Large-scale Watershed Modeling: NHDPlus Resolution with Achievable Conservation Scenarios in the Western Lake Erie Basin

    Science.gov (United States)

    Yen, H.; White, M. J.; Arnold, J. G.; Keitzer, S. C.; Johnson, M. V. V.; Atwood, J. D.; Daggupati, P.; Herbert, M. E.; Sowa, S. P.; Ludsin, S.; Robertson, D. M.; Srinivasan, R.; Rewa, C. A.

    2016-12-01

    With the substantial improvement of computer technology, large-scale watershed modeling has become practically feasible for conducting detailed investigations of hydrologic, sediment, and nutrient processes. In the Western Lake Erie Basin (WLEB), water quality issues caused by anthropogenic activities are not just interesting research subjects but have implications related to human health and welfare, as well as ecological integrity, resistance, and resilience. In this study, the Soil and Water Assessment Tool (SWAT) and the finest resolution stream network, NHDPlus, were implemented on the WLEB to examine the interactions between achievable conservation scenarios and corresponding additional projected costs. During the calibration/validation processes, both hard (temporal) and soft (non-temporal) data were used to ensure the modeling outputs are coherent with actual watershed behavior. The results showed that widespread adoption of conservation practices intended to provide erosion control could deliver average reductions of sediment and nutrients without additional nutrient management changes. On the other hand, responses of nitrate (NO3) and dissolved inorganic phosphorus (DIP) dynamics may be different than responses of total nitrogen and total phosphorus dynamics under the same conservation practice. Model results also implied that fewer financial resources are required to achieve conservation goals if the goal is to achieve reductions in targeted watershed outputs (e.g., NO3 or DIP) rather than aggregated outputs (e.g., total nitrogen or total phosphorus). In addition, it was found that the model's capacity to simulate seasonal effects and responses to changing conservation adoption on a seasonal basis could provide a useful index to help alleviate additional cost through temporal targeting of conservation practices. Scientists, engineers, and stakeholders can take advantage of the work performed in this study as essential information while conducting policy

  6. Observed reductions in Schistosoma mansoni transmission from large-scale administration of praziquantel in Uganda: a mathematical modelling study.

    Directory of Open Access Journals (Sweden)

    Michael D French

    Full Text Available BACKGROUND: To date schistosomiasis control programmes based on chemotherapy have largely aimed at controlling morbidity in treated individuals rather than at suppressing transmission. In this study, a mathematical modelling approach was used to estimate reductions in the rate of Schistosoma mansoni reinfection following annual mass drug administration (MDA) with praziquantel in Uganda over four years (2003-2006). In doing this we aim to elucidate the benefits of MDA in reducing community transmission. METHODS: Age-structured models were fitted to a longitudinal cohort followed up across successive rounds of annual treatment for four years (baseline: 2003, treatment: 2004-2006; n = 1,764). Instead of modelling contamination, infection and immunity processes separately, these functions were combined in order to estimate a composite force of infection (FOI), i.e., the rate of parasite acquisition by hosts. RESULTS: MDA achieved substantial and statistically significant reductions in the FOI following one round of treatment in areas of low baseline infection intensity, and following two rounds in areas with high and medium intensities. In all areas, the FOI remained suppressed following a third round of treatment. CONCLUSIONS/SIGNIFICANCE: This study represents one of the first attempts to monitor reductions in the FOI within a large-scale MDA schistosomiasis morbidity control programme in sub-Saharan Africa. The results indicate that the Schistosomiasis Control Initiative, as a model for other MDA programmes, is likely exerting a significant ancillary impact on reducing transmission within the community, and may provide health benefits to those who do not receive treatment. The results obtained will have implications for evaluating the cost-effectiveness of schistosomiasis control programmes and the design of monitoring and evaluation approaches in general.
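
    As a much-simplified stand-in for the age-structured models used in the study, the sketch below fits a simple catalytic model, P(a) = 1 - exp(-lambda a), to synthetic age-prevalence data before and after treatment, illustrating how a composite force of infection can be estimated from survey data.

```python
# Sketch of estimating a force of infection (FOI) from age-prevalence data
# with a simple catalytic model fitted separately to baseline and
# post-treatment surveys. The data are synthetic and the model is a
# deliberate simplification of the age-structured models used in the study.
import numpy as np
from scipy.optimize import curve_fit

def catalytic(age, foi):
    return 1.0 - np.exp(-foi * age)

rng = np.random.default_rng(3)
ages = np.arange(5, 50, 5)
foi_baseline, foi_after = 0.20, 0.08                           # per-year "true" acquisition rates
prev_baseline = catalytic(ages, foi_baseline) + rng.normal(0, 0.02, ages.size)
prev_after    = catalytic(ages, foi_after)    + rng.normal(0, 0.02, ages.size)

for label, prev in [("baseline", prev_baseline), ("after 2 rounds of MDA", prev_after)]:
    (foi_hat,), _ = curve_fit(catalytic, ages, prev, p0=[0.1])
    print(f"{label:22s} estimated FOI = {foi_hat:.3f} per year")
```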

  7. Investigation of Prediction Accuracy, Sensitivity, and Parameter Stability of Large-Scale Propagation Path Loss Models for 5G Wireless Communications

    DEFF Research Database (Denmark)

    Sun, Shu; Rappaport, Theodore S.; Thomas, Timothy

    2016-01-01

    This paper compares three candidate large-scale propagation path loss models for use over the entire microwave and millimeter-wave (mmWave) radio spectrum: the alpha–beta–gamma (ABG) model, the close-in (CI) free-space reference distance model, and the CI model with a frequency-weighted path loss...
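
    For reference, the close-in (CI) free-space reference distance model named above has a single parameter, the path loss exponent n: PL(d) = FSPL(f, 1 m) + 10 n log10(d / 1 m) + shadow fading. The sketch below estimates n by least squares from synthetic measurements; the carrier frequency, distances and shadowing level are illustrative assumptions.

```python
# Sketch of the close-in (CI) free-space reference distance path loss model
# and a least-squares estimate of its path loss exponent n from synthetic
# measurements. All numerical values are illustrative.
import numpy as np

C = 299_792_458.0                                   # speed of light (m/s)

def fspl_1m_db(freq_hz):
    """Free-space path loss at the 1 m reference distance."""
    return 20.0 * np.log10(4.0 * np.pi * freq_hz / C)

def ci_path_loss_db(d_m, freq_hz, n):
    return fspl_1m_db(freq_hz) + 10.0 * n * np.log10(d_m)

rng = np.random.default_rng(4)
freq = 28e9                                         # 28 GHz mmWave carrier (assumed)
d = rng.uniform(10.0, 500.0, 200)                   # Tx-Rx separations (m)
measured = ci_path_loss_db(d, freq, n=2.9) + rng.normal(0.0, 8.0, d.size)  # 8 dB shadowing

# Closed-form least-squares estimate of n (the single free parameter of the CI model).
x = 10.0 * np.log10(d)
n_hat = np.sum((measured - fspl_1m_db(freq)) * x) / np.sum(x ** 2)
print(f"estimated path loss exponent n = {n_hat:.2f}")
```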

  8. Conceptual Numerical Modeling of Large-Scale Footwall Behavior at the Kiirunavaara Mine, and Implications for Deformation Monitoring

    Science.gov (United States)

    Svartsjaern, M.; Saiang, D.; Nordlund, E.; Eitzenberger, A.

    2016-03-01

    Over the last 30 years, the Kiirunavaara mine has experienced a slow but progressive fracturing and movement in the footwall rock mass, which is directly related to the sublevel caving (SLC) method utilized by Luossavaara-Kiirunavaara Aktiebolag (LKAB). As part of an ongoing work, this paper focuses on describing and explaining a likely evolution path of large-scale fracturing in the Kiirunavaara footwall. The trace of this fracturing was based on a series of damage mapping campaigns carried out over the last 2 years, accompanied by numerical modeling. Data collected from the damage mapping between mine levels 320 and 907 m was used to create a 3D surface representing a conceptual boundary for the extent of the damaged volume. The extent boundary surface was used as the basis for calibrating conceptual numerical models created in UDEC. The mapping data, in combination with the numerical models, indicated a plausible evolution path of the footwall fracturing that was subsequently described. Between levels 320 and 740 m, the extent of fracturing into the footwall appears to be controlled by natural pre-existing discontinuities, while below 740 m, there are indications of a curved shear or step-path failure. The step-path is hypothesized to be activated by rock mass heave into the SLC zone above the current extraction level. Above the 320 m level, the fracturing seems to intersect a subvertical structure that daylights in the old open pit slope. Identification of these probable damage mechanisms was an important step in order to determine the requirements for a monitoring system for tracking footwall damage. This paper describes the background work for the design of the system currently being installed.

  9. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    Science.gov (United States)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick; Bradu, Benjamin

    2014-01-01

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection such as those induced by a turbine or a compressor stop, a key-aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to have precise control of every pressure in normal operation or to stabilize and control the cryoplant under high variation of thermal loads (such as a pulsed heat load expected to take place in future fusion reactors such as those expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details how to set the WCS model up to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller has been implemented on a Schneider PLC and fully tested first on the CERN's real-time simulator. Then, it was experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of start and stop of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
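
    A minimal sketch of synthesizing a Linear Quadratic optimal feedback gain for a small linear surrogate plant is shown below. The 2x2 system matrices and weights are invented for illustration; they do not represent the actual warm compression station model.

```python
# Sketch of a Linear Quadratic optimal state-feedback synthesis for a toy
# coupled-pressure plant: dx/dt = A x + B u, cost = int(x'Qx + u'Ru) dt.
# Matrices and weights are illustrative stand-ins.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-0.5,  0.1],
              [ 0.2, -0.3]])
B = np.array([[1.0, 0.0],
              [0.0, 0.5]])
Q = np.diag([10.0, 10.0])      # penalize pressure deviations
R = np.diag([1.0, 1.0])        # penalize actuator effort

P = solve_continuous_are(A, B, Q, R)    # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)         # optimal gain: u = -K x
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("LQ gain K =\n", np.round(K, 3))
print("closed-loop eigenvalues:", np.round(closed_loop_eigs, 3))
```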

  10. Self-sustaining non-repetitive activity in a large scale neuronal-level model of the hippocampal circuit.

    Science.gov (United States)

    Scorcioni, Ruggero; Hamilton, David J; Ascoli, Giorgio A

    2008-10-01

    to reproduce the full behavioral complexity of the large-scale model. Thus network size, cell class diversity, and connectivity details may all be critical to generate self-sustained non-repetitive activity patterns.

  11. Business Model for the Security of a Large-Scale PACS, Compliance with ISO/27002:2013 Standard.

    Science.gov (United States)

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto

    2015-08-01

    Data security is a critical issue in an organization; a proper information security management (ISM) is an ongoing process that seeks to build and maintain programs, policies, and controls for protecting information. A hospital is one of the most complex organizations, where patient information has not only legal and economic implications but, more importantly, an impact on the patient's health. Imaging studies include medical images, patient identification data, and proprietary information of the study; these data are contained in the storage device of a PACS. This system must preserve the confidentiality, integrity, and availability of patient information. There are techniques such as firewalls, encryption, and data encapsulation that contribute to the protection of information. In addition, the Digital Imaging and Communications in Medicine (DICOM) standard and the requirements of the Health Insurance Portability and Accountability Act (HIPAA) regulations are also used to protect the patient clinical data. However, these techniques are not systematically applied to the picture archiving and communication system (PACS) in most cases and are not sufficient to ensure the integrity of the images and associated data during transmission. The ISO/IEC 27001:2013 standard has been developed to improve the ISM. Currently, health institutions lack effective ISM processes that enable reliable interorganizational activities. In this paper, we present a business model that accomplishes the controls of the ISO/IEC 27002:2013 standard and criteria of security and privacy from DICOM and HIPAA to improve the ISM of a large-scale PACS. The methodology associated with the model can monitor the flow of data in a PACS, facilitating the detection of unauthorized access to images and other abnormal activities.

  12. Testing gravity on Large Scales

    OpenAIRE

    Raccanelli Alvise

    2013-01-01

    We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep...

  13. System-Level Modeling and Synthesis Techniques for Flow-Based Microfluidic Very Large Scale Integration Biochips

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan

    high densities, e.g., 1 million valves per cm2. By combining these valves, more complex units such as mixers, switches, multiplexers can be built up and the technology is therefore referred to as microfluidic Very Large Scale Integration (mVLSI). The manufacturing technology for the mVLSI biochips has...

  14. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    Energy Technology Data Exchange (ETDEWEB)

    Baldick, Ross; Webber, Michael; King, Carey; Garrison, Jared; Cohen, Stuart; Lee, Duehee

    2012-12-21

    This study's objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate up to and over 20% wind generation by 2030. Our specific interest is to look at the factors that will affect the implementation of both a high level of wind power penetration (>20% of generation) and the installation of large-scale storage.

  15. Ecological niche modeling as a new paradigm for large-scale investigations of diversity and distribution of birds

    Science.gov (United States)

    A. Townsend Peterson; Daniel A. Kluza

    2005-01-01

    Large-scale assessments of the distribution and diversity of birds have been challenged by the need for a robust methodology for summarizing or predicting species' geographic distributions (e.g. Beard et al. 1999, Manel et al. 1999, Saveraid et al. 2001). Methodologies used in such studies have at times been inappropriate, or even more frequently limited in their...

  16. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = 0.5⟨u′_i²⟩, where u′_i represents the three Cartesian components of a mesoscale circulation (the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and the tilde indicates the corresponding large-scale mean value). A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes

  18. Large-scale determinants of diversity across Spanish forest habitats: accounting for model uncertainty in compositional and structural indicators

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Quller, E.; Torras, O.; Alberdi, I.; Solana, J.; Saura, S.

    2011-07-01

    An integral understanding of forest biodiversity requires the exploration of the many aspects it comprises and of the numerous potential determinants of their distribution. The landscape ecological approach provides a necessary complement to conventional local studies that focus on individual plots or forest ownerships. However, most previous landscape studies used equally-sized cells as units of analysis to identify the factors affecting forest biodiversity distribution. Stratification of the analysis by habitats with a relatively homogeneous forest composition might be more adequate to capture the underlying patterns associated with the formation and development of a particular ensemble of interacting forest species. Here we used a landscape perspective in order to improve our understanding of the influence of large-scale explanatory factors on forest biodiversity indicators in Spanish habitats, covering a wide latitudinal and altitudinal range. We considered six forest biodiversity indicators estimated from more than 30,000 field plots in the Spanish national forest inventory, distributed in 213 forest habitats over 16 Spanish provinces. We explored biodiversity response to various environmental (climate and topography) and landscape configuration (fragmentation and shape complexity) variables through multiple linear regression models (built and assessed through the Akaike Information Criterion). In particular, we took into account the inherent model uncertainty when dealing with a complex and large set of variables, and considered different plausible models and their probability of being the best candidate for the observed data. Our results showed that compositional indicators (species richness and diversity) were mostly explained by environmental factors. Models for structural indicators (standing deadwood and stand complexity) had the worst fits and selection uncertainties, but did show significant associations with some configuration metrics. In general
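
    The model-uncertainty treatment described above (ranking plausible regression models by their probability of being the best candidate) can be sketched with AIC and Akaike weights. The predictor names and data below are synthetic placeholders, not the Spanish forest inventory variables.

```python
# Sketch of multi-model inference: fit candidate linear models, compute AIC,
# and convert AIC differences into Akaike weights (the probability of each
# candidate being the best of the set). Data are synthetic placeholders.
import itertools
import numpy as np

rng = np.random.default_rng(5)
n = 200
climate, topography, fragmentation = rng.normal(size=(3, n))
richness = 2.0 + 1.5 * climate + 0.5 * topography + rng.normal(scale=1.0, size=n)
predictors = {"climate": climate, "topography": topography, "fragmentation": fragmentation}

def aic_of(subset):
    X = np.column_stack([np.ones(n)] + [predictors[p] for p in subset])
    beta, *_ = np.linalg.lstsq(X, richness, rcond=None)
    rss = np.sum((richness - X @ beta) ** 2)
    k = X.shape[1] + 1                        # coefficients + error variance
    return n * np.log(rss / n) + 2 * k        # Gaussian-likelihood AIC (up to a constant)

models = [c for r in range(1, 4) for c in itertools.combinations(predictors, r)]
aics = np.array([aic_of(m) for m in models])
delta = aics - aics.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
for m, w in sorted(zip(models, weights), key=lambda kv: -kv[1]):
    print(f"{'+'.join(m):35s} Akaike weight = {w:.2f}")
```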

  19. Which spatial discretization for distributed hydrological models? Proposition of a methodology and illustration for medium to large-scale catchments

    Directory of Open Access Journals (Sweden)

    J. Dehotin

    2008-05-01

    modelling units (third level of discretization).

    The first part of the paper presents a review about catchment discretization in hydrological models from which we derived the principles of our general methodology. The second part of the paper focuses on the derivation of hydro-landscape units for medium to large scale catchments. For this sub-catchment discretization, we propose the use of principles borrowed from landscape classification. These principles are independent of the catchment size. They allow retaining suitable features required in the catchment description in order to fulfil a specific modelling objective. The method leads to unstructured and homogeneous areas within the sub-catchments, which can be used to derive modelling meshes. It avoids map smoothing by suppressing the smallest units, the role of which can be very important in hydrology, and provides a confidence map (the distance map for the classification). The confidence map can be used for further uncertainty analysis of modelling results. The final discretization remains consistent with the resolution of input data and that of the source maps. The last part of the paper illustrates the method using available data for the upper Saône catchment in France. The interest of the method for an efficient representation of landscape heterogeneity is illustrated by a comparison with more traditional mapping approaches. Examples of possible models, which can be built on this spatial discretization, are finally given as perspectives for the work.

  20. Intelligent Adjustment of Printhead Driving Waveform Parameters for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

    Full Text Available In practical applications of 3D electronic printing, a major challenge is to adjust the printhead for high print resolution and accuracy. However, an exhausting manual selection process inevitably wastes a lot of time. Therefore, in this paper, we propose a new intelligent adjustment method, which adopts the artificial bee colony algorithm to optimize the printhead driving waveform parameters in order to reach the desired printhead state. Experimental results show that this method can quickly and accurately find a suitable combination of driving waveform parameters to meet the needs of applications.
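
    A minimal artificial bee colony loop for this kind of waveform tuning might look as follows. The three parameters (voltage, dwell time, rise time), their bounds and the quadratic cost standing in for the "desired printhead state" are assumptions for illustration; in practice the cost would come from measured drop velocity, drop volume and satellite behaviour.

```python
# Sketch of tuning printhead driving-waveform parameters with a minimal
# artificial bee colony (ABC) loop. The objective is a made-up smooth cost
# with an assumed optimum, not a real print-quality measurement.
import numpy as np

rng = np.random.default_rng(6)
lower = np.array([15.0, 3.0, 1.0])     # volts, dwell (us), rise (us) - assumed bounds
upper = np.array([40.0, 12.0, 5.0])
target = np.array([24.0, 7.5, 2.0])    # hypothetical ideal waveform

def cost(x):                           # placeholder for a measured print-quality metric
    return np.sum(((x - target) / (upper - lower)) ** 2)

n_sources, limit, n_cycles = 10, 15, 200
sources = rng.uniform(lower, upper, size=(n_sources, 3))
costs = np.array([cost(s) for s in sources])
trials = np.zeros(n_sources, dtype=int)

def try_neighbor(i):
    """Perturb one parameter of source i toward/away from a random partner."""
    k = (i + 1 + rng.integers(n_sources - 1)) % n_sources   # partner index, k != i
    j = rng.integers(3)
    cand = sources[i].copy()
    cand[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
    cand = np.clip(cand, lower, upper)
    c = cost(cand)
    if c < costs[i]:
        sources[i], costs[i], trials[i] = cand, c, 0        # greedy selection
    else:
        trials[i] += 1

for _ in range(n_cycles):
    for i in range(n_sources):                 # employed bee phase
        try_neighbor(i)
    fitness = 1.0 / (1.0 + costs)
    probs = fitness / fitness.sum()
    for _ in range(n_sources):                 # onlooker bee phase
        try_neighbor(rng.choice(n_sources, p=probs))
    worst = np.argmax(trials)                  # scout bee phase
    if trials[worst] > limit:
        sources[worst] = rng.uniform(lower, upper)
        costs[worst], trials[worst] = cost(sources[worst]), 0

best = np.argmin(costs)
print("best waveform [V, dwell us, rise us]:", np.round(sources[best], 2),
      " cost:", round(float(costs[best]), 4))
```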

  1. The "AQUASCOPE" simplified model for predicting 89, 90Sr, 131l and 134, 137Cs in surface waters after a large-scale radioactive fallout

    NARCIS (Netherlands)

    Smith, J.T.; Belova, N.V.; Bulgakov, A.A.; Comans, R.N.J.; Konoplev, A.V.; Kudelsky, A.V.; Madruga, M.J.; Voitsekhovitch, O.V.; Zibolt, G.

    2005-01-01

    Simplified dynamic models have been developed for predicting the concentrations of radiocesium, radiostrontium, and 131I in surface waters and freshwater fish following a large-scale radioactive fallout. The models are intended to give averaged estimates for radionuclides in water bodies and in fish

  2. Water consumption and allocation strategies along the river oases of Tarim River based on large-scale hydrological modelling

    Science.gov (United States)

    Yu, Yang; Disse, Markus; Yu, Ruide

    2016-04-01

    With a mainstream of 1,321 km located in an arid area of northwest China, the Tarim River is China's longest inland river. The Tarim basin on the northern edge of the Taklamakan desert is an extremely arid region. In this region, agricultural water consumption and allocation management are crucial to address the conflicts among irrigation water users from upstream to downstream. In 2011, the German Ministry of Science and Education (BMBF) established the Sino-German SuMaRiO project for the sustainable management of river oases along the Tarim River. The project aims to contribute to a sustainable land management which explicitly takes into account ecosystem functions and ecosystem services. SuMaRiO will identify realizable management strategies, considering social, economic and ecological criteria. This will have positive effects for nearly 10 million inhabitants of different ethnic groups. The modelling of water consumption and allocation strategies is a core block in the SuMaRiO cluster. A large-scale hydrological model (MIKE HYDRO Basin) was established for the purpose of sustainable agricultural water management in the main stem Tarim River. MIKE HYDRO Basin is an integrated, multipurpose, map-based decision support tool for river basin analysis, planning and management. It provides detailed simulation results concerning water resources and land use in the catchment areas of the river. Calibration data and future predictions based on a large amount of data were acquired. The results of model calibration indicated a close correlation between simulated and observed values. Scenarios with changes in irrigation strategies and land use distributions were investigated. Irrigation scenarios revealed that the available irrigation water has significant and varying effects on the yields of different crops. Irrigation water saving could reach up to 40% in the water-saving irrigation scenario. Land use scenarios illustrated that an increase of farmland area in the

  3. Modal analysis of measurements from a large-scale VIV model test of a riser in linearly sheared flow

    Science.gov (United States)

    Lie, H.; Kaasen, K. E.

    2006-05-01

    Large-scale model testing of a tensioned steel riser in well-defined sheared current was performed at Hanøytangen outside Bergen, Norway in 1997. The length of the model was 90 m and the diameter was 3 cm. The aim of the present work is to look into this information and try to improve the understanding of vortex-induced vibrations (VIV) for cases with very high order of responding modes, and in particular to study if and under which circumstances the riser motions would be single-mode or multi-mode. The measurement system consisted of 29 biaxial gauges for bending moment. The signals are processed to yield curvature and displacement and further to identify modes of vibration. A modal approach is used successfully employing a combination of signal filtering and least-squares fitting of precalculated mode-shapes. As a part of the modal analysis, it is demonstrated that the equally spaced instrumentation limited the maximum mode number to be extracted to be equal to the number of instrumentation locations. This imposed a constraint on the analysis of in-line (IL) vibration, which occurs at higher frequencies and involves higher modes than cross-flow (CF). The analysis has shown that in general the riser response was irregular (i.e. broad-banded) and that the degree of irregularity increases with the flow speed. In some tests distinct spectral peaks could be seen, corresponding to a dominating mode. No occurrences of single-mode (lock-in) were seen. The IL response is more broad-banded than the CF response and contains higher frequencies. The average value of the displacement r.m.s over the length of the riser is computed to indicate the magnitude of VIV motion during one test. In the CF direction the average displacement is typically 1/4 of the diameter, almost independent of the flow speed. For the IL direction the values are in the range 0.05-0.08 of the diameter. The peak frequency taken from the spectra of the CF displacement at riser midpoint show approximately
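
    The least-squares modal step described above can be sketched as a projection of displacements measured at discrete sensor positions onto precalculated sinusoidal mode shapes (a pinned-pinned tensioned-beam assumption). The synthetic snapshot, the number of retained modes and the noise level are illustrative.

```python
# Sketch of the modal least-squares fit: displacements at discrete sensor
# positions along a tensioned riser are projected onto precalculated
# sinusoidal mode shapes to recover modal weights. Data are synthetic.
import numpy as np

L = 90.0                                        # riser length (m), as in the test
z_sensors = np.linspace(0.0, L, 31)[1:-1]       # 29 interior measurement locations
modes = np.arange(1, 25)                        # mode numbers retained in the fit

def mode_shape(n, z):
    return np.sin(n * np.pi * z / L)            # pinned-pinned beam assumption

# Synthetic snapshot: a multi-mode response dominated by modes 9 and 10.
rng = np.random.default_rng(7)
true_w = np.zeros(modes.size)
true_w[8], true_w[9] = 0.25, 0.15               # weights in diameters
y = sum(w * mode_shape(n, z_sensors) for n, w in zip(modes, true_w))
y += rng.normal(0.0, 0.01, z_sensors.size)

# Least-squares projection onto the precalculated mode shapes.
Phi = np.column_stack([mode_shape(n, z_sensors) for n in modes])
w_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
order = np.argsort(-np.abs(w_hat))[:3]
print("dominant modes:", modes[order], " weights:", np.round(w_hat[order], 3))
```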

  4. Assimilation of satellite data to optimize large scale hydrological model parameters: a case study for the SWOT mission

    Directory of Open Access Journals (Sweden)

    V. Pedinotti

    2014-04-01

    Full Text Available During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large scale river routing models which are typically employed in Land Surface Models (LSM) for global scale applications. The method consists in applying a data assimilation approach, the Extended Kalman Filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA-TRIP Continental Hydrologic System. Indeed, parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which might have locally significant errors. The current study focuses on the Niger basin, a trans-boundary river, which is the main source of fresh water for all the riparian countries. In addition, geopolitical issues in this region can restrict the exchange of hydrological data, so that SWOT should help improve this situation by making hydrological data freely available. In a previous study, the model was first evaluated against in-situ and satellite derived data sets within the framework of the international African Monsoon Multi-disciplinary Analysis (AMMA) project. Since the SWOT observations are not available yet and also to assess the proposed assimilation method, the study is carried out under the framework of an Observing System Simulation Experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true

  5. Assimilation of satellite data to optimize large scale hydrological model parameters: a case study for the SWOT mission

    Science.gov (United States)

    Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.

    2014-04-01

    During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large scale river routing models which are typically employed in Land Surface Models (LSM) for global scale applications. The method consists in applying a data assimilation approach, the Extended Kalman Filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA-TRIP Continental Hydrologic System. Indeed, parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which might have locally significant errors. The current study focuses on the Niger basin, a trans-boundary river, which is the main source of fresh water for all the riparian countries. In addition, geopolitical issues in this region can restrict the exchange of hydrological data, so that SWOT should help improve this situation by making hydrological data freely available. In a previous study, the model was first evaluated against in-situ and satellite derived data sets within the framework of the international African Monsoon Multi-disciplinary Analysis (AMMA) project. Since the SWOT observations are not available yet and also to assess the proposed assimilation method, the study is carried out under the framework of an Observing System Simulation Experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning
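
    The assimilation described above corrects Manning coefficients with an Extended Kalman Filter driven by water surface elevation observations. Below is a minimal, generic EKF analysis step, assuming a black-box observation operator linearized by finite differences; the operator, dimensions and error statistics are illustrative, not the ISBA-TRIP implementation.

```python
import numpy as np

def ekf_update(x_b, B, y_obs, R, h, eps=1e-4):
    """One EKF analysis step for a parameter vector x (e.g. Manning coefficients).

    x_b: background parameters (n,)      B: background error covariance (n, n)
    y_obs: observations, e.g. WSE (m,)   R: observation error covariance (m, m)
    h: observation operator mapping parameters to predicted observations
    """
    y_b = h(x_b)
    n, m = x_b.size, y_b.size
    H = np.empty((m, n))                     # Jacobian by one-sided finite differences
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        H[:, i] = (h(x_b + dx) - y_b) / eps
    S = H @ B @ H.T + R                      # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_a = x_b + K @ (y_obs - y_b)            # analysis (corrected parameters)
    B_a = (np.eye(n) - K @ H) @ B            # analysis error covariance
    return x_a, B_a

# Toy usage: two reaches whose simulated WSE rises with the Manning coefficient
h = lambda n: np.array([1.00 + 2.0 * n[0], 1.20 + 1.5 * n[1]])
x_a, _ = ekf_update(np.array([0.030, 0.030]), 1e-4 * np.eye(2),
                    np.array([1.08, 1.26]), 1e-4 * np.eye(2), h)
print("corrected Manning coefficients:", x_a)
```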

  6. Shoreline Response to Climate Change and Human Manipulations in a Model of Large-Scale Coastal Change

    Science.gov (United States)

    Slott, J. M.; Murray, A. B.; Valvo, L.; Ashton, A.

    2005-12-01

    ) show that large-scale coastal features (e.g. capes and cuspate spits) may self-organize as smaller coastal features grow and merge by interacting over large distances through wave shadowing. Our current work extends this model by including the effects of beach nourishment and seawalls. These simulations start with a cape-like shoreline, resembling the Carolina coastline, which we generated using the one-line model driven by the statistical average of 20 years of hindcast wave data measured off Cape Lookout, NC (WIS Station 509). In our experiments, we explored the effects of shoreline stabilization under four different wave climate scenarios: (a) unchanged, (b) increased winter storms, (c) increased tropical storms, and (d) decreased storminess. For each of these four scenarios, we ran three simulations: a control run with no shoreline stabilization, a run with a 10 km beach nourishment project, and a run with a 10 km seawall. We identified the effects of shoreline stabilization by comparing each of the latter two simulations to the control run. In each experiment, shoreline stabilization had a large effect on shoreline position--on the order of a few kilometers--within tens of kilometers of the stabilization area. We also saw sizable effects on adjacent capes nearly 100 kilometers away. Analysis of the simulations indicates that these distant impacts occurred because shoreline stabilization altered the extent to which the stabilized cape shadowed other parts of the coast. We thank the National Science Foundation and the Duke Center on Global Change for supporting our work.

  7. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  8. Expanded Large-Scale Forcing Properties Derived from the Multiscale Data Assimilation System and Its Application to Single-Column Models

    Science.gov (United States)

    Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.

    2013-12-01

    We present an approach to derive large-scale forcing that is used to drive single-column models (SCMs) and cloud-resolving models (CRMs)/large-eddy simulations (LES) for evaluating fast physics parameterizations in climate models. The forcing fields are derived by use of a newly developed multi-scale data assimilation (MS-DA) system. This DA system is developed on top of the NCEP Gridpoint Statistical Interpolation (GSI) System and is implemented in the Weather Research and Forecasting (WRF) model at a cloud resolving resolution of 2 km. This approach has been applied to the generation of large scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracies comparable to the existing continuous forcing product and, overall, a better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multiscale forcing, which is not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing has an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependence on the domain size that represents SCM grid sizes. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid size and cloud regime.

  9. Diversity in the representation of large-scale circulation associated with ENSO-Indian summer monsoon teleconnections in CMIP5 models

    Science.gov (United States)

    Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.

    2017-03-01

    Realistic simulation of large-scale circulation patterns associated with El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of the globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. CMIP5 models have been classified into three groups based on the correlation between the Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: models in group 1 (G1) overestimated El Niño-ISM teleconnections and group 3 (G3) models underestimated them, whereas these teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Anomalous low-level anticyclonic circulation anomalies over the southeastern TIO and western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west, to 60° E and 120° E, respectively. This bias in circulation patterns implies dry wind advection from the extratropics/midlatitudes to the Indian subcontinent. In addition to this, large-scale upper-level convergence together with lower-level divergence over the ISM region corresponding to El Niño is stronger in G1 models than in observations. Thus, the unrealistic shift in low-level circulation centers, corroborated by upper-level circulation changes, is responsible for the overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models, unlike in the observations. Furthermore, large-scale circulation anomalies over the Pacific and ISM region are misrepresented during El Niño years in G3 models. Too strong upper-level convergence away from the Indian subcontinent and too weak WSNP cyclonic circulation are prominent in most G3 models, in which ENSO-ISM teleconnections are
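
    The grouping described above is based on the correlation between the Niño3.4 SST index and ISM rainfall anomalies in each model. The sketch below illustrates that classification with synthetic time series and an arbitrary ±0.15 band around an assumed observed correlation; the paper's actual thresholds and data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
years = 50
observed_corr = -0.55          # assumed observed Nino3.4-ISM rainfall correlation

# Hypothetical JJAS anomaly series for a handful of models
model_corr = {}
for name in ["ModelA", "ModelB", "ModelC", "ModelD", "ModelE"]:
    nino34 = rng.standard_normal(years)
    coupling = rng.uniform(-0.9, -0.1)              # model-dependent ENSO-ISM coupling
    rainfall = coupling * nino34 + 0.5 * rng.standard_normal(years)
    model_corr[name] = np.corrcoef(nino34, rainfall)[0, 1]

groups = {"G1 (overestimated)": [], "G2 (realistic)": [], "G3 (underestimated)": []}
for name, r in model_corr.items():
    if r < observed_corr - 0.15:                    # too strongly negative
        groups["G1 (overestimated)"].append(name)
    elif r > observed_corr + 0.15:                  # too weak
        groups["G3 (underestimated)"].append(name)
    else:
        groups["G2 (realistic)"].append(name)

for label, members in groups.items():
    print(label, members)
```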

  10. Modeling and Coordinated Control Strategy of Large Scale Grid-Connected Wind/Photovoltaic/Energy Storage Hybrid Energy Conversion System

    OpenAIRE

    Lingguo Kong; Guowei Cai; Sidney Xue; Shaohua Li

    2015-01-01

    An AC-linked large scale wind/photovoltaic (PV)/energy storage (ES) hybrid energy conversion system for grid-connected application was proposed in this paper. Wind energy conversion system (WECS) and PV generation system are the primary power sources of the hybrid system. The ES system, including battery and fuel cell (FC), is used as a backup and a power regulation unit to ensure continuous power supply and to take care of the intermittent nature of wind and photovoltaic resources. Static sy...

  11. Alternative projections of the impacts of private investment on southern forests: a comparison of two large-scale forest sector models of the United States.

    Science.gov (United States)

    Ralph Alig; Darius Adams; John Mills; Richard Haynes; Peter Ince; Robert. Moulton

    2001-01-01

    The TAMM/NAPAP/ATLAS/AREACHANGE (TNAA) system and the Forest and Agriculture Sector Optimization Model (FASOM) are two large-scale forestry sector modeling systems that have been employed to analyze the U.S. forest resource situation. The TNAA system of static, spatial equilibrium models has been applied to make 50-year projections of the U.S. forest sector for more...

  12. Modeling and Coordinated Control Strategy of Large Scale Grid-Connected Wind/Photovoltaic/Energy Storage Hybrid Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Lingguo Kong

    2015-01-01

    Full Text Available An AC-linked large scale wind/photovoltaic (PV)/energy storage (ES) hybrid energy conversion system for grid-connected application was proposed in this paper. Wind energy conversion system (WECS) and PV generation system are the primary power sources of the hybrid system. The ES system, including battery and fuel cell (FC), is used as a backup and a power regulation unit to ensure continuous power supply and to take care of the intermittent nature of wind and photovoltaic resources. Static synchronous compensator (STATCOM) is employed to support the AC-linked bus voltage and improve low voltage ride through (LVRT) capability of the proposed system. An overall power coordinated control strategy is designed to manage real-power and reactive-power flows among the different energy sources, the storage unit, and the STATCOM system in the hybrid system. A simulation case study carried out on the Western System Coordinating Council (WSCC) 3-machine 9-bus test system for the large scale hybrid energy conversion system has been developed using the DIgSILENT/Power Factory software platform. The hybrid system performance under different scenarios has been verified by simulation studies using practical load demand profiles and real weather data.
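
    The coordinated control described above balances real power among wind, PV, the storage unit and the grid. A much-simplified single-step sketch of that idea: the storage covers the mismatch between renewable output and a dispatch target within assumed power and energy limits (all ratings and efficiencies are illustrative, not the paper's controller).

```python
def dispatch_step(p_wind, p_pv, p_target, soc, dt_h,
                  p_es_max=2.0, e_es_max=8.0, eta=0.95):
    """One step of a simplified real-power coordination scheme.

    Powers in MW, soc and e_es_max in MWh; positive p_es means discharging.
    Returns (power delivered to the grid, storage power, new state of charge).
    """
    p_es = p_target - (p_wind + p_pv)              # mismatch assigned to storage
    p_es = max(-p_es_max, min(p_es_max, p_es))     # converter power limit
    if p_es >= 0.0:                                # discharging
        p_es = min(p_es, soc / dt_h * eta)
        soc -= p_es * dt_h / eta
    else:                                          # charging
        p_es = max(p_es, -(e_es_max - soc) / dt_h / eta)
        soc += -p_es * dt_h * eta
    return p_wind + p_pv + p_es, p_es, soc

# Wind dips below a 5 MW dispatch target; the storage unit fills the gap
print(dispatch_step(p_wind=3.0, p_pv=1.2, p_target=5.0, soc=4.0, dt_h=0.25))
```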

  13. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  14. Large scale tracking algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
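
    For contrast with the multi-hypothesis trackers mentioned above, the sketch below shows one of the simplest alternatives, a greedy gated nearest-neighbour association step; the gate, positions and data are illustrative, and this is not the report's algorithm.

```python
import numpy as np

def nn_associate(tracks, detections, gate=5.0):
    """Greedy nearest-neighbour data association with a distance gate.

    tracks, detections: arrays of (x, y) positions.
    Returns (track index, detection index) pairs; unmatched detections
    would normally seed new tracks.
    """
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        d = np.linalg.norm(detections - t, axis=1)
        d[list(used)] = np.inf                 # each detection used at most once
        di = int(np.argmin(d))
        if d[di] <= gate:
            pairs.append((ti, di))
            used.add(di)
    return pairs

tracks = np.array([[0.0, 0.0], [10.0, 10.0]])
detections = np.array([[0.5, -0.3], [9.2, 10.4], [30.0, 30.0]])
print(nn_associate(tracks, detections))        # [(0, 0), (1, 1)]; detection 2 unmatched
```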

  15. Research project on CO2 geological storage and groundwater resources: Large-scale hydrological evaluation and modeling of impact on groundwater systems

    Energy Technology Data Exchange (ETDEWEB)

    Birkholzer, Jens; Zhou, Quanlin; Rutqvist, Jonny; Jordan, Preston; Zhang, K.; Tsang, Chin-Fu

    2007-10-24

    If carbon dioxide capture and storage (CCS) technologies are implemented on a large scale, the amounts of CO2 injected and sequestered underground could be extremely large. The stored CO2 then replaces large volumes of native brine, which can cause considerable pressure perturbation and brine migration in the deep saline formations. If hydraulically communicating, either directly via updipping formations or through interlayer pathways such as faults or imperfect seals, these perturbations may impact shallow groundwater or even surface water resources used for domestic or commercial water supply. Possible environmental concerns include changes in pressure and water table, changes in discharge and recharge zones, as well as changes in water quality. In compartmentalized formations, issues related to large-scale pressure buildup and brine displacement may also cause storage capacity problems, because significant pressure buildup can be produced. To address these issues, a three-year research project was initiated in October 2006, the first part of which is summarized in this annual report.

  16. An aggregate model of grid-connected, large-scale, offshore wind farm for power stability investigations-importance of windmill mechanical system

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Knudsen, H.

    2002-01-01

    An aggregate model of a large-scale offshore wind farm, comprising 72 wind turbines of 2 MW rating each, is set up. Representation of the shaft systems of the wind turbines shall be taken into account when a simplified aggregate model of the wind farm is used in voltage stability investigations. Because the shaft system gives a soft coupling between the rotating wind turbine and the induction generator, the large-scale wind farm cannot always be reduced to a one-machine equivalent, and use of multi-machine equivalents will be necessary for reaching accuracy of the investigation results for the wind farm and the entire network. All these phenomena are different compared to previous experiences with modelling of conventional power plants with synchronous generators and stiff shaft systems.

  17. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analysis/reanalysis all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases in cloud properties which could not be fully explained by the uncertainty from the large-scale forcing

  18. Testing gravity on Large Scales

    Directory of Open Access Journals (Sweden)

    Raccanelli Alvise

    2013-09-01

    Full Text Available We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.

  19. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence

    Science.gov (United States)

    Dogan, Eda; Hearst, R. Jason; Ganapathisubramani, Bharathram

    2017-03-01

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to `simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales.

  20. Large-Scale Mass Spectrometry Imaging Investigation of Consequences of Cortical Spreading Depression in a Transgenic Mouse Model of Migraine

    Science.gov (United States)

    Carreira, Ricardo J.; Shyti, Reinald; Balluff, Benjamin; Abdelmoula, Walid M.; van Heiningen, Sandra H.; van Zeijl, Rene J.; Dijkstra, Jouke; Ferrari, Michel D.; Tolner, Else A.; McDonnell, Liam A.; van den Maagdenberg, Arn M. J. M.

    2015-06-01

    Cortical spreading depression (CSD) is the electrophysiological correlate of migraine aura. Transgenic mice carrying the R192Q missense mutation in the Cacna1a gene, which in patients causes familial hemiplegic migraine type 1 (FHM1), exhibit increased propensity to CSD. Herein, mass spectrometry imaging (MSI) was applied for the first time to an animal cohort of transgenic and wild type mice to study the biomolecular changes following CSD in the brain. Ninety-six coronal brain sections from 32 mice were analyzed by MALDI-MSI. All MSI datasets were registered to the Allen Brain Atlas reference atlas of the mouse brain so that the molecular signatures of distinct brain regions could be compared. A number of metabolites and peptides showed substantial changes in the brain associated with CSD. Among those, different mass spectral features showed significant (t-test, P < 0.05) changes relevant to migraine pathophysiology. The results also demonstrate the utility of aligning MSI datasets to a common reference atlas for large-scale MSI investigations.

  1. Data Analysis, Pre-Ignition Assessment, and Post-Ignition Modeling of the Large-Scale Annular Cookoff Tests

    Energy Technology Data Exchange (ETDEWEB)

    G. Terrones; F.J. Souto; R.F. Shea; M.W.Burkett; E.S. Idar

    2005-09-30

    In order to understand the implications that cookoff of plastic-bonded explosive 9501 could have on safety assessments, we analyzed the available data from the large-scale annular cookoff (LSAC) assembly series of experiments. In addition, we examined recent data regarding hypotheses about pre-ignition that may be relevant to post-ignition behavior. Based on the post-ignition data from Shot 6, which had the most complete set of data, we developed an approximate equation of state (EOS) for the gaseous products of deflagration. Implementation of this EOS into the multimaterial hydrodynamics computer program PAGOSA yielded good agreement with the inner-liner collapse sequence for Shot 6 and with other data, such as velocity interferometer system for any reflector and resistance wires. A metric to establish the degree of symmetry, based on the concept of time of arrival at pin locations, was used to compare numerical simulations with experimental data. Several simulations were performed to elucidate the mode of ignition in the LSAC and to determine the possible compression levels that the metal assembly could have been subjected to during post-ignition.

  2. Assessing the weighted multi-objective adaptive surrogate model optimization to derive large-scale reservoir operating rules with sensitivity analysis

    Science.gov (United States)

    Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao

    2017-01-01

    The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). The intercomparison of the non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO is conducted for the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and the median of ecological index, optimized by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance, and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation (530.032 billion kW h) and ecological index (1.675)) with 1000 simulations and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is shown to be more efficient and could provide a better Pareto frontier.
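
    WNSGAII differs from NSGAII mainly through a weighted crowding distance. The sketch below shows one way such a weighting could enter the standard crowding-distance computation; the specific weighting scheme used in the paper is not given here, so this form is an assumption.

```python
import numpy as np

def weighted_crowding_distance(F, w):
    """Crowding distance for an objective matrix F (n solutions x m objectives),
    with per-objective weights w; larger values mean less crowded solutions."""
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        fmin, fmax = F[order[0], j], F[order[-1], j]
        dist[order[0]] = dist[order[-1]] = np.inf        # keep boundary solutions
        if fmax == fmin:
            continue
        gaps = (F[order[2:], j] - F[order[:-2], j]) / (fmax - fmin)
        dist[order[1:-1]] += w[j] * gaps                 # weighted contribution
    return dist

# Toy front with two objectives (e.g. negative power generation, ecological index),
# weighting the first objective twice as heavily as the second
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 2.0], [5.0, 1.0]])
print(weighted_crowding_distance(F, w=np.array([2.0, 1.0])))
```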

  3. Network robustness under large-scale attacks

    CERN Document Server

    Zhou, Qing; Liu, Ruifang; Cui, Shuguang

    2014-01-01

    Network Robustness under Large-Scale Attacks provides the analysis of network robustness under attacks, with a focus on large-scale correlated physical attacks. The book begins with a thorough overview of the latest research and techniques to analyze the network responses to different types of attacks over various network topologies and connection models. It then introduces a new large-scale physical attack model coined as area attack, under which a new network robustness measure is introduced and applied to study the network responses. With this book, readers will learn the necessary tools to evaluate how a complex network responds to random and possibly correlated attacks.

  4. The Atmospheric Energy Budget and Large-Scale Precipitation Efficiency of Convective Systems during TOGA COARE, GATE, SCSMEX, and ARM: Cloud-Resolving Model Simulations.

    Science.gov (United States)

    Tao, W.-K.; Johnson, D.; Shie, C.-L.; Simpson, J.

    2004-10-01

    A two-dimensional version of the Goddard Cumulus Ensemble (GCE) model is used to simulate convective systems that developed in various geographic locations (east Atlantic, west Pacific, South China Sea, and Great Plains in the United States). Observed large-scale advective tendencies for potential temperature, water vapor mixing ratio, and horizontal momentum derived from field campaigns are used as the main forcing. The atmospheric temperature and water vapor budgets from the model results show that the two largest terms are net condensation (heating/drying) and imposed large-scale forcing (cooling/moistening) for tropical oceanic cases, though not for midlatitude continental cases. These two terms are opposite in sign, however, and are not the dominant terms in the moist static energy budget. The balance between net radiation, surface latent heat flux, and net condensational heating varies among these tropical cases, however. For cloud systems that developed over the South China Sea and eastern Atlantic, net radiation (cooling) is not negligible in the temperature budget; it is as large as 20% of the net condensation. However, shortwave heating and longwave cooling are in balance with each other for cloud systems over the west Pacific region, such that the net radiation is very small. This is due to the thick anvil clouds simulated in the cloud systems over the Pacific region. The large-scale advection of moist static energy is negative, as a result of a larger absolute value of large-scale advection of sensible heat (cooling) compared to large-scale latent heat (moistening) advection in the Pacific and Atlantic cases. For three cloud systems that developed over a midlatitude continent, the net radiation and sensible and latent heat fluxes play a much more important role. This means that the accurate measurement of surface fluxes and radiation is crucial for simulating these midlatitude cases. The results showed that large-scale mean (multiday) precipitation efficiency

  5. Large Scale Correlation Clustering Optimization

    CERN Document Server

    Bagon, Shai

    2011-01-01

    Clustering is a fundamental task in unsupervised learning. The focus of this paper is the Correlation Clustering functional which combines positive and negative affinities between the data points. The contribution of this paper is twofold: (i) Provide a theoretic analysis of the functional. (ii) New optimization algorithms which can cope with large scale problems (>100K variables) that are infeasible using existing methods. Our theoretic analysis provides a probabilistic generative interpretation for the functional, and justifies its intrinsic "model-selection" capability. Furthermore, we draw an analogy between optimizing this functional and the well known Potts energy minimization. This analogy allows us to suggest several new optimization algorithms, which exploit the intrinsic "model-selection" capability of the functional to automatically recover the underlying number of clusters. We compare our algorithms to existing methods on both synthetic and real data. In addition, we suggest two new applications t...
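
    A minimal evaluation of the Correlation Clustering functional discussed above, under the common convention that a labelling pays for positive affinities that are cut and for negative affinities kept inside a cluster (sign conventions differ between papers, so this is one possible form).

```python
import numpy as np

def correlation_clustering_energy(W, labels):
    """Energy of a clustering for a symmetric affinity matrix W.

    W[i, j] > 0: i and j attract (penalised if separated)
    W[i, j] < 0: i and j repel   (penalised if placed together)
    Lower energy is better under this convention.
    """
    same = labels[:, None] == labels[None, :]
    pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    # positive affinities that are cut, plus |negative| affinities kept inside
    return pos[~same].sum() / 2.0 + (-neg[same]).sum() / 2.0

W = np.array([[ 0.0,  0.9, -0.8],
              [ 0.9,  0.0, -0.7],
              [-0.8, -0.7,  0.0]])
print(correlation_clustering_energy(W, np.array([0, 0, 1])))   # good split -> 0.0
print(correlation_clustering_energy(W, np.array([0, 0, 0])))   # one cluster -> 1.5
```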

  6. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolated mode from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A red line in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  7. Development and Validation of a One-Dimensional Co-Electrolysis Model for Use in Large-Scale Process Modeling Analysis

    Energy Technology Data Exchange (ETDEWEB)

    J. E. O' Brien; M. G. McKellar; G. L. Hawkes; C. M. Stoots

    2007-07-01

    A one-dimensional chemical equilibrium model has been developed for analysis of simultaneous high-temperature electrolysis of steam and carbon dioxide (coelectrolysis) for the direct production of syngas, a mixture of hydrogen and carbon monoxide. The model assumes local chemical equilibrium among the four process-gas species via the shift reaction. For adiabatic or specified-heat-transfer conditions, the electrolyzer model allows for the determination of coelectrolysis outlet temperature, composition (anode and cathode sides), mean Nernst potential, operating voltage and electrolyzer power based on specified inlet gas flow rates, heat loss or gain, current density, and cell area-specific resistance. Alternately, for isothermal operation, it allows for determination of outlet composition, mean Nernst potential, operating voltage, electrolyzer power, and the isothermal heat requirement for specified inlet gas flow rates, operating temperature, current density and area-specific resistance. This model has been developed for incorporation into a system-analysis code from which the overall performance of large-scale coelectrolysis plants can be evaluated. The one-dimensional co-electrolysis model has been validated by comparison with results obtained from a 3-D computational fluid dynamics model and by comparison with experimental results.
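
    A simplified single-point sketch of the voltage calculation described above (Nernst potential plus an ohmic term from the area-specific resistance), restricted to the steam-electrolysis reaction and ignoring the shift-reaction equilibrium and 1-D discretization; the E0(T) fit and operating numbers are illustrative approximations, not the paper's correlations.

```python
import math

F_CONST = 96485.0     # Faraday constant [C/mol]
R_GAS = 8.314         # gas constant [J/(mol K)]

def operating_voltage(T, p_h2, p_h2o, p_o2, i, asr):
    """Cell voltage [V] = Nernst potential + ohmic overpotential (i * ASR).

    T in K, partial pressures in atm, i in A/cm2, asr in ohm cm2.
    E0(T) uses a simple linear approximation for steam electrolysis.
    """
    e0 = 1.253 - 2.4516e-4 * T
    nernst = e0 + (R_GAS * T / (2.0 * F_CONST)) * math.log(p_h2 * math.sqrt(p_o2) / p_h2o)
    return nernst + i * asr

# Example: 1100 K, steam-rich cathode inlet, 0.3 A/cm2, ASR = 1.25 ohm cm2
v_cell = operating_voltage(T=1100.0, p_h2=0.1, p_h2o=0.9, p_o2=0.21, i=0.3, asr=1.25)
print(round(v_cell, 3), "V per cell")
```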

  8. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    Science.gov (United States)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large scale monitoring of water resources. Beyond these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts therefore increase significantly. As a matter of fact, a novel thematic science question to be investigated is whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational efforts, this model enables early warnings for large areas. Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model

  9. Computation of large scale currents in the Arabian Sea during winter using a semi-diagnostic model

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    A 3-dimensional, semi-diagnostic model with 331 levels in the vertical has been used for the computation of the climatic circulation in the western tropical Indian Ocean. The model is driven with seasonal mean data on wind stress, temperature...

  10. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  11. Large-Scale Water Resources Management within the Framework of GLOWA-Danube - Part A: The Groundwater Model

    Science.gov (United States)

    Barthel, R.; Rojanschi, V.; Wolf, J.; Braun, J.

    2003-04-01

    The interdisciplinary research co-operation Glowa-Danube aims at the development of innovative techniques, scenarios and strategies to investigate the impacts of Global Change on the hydrological cycle within the catchment area of the Upper Danube Basin (Gauge Passau). Both the influence of natural changes in the ecosystem, such as climate change, and changes in human behavior, such as changes in land use or water consumption, are considered. A globally applicable decision support tool "DANUBIA" that comprises 15 individual disciplinary models will be developed. The models are connected with each other via customized interfaces that facilitate network-based parallel calculations. The strictly object-oriented DANUBIA architecture was developed using the graphical notation tool UML (Unified Modeling Language) and has been implemented in Java code. The Institute of Hydraulic Engineering of the Universitaet Stuttgart contributes two models to DANUBIA: a groundwater flow and transport model and a water supply model. The latter is dealt with in a second contribution to this conference. This paper focuses on the groundwater model. The catchment basin of the Upper Danube covers an area of approximately 77,000 km2. The elevation difference from the highest peaks of the Alps to the lowest flatlands in the Danube valley is more than 3,000 m. In addition to the Alps, several lower mountain ranges such as the Black Forest, the Swabian and Franconian Alb and the Bavarian Forest are located respectively in the Northeast, North and Northwest of the basin. The climatic conditions, geomorphology, geology and land use show a wide range of different characteristics. The size and heterogeneity of the area make it extremely difficult to represent the natural conditions in a numerical model. Both data availability and accessibility add to the difficulties that one encounters in the approach to simulate groundwater flow and contaminant transport in this area. The groundwater flow model of

  12. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    Science.gov (United States)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has a wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets, in terms of biologically-inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is high-speed and helpful for focusing on suspected ship areas, avoiding the separation step of land and sea. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in images. Then these features are employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After getting the suspicious areas, there are still some false alarms, such as microwaves and small ribbon clouds, so simple shape and texture analysis is adopted to distinguish between ships and non-ships in suspicious areas. Experimental results show that the proposed method is insensitive to waves, clouds, illumination and ship size.

  13. A NOVEL SHIP DETECTION METHOD FOR LARGE-SCALE OPTICAL SATELLITE IMAGES BASED ON VISUAL LBP FEATURE AND VISUAL ATTENTION MODEL

    Directory of Open Access Journals (Sweden)

    S. Haigang

    2016-06-01

    Full Text Available Reliable ship detection in optical satellite images has a wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets, in terms of biologically-inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is high-speed and helpful for focusing on suspected ship areas, avoiding the separation step of land and sea. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in images. Then these features are employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After getting the suspicious areas, there are still some false alarms, such as microwaves and small ribbon clouds, so simple shape and texture analysis is adopted to distinguish between ships and non-ships in suspicious areas. Experimental results show that the proposed method is insensitive to waves, clouds, illumination and ship size.
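
    A minimal sketch of the chip-classification stage described above (LBP texture features fed to an SVM), using scikit-image and scikit-learn on synthetic grey-level chips; the CVLBP saliency stage, real imagery and the false-alarm post-processing are omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(chip, P=8, R=1.0):
    """Uniform LBP histogram of a grey-level image chip (texture feature vector)."""
    codes = local_binary_pattern(chip, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)

def make_chip(ship):
    """Synthetic 32 x 32 chip: sea clutter, plus a bright elongated blob if 'ship'."""
    chip = rng.normal(0.3, 0.05, (32, 32))
    if ship:
        chip[14:18, 8:24] += 0.6
    return chip

X = np.array([lbp_histogram(make_chip(s)) for s in [True, False] * 50])
y = np.array([1, 0] * 50)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("new chip classified as ship:", bool(clf.predict([lbp_histogram(make_chip(True))])[0]))
```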

  14. Is large-scale inverse modelling of unsaturated flow with areal average evaporation and surface soil moisture as estimated from remote sensing feasible?

    Science.gov (United States)

    Feddes, R. A.; Menenti, M.; Kabat, P.; Bastiaanssen, W. G. M.

    1993-03-01

    The potential of combining large-scale inverse modelling of unsaturated flow with remote sensing determination of areal evaporation and areal surface moisture is assessed. Regional latent and sensible heat fluxes are estimated indirectly using remotely sensed measurements by parameterizing the surface energy balance equation. An example of evapotranspiration mapping from northern and central Egypt is presented. The inverse problem is formulated with respect to the type of information available. Two examples of estimation of soil hydraulic properties by the dynamic one-dimensional soil-water-vegetation model SWATRE are given: one refers to a classical lysimeter scale and the other to a catchment scale. It is concluded that small-scale soil physics may describe large-scale hydrological behaviour adequately, and that the effective hydraulic parameters concerned may be derived by an inverse modelling approach. Remotely sensed data on surface reflectance, surface temperature and soil moisture content derived from multifrequency microwave techniques provide a useful data set on the mesoscale. When the inverse modelling approach presented here is combined with a meso-scale data set on evaporation and surface soil moisture, considerable potential arises for determining effective meso-scale hydraulic properties.
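
    The regional flux estimation mentioned above parameterizes the surface energy balance; a minimal residual-form sketch (LE = Rn - G - H) with a bulk-transfer sensible-heat term is shown below. All inputs and coefficients are illustrative, and this is not the specific scheme used in the paper.

```python
def latent_heat_flux(rn, g, t_surf, t_air, r_ah, rho=1.2, cp=1004.0):
    """Latent heat flux [W/m2] as the residual of the surface energy balance.

    rn: net radiation [W/m2]       g: soil heat flux [W/m2]
    t_surf, t_air: surface and air temperature [K]
    r_ah: aerodynamic resistance to heat transport [s/m]
    """
    h = rho * cp * (t_surf - t_air) / r_ah     # bulk-transfer sensible heat flux
    le = rn - g - h                            # energy balance residual
    return le, h

le, h = latent_heat_flux(rn=550.0, g=60.0, t_surf=303.0, t_air=298.0, r_ah=50.0)
print(f"H = {h:.0f} W/m2, LE = {le:.0f} W/m2")
```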

  15. Modelling of large-scale structures arising under developed turbulent convection in a horizontal fluid layer (with application to the problem of tropical cyclone origination

    Directory of Open Access Journals (Sweden)

    G. V. Levina

    2000-01-01

    Full Text Available The work is concerned with the results of theoretical and laboratory modelling of the processes of large-scale structure generation under turbulent convection in a rotating plane horizontal layer of an incompressible fluid with unstable stratification. The theoretical model describes three alternative ways of creating unstable stratification: heating of the layer from below, volumetric heating of a fluid with internal heat sources, and a combination of both factors. The analysis of the model equations shows that, under conditions of high intensity of the small-scale convection and a low level of heat loss through the horizontal layer boundaries, a long-wave instability may arise. The condition for the existence of the instability and a criterion identifying the threshold of its initiation have been determined. The principle of action of the discovered instability mechanism has been described. Theoretical predictions have been verified by a series of experiments on a laboratory model. The horizontal dimensions of the experimentally obtained long-lived vortices are 4-6 times larger than the thickness of the fluid layer. This work presents a description of the laboratory setup and experimental procedure. From the geophysical viewpoint, the examined mechanism of the long-wave instability is supposed to be adequate to allow a description of the initial step in the evolution of such large-scale vortices as tropical cyclones - a transition from the small-scale cumulus clouds to the state of the atmosphere involving cloud clusters (the stage of initial tropical perturbation).

  16. Reconstructing annual groundwater storage changes in a large-scale irrigation region using GRACE data and Budyko model

    Science.gov (United States)

    Tang, Yin; Hooshyar, Milad; Zhu, Tingju; Ringler, Claudia; Sun, Alexander Y.; Long, Di; Wang, Dingbao

    2017-08-01

    A two-parameter annual water balance model was developed for reconstructing annual terrestrial water storage change (ΔTWS) and groundwater storage change (ΔGWS). The model was integrated with the Gravity Recovery and Climate Experiment (GRACE) data and applied to the Punjab province in Pakistan for reconstructing ΔTWS and ΔGWS during 1980-2015 based on multiple input data sources. Model parameters were estimated through minimizing the root-mean-square error between the Budyko-modeled and GRACE-derived ΔTWS during 2003-2015. The correlation of ensemble means between Budyko-modeled and GRACE-derived ΔTWS is 0.68 with p-value < 0.01, demonstrating the potential of reconstructing ΔTWS and ΔGWS in large-scale irrigation regions with parsimonious models.
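
    A sketch of the calibration idea described above: a Budyko-type annual water balance whose parameters are fitted by minimizing the RMSE against GRACE-derived storage change. The Fu-type curve, the storage term and all numbers below are assumptions for illustration; the paper's two-parameter model structure may differ.

```python
import numpy as np
from scipy.optimize import minimize

def fu_evaporation(p, pet, w):
    """Fu's Budyko-type curve: annual evaporation from precipitation P and potential ET."""
    return p * (1.0 + pet / p - (1.0 + (pet / p) ** w) ** (1.0 / w))

def modeled_dtws(params, p, pet, q):
    """Annual storage change as a scaled water-balance residual, dS = alpha * (P - E - Q);
    alpha is a simplistic stand-in for the second model parameter."""
    w, alpha = params
    return alpha * (p - fu_evaporation(p, pet, w) - q)

def rmse(params, p, pet, q, dtws_obs):
    if params[0] <= 1.0:                       # keep the Budyko parameter physical
        return 1e6
    return np.sqrt(np.mean((modeled_dtws(params, p, pet, q) - dtws_obs) ** 2))

# Illustrative annual series (mm/yr); in the paper these come from climate,
# runoff and GRACE data for the calibration period 2003-2015.
p = np.array([480.0, 530.0, 450.0, 600.0, 510.0])
pet = np.array([1400.0, 1350.0, 1500.0, 1300.0, 1450.0])
q = np.array([60.0, 70.0, 55.0, 90.0, 65.0])
dtws_grace = np.array([-25.0, 5.0, -40.0, 30.0, -10.0])

res = minimize(rmse, x0=[2.0, 1.0], args=(p, pet, q, dtws_grace), method="Nelder-Mead")
print("fitted (w, alpha):", res.x, "  RMSE:", round(res.fun, 2))
```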

  17. Comparing large-scale hydrological model predictions with observed streamflow in the Pacific Northwest: effects of climate and groundwater

    Science.gov (United States)

    Mohammad Safeeq; Guillaume S. Mauger; Gordon E. Grant; Ivan Arismendi; Alan F. Hamlet; Se-Yeun Lee

    2014-01-01

    Assessing uncertainties in hydrologic models can improve accuracy in predicting future streamflow. Here, simulated streamflows using the Variable Infiltration Capacity (VIC) model at coarse (1/16°) and fine (1/120°) spatial resolutions were evaluated against observed streamflows from 217 watersheds. In...

  18. A mechanistic modeling system for estimating large-scale emissions and transport of pollen and co-allergens

    Science.gov (United States)

    Efstathiou, Christos; Isukapalli, Sastry; Georgopoulos, Panos

    2011-04-01

    Allergic airway diseases represent a complex health problem which can be exacerbated by the synergistic action of pollen particles and air pollutants such as ozone. Understanding human exposures to aeroallergens requires accurate estimates of the spatial distribution of airborne pollen levels as well as of various air pollutants at different times. However, currently there are no established methods for estimating allergenic pollen emissions and concentrations over large geographic areas such as the United States. A mechanistic modeling system for describing pollen emissions and transport over extensive domains has been developed by adapting components of existing regional scale air quality models and vegetation databases. First, components of the Biogenic Emissions Inventory System (BEIS) were adapted to predict pollen emission patterns. Subsequently, the transport module of the Community Multiscale Air Quality (CMAQ) modeling system was modified to incorporate description of pollen transport. The combined model, CMAQ-pollen, allows for simultaneous prediction of multiple air pollutants and pollen levels in a single model simulation, and uses consistent assumptions related to the transport of multiple chemicals and pollen species. Application case studies for evaluating the combined modeling system included the simulation of birch and ragweed pollen levels for the year 2002, during their corresponding peak pollination periods (April for birch and September for ragweed). The model simulations were driven by previously evaluated meteorological model outputs and emissions inventories for the eastern United States for the simulation period. A semi-quantitative evaluation of CMAQ-pollen was performed using tree and ragweed pollen counts in Newark, NJ for the same time periods. The peak birch pollen concentrations were predicted to occur within two days of the peak measurements, while the temporal patterns closely followed the measured profiles of overall tree pollen

  19. Multi-model climate impact assessment and intercomparison for three large-scale river basins on three continents

    Science.gov (United States)

    Vetter, T.; Huang, S.; Aich, V.; Yang, T.; Wang, X.; Krysanova, V.; Hattermann, F.

    2014-07-01

    Climate change impacts on hydrological processes should be simulated for river basins using validated models and multiple climate scenarios in order to provide reliable results for stakeholders. In the last 10-15 years climate impact assessment was performed for many river basins worldwide using different climate scenarios and models. Nevertheless, the results are hardly comparable and do not allow a full picture of impacts and uncertainties to be created. Therefore, a systematic intercomparison of impacts is suggested, which should be done for representative regions using state-of-the-art models. Our study is intended as a step in this direction. The impact assessment presented here was performed for three river basins on three continents: Rhine in Europe, Upper Niger in Africa and Upper Yellow in Asia. For that, climate scenarios from five GCMs and three hydrological models (HBV, SWIM and VIC) were used. Four "Representative Concentration Pathways" (RCPs) covering a range of emissions and land-use change projections were included. The objectives were to analyze and compare climate impacts on future trends, considering three runoff quantiles (Q90, Q50 and Q10) and seasonal water discharge, and to evaluate uncertainties from different sources. The results allow drawing some robust conclusions, but uncertainties are large and shared differently between sources in the studied basins. Robust results in terms of trend direction and slope and changes in seasonal dynamics could be found for the Rhine basin, regardless of which hydrological model or forcing GCM is used. For the Niger River, scenarios from climate models are the largest uncertainty source, producing large discrepancies in precipitation, and therefore clear projections are difficult to make. For the Upper Yellow basin, both the hydrological models and climate models contribute to uncertainty in the impacts, though an increase in high flows in the future is a robust outcome assured by all three hydrological models.

  20. Multi-model climate impact assessment and intercomparison for three large-scale river basins on three continents

    Directory of Open Access Journals (Sweden)

    T. Vetter

    2014-07-01

    Full Text Available Climate change impacts on hydrological processes should be simulated for river basins using validated models and multiple climate scenarios in order to provide reliable results for stakeholders. In the last 10–15 years climate impact assessment was performed for many river basins worldwide using different climate scenarios and models. Nevertheless, the results are hardly comparable and do not allow a full picture of impacts and uncertainties to be created. Therefore, a systematic intercomparison of impacts is suggested, which should be done for representative regions using state-of-the-art models. Our study is intended as a step in this direction. The impact assessment presented here was performed for three river basins on three continents: Rhine in Europe, Upper Niger in Africa and Upper Yellow in Asia. For that, climate scenarios from five GCMs and three hydrological models (HBV, SWIM and VIC) were used. Four "Representative Concentration Pathways" (RCPs) covering a range of emissions and land-use change projections were included. The objectives were to analyze and compare climate impacts on future trends, considering three runoff quantiles (Q90, Q50 and Q10) and seasonal water discharge, and to evaluate uncertainties from different sources. The results allow drawing some robust conclusions, but uncertainties are large and shared differently between sources in the studied basins. Robust results in terms of trend direction and slope and changes in seasonal dynamics could be found for the Rhine basin, regardless of which hydrological model or forcing GCM is used. For the Niger River, scenarios from climate models are the largest uncertainty source, producing large discrepancies in precipitation, and therefore clear projections are difficult to make. For the Upper Yellow basin, both the hydrological models and climate models contribute to uncertainty in the impacts, though an increase in high flows in the future is a robust outcome assured by all three
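
    The trend analysis above is built on three runoff quantiles. A sketch of how Q90, Q50 and Q10 can be computed from daily discharge and summarized as a linear trend is shown below; it uses the exceedance-probability convention (Q10 a high flow, Q90 a low flow), and all discharge data are synthetic.

```python
import numpy as np

def annual_flow_quantiles(daily_q, exceedance=(0.10, 0.50, 0.90)):
    """Flows exceeded 10%, 50% and 90% of the time within one year.

    Under this exceedance convention Q10 is a high-flow and Q90 a low-flow
    indicator; some studies use the opposite percentile convention.
    """
    return {f"Q{round(p * 100)}": np.quantile(daily_q, 1.0 - p) for p in exceedance}

def linear_trend(years, values):
    """Least-squares slope (indicator units per year)."""
    slope, _intercept = np.polyfit(years, values, 1)
    return slope

# Synthetic 30-year daily discharge with a weak upward drift in high flows
rng = np.random.default_rng(2)
years = np.arange(1981, 2011)
q10_series = [annual_flow_quantiles(rng.gamma(2.0, 50.0 + 0.5 * k, size=365))["Q10"]
              for k, _ in enumerate(years)]

print("trend in Q10:", round(linear_trend(years, np.array(q10_series)), 2), "per year")
```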