WorldWideScience

Sample records for model large-scale printhead

  1. Bubbles in inkjet printheads: analytical and numerical models

    NARCIS (Netherlands)

    Jeurissen, R.J.M.

    2009-01-01

    The phenomenon of nozzle failure of an inkjet printhead due to entrainment of air bubbles was studied using analytical and numerical models. The studied inkjet printheads consist of many channels in which an acoustic field is generated to eject a droplet. When an air bubble is entrained, it disrupts

  2. Bubbles in inkjet printheads : analytical and numerical models

    NARCIS (Netherlands)

    Jeurissen, R.J.M.

    2009-01-01

    The phenomenon of nozzle failure of an inkjet printhead due to entrainment of air bubbles was studied using analytical and numerical models. The studied inkjet printheads consist of many channels in which an acoustic field is generated to eject a droplet. When an air bubble is entrained, it disrupts

  3. Inkjet printhead and inkjet printer containing the same printhead

    NARCIS (Netherlands)

    2006-01-01

    An inkjet printhead containing two substantially closed ink chambers separated by a wall, each of the chambers having associated therewith an electromechanical converter, where actuation of the converter corresponding to a first chamber of said printhead will lead to a volume change in a second

  4. Printhead and inkjet printer comprising such a printhead

    NARCIS (Netherlands)

    2007-01-01

    The invention relates to a printhead comprising multiple substantially closed ink chambers (13), the ink chambers being mutually separated by at least one wall (12), wherein each of the chambers comprises an electro-mechanical converter (15), where actuation of the converter leads to a volume change

  5. The dynamics of the piezo inkjet printhead operation

    NARCIS (Netherlands)

    Wijshoff, H.M.A.

    2010-01-01

    The operation of a piezo inkjet printhead involves a chain of processes in many physical domains at different length scales. The final goal is the formation of droplets of all kinds of fluids with any desired volume and velocity, and with a reliability as high as possible. The physics behind the chain of

  6. Characteristics of an electro-rheological fluid valve used in an inkjet printhead

    Science.gov (United States)

    Lee, C. Y.; Liao, W. C.

    2000-12-01

    The demand for non-impact printers has grown considerably with the advent of personal computers. For entry-level mass production, two drop-on-demand techniques have dominated the market - piezoelectric impulse and thermal-bubble types. However, the high cost of the piezoelectric printhead and the thermal problems encountered by the thermal-bubble jet printhead have restrained the use of these techniques in an array-type printhead. In this study, we propose a new design of printhead with an electro-rheological (ER) fluid acting as a control medium. The ER fluid valve controls the ink ejection. As a first step toward developing this new printhead, the characteristics of an ER fluid valve which controls the deflection of the elastic diaphragm are investigated. First, the response of a prototype is tested experimentally to prove the feasibility of using this ER valve for the inkjet printhead. Then, the discretized governing equation of the ER valve is derived. Finally, the prototype of the ER valve is fabricated. The experimental measurement based on the sinusoidal response verifies both the theoretical analysis and the controllability of the response of the ER valve by the applied electric field.

  7. Effects of dwell time of excitation waveform on meniscus movements for a tubular piezoelectric print-head: experiments and model

    Science.gov (United States)

    Chang, Jiaqing; Liu, Yaxin; Huang, Bo

    2017-07-01

    In inkjet applications, it is normal to search for an optimal drive waveform when dispensing a fresh fluid or adjusting a newly fabricated print-head. To test trial waveforms with different dwell times, a camera and a strobe light were used to image the protruding or retracting liquid tongues without ejecting any droplets. An edge detection method was used to calculate the lengths of the liquid tongues to draw the meniscus movement curves. The meniscus movement is determined by the time-domain response of the acoustic pressure at the nozzle of the print-head. Starting from the inverse piezoelectric effect, a mathematical model which considers the liquid viscosity in acoustic propagation is constructed to study the acoustic pressure response at the nozzle of the print-head. The liquid viscosity retards the propagation speed and damps the harmonic amplitude. The pressure response, which is the combined effect of the acoustic pressures generated during the rising time and the falling time and after their propagations and reflections, explains the meniscus movements well. Finally, the optimal dwell time for droplet ejection is discussed.
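
    The role of the dwell time can be illustrated with a toy superposition model; this is not the record's viscous acoustic model, merely a sketch in which the nozzle pressure is the sum of two damped oscillations launched by the rising and the sign-inverted falling edge of the drive pulse. The values of f0 and tau below are assumed for illustration.

      import numpy as np

      # Toy superposition model of the nozzle pressure response; all
      # parameter values are assumptions, not fitted printhead data.
      f0, tau = 50e3, 40e-6                 # channel acoustic frequency, damping time
      t = np.linspace(0.0, 200e-6, 4000)

      def edge_response(t, t0, sign):
          """Damped sinusoid launched by one waveform edge at time t0."""
          osc = np.exp(-(t - t0) / tau) * np.sin(2 * np.pi * f0 * (t - t0))
          return sign * np.where(t >= t0, osc, 0.0)

      def peak_pressure(dwell):
          p = edge_response(t, 0.0, +1.0) + edge_response(t, dwell, -1.0)
          return p.max()

      dwells = np.linspace(1e-6, 40e-6, 400)
      best = max(dwells, key=peak_pressure)
      print(f"best dwell ~ {best * 1e6:.1f} us; half acoustic period = {0.5e6 / f0:.1f} us")

    In such a sketch the peak pressure, and hence the strongest meniscus excursion, occurs when the dwell time is close to half the channel's acoustic period, shortened slightly by damping.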

  8. Effects of dwell time of excitation waveform on meniscus movements for a tubular piezoelectric print-head: experiments and model

    International Nuclear Information System (INIS)

    Chang, Jiaqing; Liu, Yaxin; Huang, Bo

    2017-01-01

    In inkjet applications, it is normal to search for an optimal drive waveform when dispensing a fresh fluid or adjusting a newly fabricated print-head. To test trial waveforms with different dwell times, a camera and a strobe light were used to image the protruding or retracting liquid tongues without ejecting any droplets. An edge detection method was used to calculate the lengths of the liquid tongues to draw the meniscus movement curves. The meniscus movement is determined by the time-domain response of the acoustic pressure at the nozzle of the print-head. Starting from the inverse piezoelectric effect, a mathematical model which considers the liquid viscosity in acoustic propagation is constructed to study the acoustic pressure response at the nozzle of the print-head. The liquid viscosity retards the propagation speed and damps the harmonic amplitude. The pressure response, which is the combined effect of the acoustic pressures generated during the rising time and the falling time and after their propagations and reflections, explains the meniscus movements well. Finally, the optimal dwell time for droplet ejection is discussed. (paper)

  9. Intelligent Adjustment of Printhead Driving Waveform Parameters for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

    In practical applications of 3D electronic printing, a major challenge is to adjust the printhead for high print resolution and accuracy. However, an exhausting manual selection process inevitably wastes a lot of time. Therefore, in this paper, we propose a new intelligent adjustment method, which adopts the artificial bee colony algorithm to optimize the printhead driving waveform parameters and reach the desired printhead state. Experimental results show that this method can quickly and accurately find a suitable combination of driving waveform parameters to meet the needs of applications.
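
    For readers unfamiliar with the optimizer named above, the following minimal artificial-bee-colony sketch tunes two waveform parameters (dwell time, amplitude) against a placeholder objective. The paper's real objective (the measured printhead state) and parameter set are not reproduced here; every bound and the assumed optimum below are illustrative only.

      import random

      # Minimal artificial-bee-colony (ABC) sketch. Objective is a placeholder
      # standing in for a measured printhead-state error; smaller is better.
      LOW, HIGH = [2.0, 10.0], [20.0, 40.0]      # assumed bounds: dwell (us), volts

      def objective(x):
          dwell, volts = x
          return (dwell - 11.3) ** 2 + 0.05 * (volts - 24.0) ** 2

      def clamp(v, lo, hi):
          return max(lo, min(hi, v))

      def random_source():
          return [random.uniform(l, h) for l, h in zip(LOW, HIGH)]

      def neighbor(x, pop):
          k = random.randrange(len(x))
          partner = random.choice([s for s in pop if s is not x])
          y = list(x)
          y[k] = clamp(x[k] + random.uniform(-1, 1) * (x[k] - partner[k]), LOW[k], HIGH[k])
          return y

      def abc(n_sources=10, limit=20, iters=200):
          pop = [random_source() for _ in range(n_sources)]
          trials = [0] * n_sources
          for _ in range(iters):
              for i in range(n_sources):         # employed bees: local search
                  cand = neighbor(pop[i], pop)
                  if objective(cand) < objective(pop[i]):
                      pop[i], trials[i] = cand, 0
                  else:
                      trials[i] += 1
              fits = [1.0 / (1.0 + objective(s)) for s in pop]
              total = sum(fits)
              for _ in range(n_sources):         # onlooker bees: favor good sources
                  r, acc = random.uniform(0, total), 0.0
                  for i, f in enumerate(fits):
                      acc += f
                      if acc >= r:
                          cand = neighbor(pop[i], pop)
                          if objective(cand) < objective(pop[i]):
                              pop[i], trials[i] = cand, 0
                          else:
                              trials[i] += 1
                          break
              for i in range(n_sources):         # scout bees: restart stale sources
                  if trials[i] > limit:
                      pop[i], trials[i] = random_source(), 0
          return min(pop, key=objective)

      print(abc())   # converges near the assumed optimum [11.3, 24.0]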

  10. Design and fabrication of a PET/PTFE-based piezoelectric squeeze mode drop-on-demand inkjet printhead with interchangeable nozzle

    KAUST Repository

    Li, Erqiang

    2010-09-01

    A PET/PTFE-based piezoelectric squeeze mode drop-on-demand inkjet printhead with interchangeable nozzles is designed and fabricated. The printhead chamber is comprised of PET (polyethylene terephthalate) tubing or PTFE (polytetrafluoroethylene, or Teflon) tubing, which is a much softer material than the conventionally used glass tubing. For the same applied voltage, a PET/PTFE-based printhead will generate a larger volume change in the material to be dispensed. The novel printhead fabricated herein has successfully dispensed liquids with viscosities up to 100 cps, as compared to 20 cps for commercial printheads. Furthermore, the PTFE-based printhead provides excellent corrosion resistance when strongly corrosive inks are involved. The interchangeable nozzle design enables the same printhead to be fitted with nozzles of different orifice sizes, so a clogged nozzle can be easily removed for cleaning or replacement. The characteristics of this novel printhead are also studied by dispensing glycerin-water solutions. © 2010 Elsevier B.V. All rights reserved.

  11. Air entrapment in piezo-driven inkjet printheads

    NARCIS (Netherlands)

    de Jong, J.; de Bruin, G.J.; de Bruin, Gerrit; Reinten, Hans; van den Berg, Marc; Wijshoff, Herman; Wijshoff, H.; Versluis, Michel; Lohse, Detlef

    2006-01-01

    The stability of inkjet printers is a major requirement for high-quality printing. However, in piezo-driven inkjet printheads, air entrapment can lead to malfunctioning of the jet formation. The piezoactuator is employed to actively monitor the channel acoustics and to identify distortions at an

  12. Performance improvement of a drop-on-demand inkjet printhead using an optimization-based feedforward control method

    NARCIS (Netherlands)

    Khalate, A.; Bombois, X.; Babuska, R.; Wijshoff, H.M.A.; Waarsing, R.

    2011-01-01

    The printing quality delivered by a drop-on-demand (DoD) inkjet printhead is limited by the residual oscillations in the ink channel. The maximal jetting frequency of a DoD inkjet printhead can be increased by quickly damping the residual oscillations, thereby bringing the ink channel

  13. Drop-on-Demand Inkjet Printhead Performance Enhancement by Dynamic Lumped Element Modeling for Printable Electronics Fabrication

    Directory of Open Access Journals (Sweden)

    Maowei He

    2014-01-01

    The major challenge in printable electronics fabrication is print resolution and accuracy. In this paper, the dynamic lumped element model (DLEM) is proposed to directly simulate the formation process of an inkjet-printed nanosilver droplet and is used for predictively controlling jetting characteristics. The static lumped element model (LEM) previously developed by the authors is extended to a dynamic model with time-varying equivalent circuits to characterize the nonlinear behaviors of the piezoelectric printhead. The model is then used to investigate how the performance of the piezoelectric ceramic actuator influences the jetting characteristics of nanosilver ink. Finally, the proposed DLEM is applied to predict the printing quality using nanosilver ink. Experimental results show that, compared to other analytic models, the proposed DLEM has a simpler structure with sufficient simulation and prediction accuracy.
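
    A static lumped-element model of the kind this record extends can be sketched as a series R-L-C acoustic circuit for one ink channel; the DLEM of the paper additionally makes the elements time-varying, which is not shown here. Element values, drive pulse and units follow the usual acoustic analogy and are assumed.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Series R-L-C circuit for one channel: inertance La (fluid mass),
      # compliance Ca (ink/channel elasticity), resistance Ra (viscous loss),
      # driven by a trapezoidal piezo pressure pulse. Generic *static* LEM
      # sketch; assumed values set a ~60 kHz resonance.
      La, Ca, Ra = 1e7, 7e-19, 8e11          # kg/m^4, m^3/Pa, Pa*s/m^3

      def p_drive(t, t_rise=1e-6, dwell=8e-6, amp=1e5):
          """Trapezoidal drive pressure: ramp up, hold for the dwell, ramp down."""
          if t < t_rise:
              return amp * t / t_rise
          if t < t_rise + dwell:
              return amp
          if t < 2 * t_rise + dwell:
              return amp * (1.0 - (t - t_rise - dwell) / t_rise)
          return 0.0

      def rhs(t, y):
          q, v = y                            # q: displaced volume, v = dq/dt: flow
          return [v, (p_drive(t) - Ra * v - q / Ca) / La]

      sol = solve_ivp(rhs, (0.0, 5e-5), [0.0, 0.0], max_step=5e-8)
      print("peak flow through the nozzle branch:", np.abs(sol.y[1]).max(), "m^3/s")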

  14. Design and fabrication of a PET/PTFE-based piezoelectric squeeze mode drop-on-demand inkjet printhead with interchangeable nozzle

    KAUST Repository

    Li, Erqiang; Xu, Qian; Sun, Jie; Fuh, Jerry; Wong, Yokesan; Thoroddsen, Sigurdur T

    2010-01-01

    , or Teflon) tubing, which is a much softer material than the conventionally used glass tubing. For the same applied voltage, a PET/PTFE-based printhead will generate a larger volume change in the material to be dispensed. The novel printhead fabricated

  15. Adjusting inkjet printhead parameters to deposit drugs into micro-sized reservoirs

    Directory of Open Access Journals (Sweden)

    Mau Robert

    2016-09-01

    Drug delivery systems (DDS) ensure that therapeutically effective drug concentrations are delivered locally to the target site. For that reason, it is common to coat implants with a degradable polymer which contains drugs. However, the use of polymers as drug carriers has been associated with adverse side effects. For that reason, several technologies have been developed to design polymer-free DDS. In the literature it has been shown that micro-sized reservoirs can be applied as drug reservoirs. Inkjet techniques are capable of depositing drugs into these reservoirs. In this study, two different geometries of micro-sized reservoirs have been loaded with a drug (ASA) using a drop-on-demand inkjet printhead. Correlations between the characteristics of the drug solution, the operating parameters of the printhead and the geometric parameters of the reservoir are shown. It is indicated that the wettability of the surface plays a key role in drug deposition into micro-sized reservoirs.

  16. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Experience in operating and developing such a system shows that the only reasonable way to gain strong management control of it is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large-scale models and data bases

  17. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  18. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  19. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  20. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model, no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects

  1. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
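
    The heart of the procedure is easy to state in one dimension: for ecological diffusion the effective large-scale motility works out to the harmonic average of the small-scale habitat motilities, so slow habitats dominate the large-scale rate. The sketch below only illustrates that averaging step; the motility values and habitat fractions are assumed.

      import numpy as np

      # One-dimensional illustration: for ecological diffusion u_t = (mu(x) u)_xx
      # the homogenized motility is the *harmonic* average over habitats.
      mu = np.array([200.0, 2000.0, 800.0])   # motility per habitat type (assumed)
      frac = np.array([0.5, 0.3, 0.2])        # areal fraction of each habitat

      harmonic = 1.0 / np.sum(frac / mu)      # effective large-scale motility
      arithmetic = np.sum(frac * mu)          # naive average, for contrast
      print(harmonic, arithmetic)             # harmonic << arithmetic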

  2. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (∼20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (∼150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
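
    The large-scale step (a correlated Gaussian field turned into a binary rain/no-rain mask) can be sketched as follows. The spectral synthesis, grid size, anisotropic correlation lengths and rain occupation rate below are all illustrative assumptions, not the parameterization fitted to ARAMIS data in the paper.

      import numpy as np

      # Spectral synthesis of an anisotropically correlated Gaussian field,
      # thresholded so a prescribed fraction of the domain is raining.
      rng = np.random.default_rng(0)
      n = 256                                   # n x n grid (assumed ~1000 km domain)
      lx, ly = 30.0, 10.0                       # anisotropic correlation lengths

      kx = np.fft.fftfreq(n)[:, None]
      ky = np.fft.fftfreq(n)[None, :]
      power = np.exp(-((kx * lx) ** 2 + (ky * ly) ** 2))   # smooth power spectrum
      noise = rng.normal(size=(n, n))
      field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(power)).real
      field = (field - field.mean()) / field.std()

      occupation = 0.35                         # desired raining fraction (assumed)
      raining = field > np.quantile(field, 1.0 - occupation)
      print(raining.mean())                     # ~0.35 by construction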

  3. Potential up-scaling of inkjet-printed devices for logical circuits in flexible electronics

    Energy Technology Data Exchange (ETDEWEB)

    Mitra, Kalyan Yoti, E-mail: kalyan-yoti.mitra@mb.tu-chemnitz.de, E-mail: enrico.sowade@mb.tu-chemnitz.de; Sowade, Enrico, E-mail: kalyan-yoti.mitra@mb.tu-chemnitz.de, E-mail: enrico.sowade@mb.tu-chemnitz.de [Technische Universität Chemnitz, Department of Digital Printing and Imaging Technology, Chemnitz (Germany); Martínez-Domingo, Carme [Printed Microelectronics Group, CAIAC, Universitat Autònoma de Barcelona, Bellaterra, Spain and Nanobioelectronics and Biosensors Group, Catalan Institute of Nanotechnology (ICN), Universitat Autònoma de Barcelona, Bellaterra, Catalonia (Spain); Ramon, Eloi, E-mail: eloi.ramon@uab.cat [Printed Microelectronics Group, CAIAC, Universitat Autònoma de Barcelona, Bellaterra (Spain); Nanobioelectronics and Biosensors Group, Catalan Institute of Nanotechnology (ICN), Universitat Autònoma de Barcelona, Bellaterra, Catalonia (Spain); Carrabina, Jordi, E-mail: jordi.carrabina@uab.cat [Printed Microelectronics Group, CAIAC, Universitat Autònoma de Barcelona, Bellaterra (Spain); Gomes, Henrique Leonel, E-mail: hgomes@ualg.pt [Universidade do Algarve, Institute of Telecommunications, Faro (Portugal); Baumann, Reinhard R., E-mail: reinhard.baumann@mb.tu-chemnitz.de [Technische Universität Chemnitz, Department of Digital Printing and Imaging Technology, Chemnitz (Germany); Fraunhofer Institute for Electronic Nano Systems (ENAS), Department of Printed Functionalities, Chemnitz (Germany)

    2015-02-17

    Inkjet technology is often mistakenly believed to be a deposition/patterning technology unsuited to high fabrication throughput in the field of printed and flexible electronics. In this work, we report on the 1) printing, 2) fabrication yield and 3) characterization of exemplary simple devices, e.g. capacitors, organic transistors etc., which are the basic building blocks for logical circuits. For this purpose, printing is performed first with a proof-of-concept inkjet printing system, the Dimatix Material Printer 2831 (DMP 2831), using 10 pL small print-heads, and then with the Dimatix Material Printer 3000 (DMP 3000) using 35 pL industrial print-heads (from Fujifilm Dimatix). Printing on the DMP 3000 with industrial print-heads (sheet-to-sheet) paves the path towards industrialization, defined here as printing in roll-to-roll format using industrial print-heads; this intermediate step can be termed a 'bridging platform'. The transfer from 10 pL small print-heads to 35 pL industrial print-heads helps the inkjet-printed devices to evolve both in functionality and in up-scaled quantities. The high printed quantities and yield of the inkjet-printed devices demonstrate the deposition reliability and the potential to print circuits. This reliability is essential when printing circuits, e.g. inverters, ring oscillators and other more complex logical circuits, which require devices such as organic transistors to be connected at different stage levels. The up-scaled inkjet-printed devices are also characterized, and the results define an operating domain within which they work optimally. This is needed for predicting real device functionality and for integrating the devices into a planned circuit.

  4. Potential up-scaling of inkjet-printed devices for logical circuits in flexible electronics

    International Nuclear Information System (INIS)

    Mitra, Kalyan Yoti; Sowade, Enrico; Martínez-Domingo, Carme; Ramon, Eloi; Carrabina, Jordi; Gomes, Henrique Leonel; Baumann, Reinhard R.

    2015-01-01

    Inkjet technology is often mistakenly believed to be a deposition/patterning technology unsuited to high fabrication throughput in the field of printed and flexible electronics. In this work, we report on the 1) printing, 2) fabrication yield and 3) characterization of exemplary simple devices, e.g. capacitors, organic transistors etc., which are the basic building blocks for logical circuits. For this purpose, printing is performed first with a proof-of-concept inkjet printing system, the Dimatix Material Printer 2831 (DMP 2831), using 10 pL small print-heads, and then with the Dimatix Material Printer 3000 (DMP 3000) using 35 pL industrial print-heads (from Fujifilm Dimatix). Printing on the DMP 3000 with industrial print-heads (sheet-to-sheet) paves the path towards industrialization, defined here as printing in roll-to-roll format using industrial print-heads; this intermediate step can be termed a 'bridging platform'. The transfer from 10 pL small print-heads to 35 pL industrial print-heads helps the inkjet-printed devices to evolve both in functionality and in up-scaled quantities. The high printed quantities and yield of the inkjet-printed devices demonstrate the deposition reliability and the potential to print circuits. This reliability is essential when printing circuits, e.g. inverters, ring oscillators and other more complex logical circuits, which require devices such as organic transistors to be connected at different stage levels. The up-scaled inkjet-printed devices are also characterized, and the results define an operating domain within which they work optimally. This is needed for predicting real device functionality and for integrating the devices into a planned circuit.

  5. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural, catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is a prerequisite for a reliable interpretation of simulation results. Model evaluations may also make it possible to detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  6. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software, such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core view-dependent multi-resolution rendering scheme to realize the real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.

  7. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  8. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
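
    The screening logic described (runoff-coefficient and water-balance checks) amounts to a few flags per basin. A minimal sketch, with assumed column names and made-up long-term means:

      import pandas as pd

      # Flag basins whose long-term water balance cannot close. P, Q and PET
      # are long-term means in mm/yr; the example values are invented.
      basins = pd.DataFrame({
          "basin": ["A", "B", "C"],
          "P": [600.0, 450.0, 900.0],      # precipitation
          "Q": [250.0, 470.0, 300.0],      # observed runoff
          "PET": [500.0, 600.0, 450.0],    # potential evaporation
      })

      basins["runoff_coeff"] = basins["Q"] / basins["P"]
      basins["losses"] = basins["P"] - basins["Q"]
      # Q > P hints at forcing errors such as snow undercatch; losses above
      # the PET limit hint at inconsistent forcing or a wrong basin area.
      basins["flag_high_runoff"] = basins["runoff_coeff"] > 1.0
      basins["flag_excess_loss"] = basins["losses"] > basins["PET"]
      print(basins)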

  9. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological

  10. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an
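
    Drought characteristics such as number, duration and severity are typically derived with a threshold-level method with pooling of nearby events. The sketch below illustrates that generic method on synthetic data; it is not the WaterMIP analysis code, and the percentile and pooling window are assumed.

      import numpy as np

      # Threshold-level method: a drought runs while discharge stays below a
      # threshold (20th percentile here); events separated by fewer than
      # `pool` time steps are merged into one.
      def droughts(flow, q=0.2, pool=3):
          thr = np.quantile(flow, q)
          below = list(flow < thr) + [False]     # sentinel closes a trailing event
          events, start = [], None
          for i, b in enumerate(below):
              if b and start is None:
                  start = i
              elif not b and start is not None:
                  events.append([start, i])      # [start, end) of one dry spell
                  start = None
          pooled = []
          for ev in events:                      # pool events with short gaps
              if pooled and ev[0] - pooled[-1][1] < pool:
                  pooled[-1][1] = ev[1]
              else:
                  pooled.append(ev)
          # (duration, severity = cumulative deficit below the threshold)
          stats = [(e - s, float(np.maximum(thr - flow[s:e], 0.0).sum()))
                   for s, e in pooled]
          return thr, pooled, stats

      flow = np.random.default_rng(1).gamma(2.0, 5.0, 365)   # synthetic discharge
      thr, events, stats = droughts(flow)
      print(len(events), stats[:3])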

  11. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale model ... of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.

  12. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  13. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
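
    The flavor of the approach can be conveyed with the textbook conjugate case rather than the paper's full hierarchy: an inverse-Wishart prior shrinks the sample covariance toward a prior matrix, which removes the ill-conditioning that drives the "overfitting" mentioned above. The dimensions and prior choice below are assumed.

      import numpy as np

      # Conjugate inverse-Wishart illustration (not the paper's exact model):
      # the posterior mean blends a prior matrix with the sample scatter.
      rng = np.random.default_rng(0)
      n, p = 30, 100                              # few samples, many variables
      X = rng.normal(size=(n, p))

      Xc = X - X.mean(axis=0)
      S = Xc.T @ Xc                               # scatter matrix (rank < p)
      nu0, Psi = p + 2, np.eye(p)                 # weak proper prior, identity guess
      post_mean = (Psi + S) / (nu0 + n - p - 1)   # E[Sigma | X] under IW conjugacy

      print(np.linalg.cond(S / (n - 1)))          # enormous: sample cov is singular
      print(np.linalg.cond(post_mean))            # modest: regularized estimate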

  14. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)

  15. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  16. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  17. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
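
    The Boolean viewshed mentioned above reduces, per target cell, to a line-of-sight test along the surface model. A minimal sketch with simplified line sampling; the DSM, positions and observer height are assumed:

      import numpy as np

      # Line-of-sight core of a Boolean viewshed on a gridded surface model:
      # the target is visible when no intermediate cell subtends a steeper
      # vertical slope (seen from the observer) than the target itself.
      def visible(dsm, obs, tgt, obs_height=1.6):
          (r0, c0), (r1, c1) = obs, tgt
          z0 = dsm[r0, c0] + obs_height
          steps = int(max(abs(r1 - r0), abs(c1 - c0)))
          tgt_slope = (dsm[r1, c1] - z0) / np.hypot(r1 - r0, c1 - c0)
          for i in range(1, steps):
              f = i / steps
              r, c = round(r0 + f * (r1 - r0)), round(c0 + f * (c1 - c0))
              if (dsm[r, c] - z0) / np.hypot(r - r0, c - c0) > tgt_slope:
                  return False               # an intermediate cell blocks the line
          return True

      dsm = np.zeros((50, 50))
      dsm[25, 10:40] = 5.0                   # a 5 m wall across the scene
      print(visible(dsm, (10, 25), (40, 25)))   # False: the wall blocks the view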

  18. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
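
    A minimal sketch of the model form described above, with generic coefficients ν_e and c_N standing in for the paper's specific closure (an assumption, not the published model):

        \tau_{ij}^{\mathrm{mod}} = -2\,\nu_e\,\bar{S}_{ij}
            + c_N\left(\bar{S}_{ik}\bar{\Omega}_{kj} - \bar{\Omega}_{ik}\bar{S}_{kj}\right),
        \qquad
        \bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j\bar{u}_i + \partial_i\bar{u}_j\right),
        \quad
        \bar{\Omega}_{ij} = \tfrac{1}{2}\left(\partial_j\bar{u}_i - \partial_i\bar{u}_j\right).

    The commutator term is traceless and orthogonal to the rate-of-strain tensor (its contraction with \bar{S}_{ij} vanishes by the cyclic property of the trace), so it redistributes energy without adding subgrid dissipation; this is what makes such a term a candidate for representing transport rather than dissipation.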

  19. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  20. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures which, unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  1. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need for taking the hydrogen explosion phenomena into account in risk management. Thus combustion modelling in large-scale geometries is one of the remaining severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore, model development must focus on adapting existing approaches, or creating new ones, capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of numerical simulation are presented together with comparisons, critical discussions and conclusions. (authors)

  2. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales
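
    In the standard hierarchical notation, the amplitudes referred to above are defined as

        Q_3 \equiv \frac{\zeta_{123}}{\xi_{12}\xi_{23} + \xi_{23}\xi_{31} + \xi_{31}\xi_{12}},
        \qquad
        S_3 \equiv \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^{2}},

    where ξ is the two-point and ζ the connected three-point correlation function of the galaxy density contrast δ. Hierarchical clustering predicts a roughly scale-independent Q_3, which is why a sharp decline of Q_J at r ≳ R_p signals scale-dependent bias rather than true large-scale power in the density field.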

  3. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  4. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid has an impact on the power system quite different from that of traditional power plants. Therefore it is necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. However, an effective WTG model must be established first. As the doubly-fed VSCF wind turbine has become the mainstream wind turbine model, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed building process of the model. It then surveys common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the measured output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  5. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  6. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
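
    The greedy merging step (a) can be sketched in a few lines: sort candidate local matches by score and accept each one that stays structurally consistent with what has already been merged. The conflict test below (one-to-one correspondence) is a stand-in for SME's real structural constraints, and the scores and items are invented.

      # Greedy merging sketch, not the SME source code.
      def greedy_merge(kernels):
          """kernels: iterable of (score, base_item, target_item)."""
          mapping, used_base, used_target, total = [], set(), set(), 0.0
          for score, b, t in sorted(kernels, key=lambda k: -k[0]):
              if b not in used_base and t not in used_target:
                  mapping.append((b, t))
                  used_base.add(b)
                  used_target.add(t)
                  total += score
          return mapping, total

      kernels = [(0.9, "sun", "nucleus"), (0.8, "planet", "electron"),
                 (0.5, "sun", "electron"), (0.4, "mass", "charge")]
      print(greedy_merge(kernels))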

  7. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper rely either on computational algorithms or on procedural implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory, using clusters; these clusters provide high computational performance to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  8. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to strongly influence the way we act in the transportation network (and organise our lives), because of longer travel times...... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real......, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo...

  9. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

    BEER (building energy efficiency retrofit) projects have been initiated in many nations and regions around the world. Existing studies of BEER focus on modeling and planning based on one building and a one-year retrofitting period, which cannot be applied to large BEER projects with multiple buildings and multi-year retrofits. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits the essential requirements of real-world projects. The large-scale BEER is newly studied with a control approach rather than the optimization approach commonly used before. Optimal control is proposed to design the optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy changes dynamically along the dimensions of time, building and technology. The TBT framework and the optimal control approach are verified in a large BEER project, and results indicate that promising energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.
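
    To make the TBT decision structure concrete, the toy sketch below greedily picks, for each building, the technology with the best savings-per-cost ratio under a yearly budget. This greedy stand-in only illustrates the time-building-technology dimensions; the paper itself uses optimal control, and all names and numbers are invented.

        # Hypothetical data: savings[b][t] = annual energy saved, cost[b][t] = retrofit cost.
        buildings = ["B1", "B2", "B3"]
        techs = ["lighting", "HVAC", "envelope"]
        savings = {"B1": [5, 12, 20], "B2": [4, 10, 15], "B3": [6, 14, 25]}
        cost    = {"B1": [2,  8, 18], "B2": [2,  7, 14], "B3": [3,  9, 22]}

        budget_per_year = 20.0
        plan = {}  # (year, building) -> chosen technology
        for year in range(3):
            remaining = budget_per_year
            for b in buildings:
                if b in {blg for (_, blg) in plan}:
                    continue  # each building retrofitted once in this toy version
                options = sorted(range(len(techs)),
                                 key=lambda t: savings[b][t] / cost[b][t], reverse=True)
                for t in options:
                    if cost[b][t] <= remaining:
                        plan[(year, b)] = techs[t]
                        remaining -= cost[b][t]
                        break

        print(plan)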

  10. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars on the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today’s challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today’s and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for systems development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  11. Modeling the impact of large-scale energy conversion systems on global climate

    International Nuclear Information System (INIS)

    Williams, J.

    There are three energy options which could satisfy a projected energy requirement of about 30 TW and these are the solar, nuclear and (to a lesser extent) coal options. Climate models can be used to assess the impact of large-scale deployment of these options. The impact of waste heat has been assessed using energy balance models and general circulation models (GCMs). Results suggest that the impacts are significant when the heat input is very high, and studies of more realistic scenarios are required. Energy balance models, radiative-convective models and a GCM have been used to study the impact of doubling the atmospheric CO2 concentration. State-of-the-art models estimate a surface temperature increase of 1.5-3.0 °C with large amplification near the poles, but much uncertainty remains. Very few model studies have been made of the impact of particles on global climate; more information on the characteristics of particle input is required. The impact of large-scale deployment of solar energy conversion systems has received little attention, but model studies suggest that large-scale changes in surface characteristics associated with such systems (surface heat balance, roughness and hydrological characteristics and ocean surface temperature) could have significant global climatic effects. (Auth.)

  12. Detonation and fragmentation modeling for the description of large scale vapor explosions

    International Nuclear Information System (INIS)

    Buerger, M.; Carachalios, C.; Unger, H.

    1985-01-01

    The thermal detonation modeling of large-scale vapor explosions is shown to be indispensable for realistic safety evaluations. A steady-state as well as a transient detonation model have been developed, including detailed descriptions of the dynamics as well as of the fragmentation processes inside a detonation wave. Strong restrictions on large-scale vapor explosions are obtained from this modeling, and they indicate that the reactor pressure vessel would even withstand explosions with unrealistically high masses of corium involved. The modeling is supported by comparisons with a detonation experiment and - concerning its key part - hydrodynamic fragmentation experiments. (orig.) [de

  13. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors, capturing tank profiles and delays due to plug flows. This work publishes, for the first time, demonstration-scale real data for validation, showing that the model library is suitable...

  14. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is that it requires no input model coefficient; the model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization.

  15. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multi-effects among the wind turbine units and certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed by using measured wind speed data. The characteristics of a large-scale wind farm are also discussed.
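
    A minimal sketch of an integral output calculation under strong simplifying assumptions: each unit follows a piecewise power curve of the incoming wind speed, and a constant wake factor stands in for the multi-unit effects mentioned above. The cut-in, rated and cut-out values are illustrative, not taken from the paper.

        import numpy as np

        def unit_power(v, p_rated=2.0, v_in=3.0, v_rated=12.0, v_out=25.0):
            """Piecewise power curve of a single turbine (MW); illustrative constants."""
            if v < v_in or v >= v_out:
                return 0.0
            if v >= v_rated:
                return p_rated
            # Cubic growth between cut-in and rated speed.
            return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)

        def farm_output(wind_speeds, n_units=50, wake_factor=0.92):
            """Integral farm output over a measured wind-speed series."""
            return [n_units * wake_factor * unit_power(v) for v in wind_speeds]

        measured = np.array([4.2, 7.8, 11.5, 13.0, 26.0])  # m/s, stand-in data
        print(farm_output(measured))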

  16. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is inappropriate for them. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale parts, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which exhibit a closure problem and therefore require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = (1/2)⟨u'_i u'_i⟩, where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale horizontal averaging operator of the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes.
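
    Restated in standard notation, the MKE definition quoted above is (summation over the repeated index i implied):

        \[
            \tilde{E} \;=\; \tfrac{1}{2}\,\bigl\langle u_i'\,u_i' \bigr\rangle , \qquad i = 1, 2, 3,
        \]

    where the angle brackets are the grid-scale horizontal averaging operator of the large-scale model and the tilde marks the corresponding large-scale mean value.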

  17. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    This monograph presents recent research on the modeling and control of a large nuclear reactor using a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady-state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of a complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. The monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state-space form, thus overcoming these difficulties. It further brings in innovative methods of controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.

  18. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing...

  19. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  20. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10⁷ ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  1. The large-scale peculiar velocity field in flat models of the universe

    International Nuclear Information System (INIS)

    Vittorio, N.; Turner, M.S.

    1986-10-01

    The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models are examined with two components of mass density, where one of the components is smoothly distributed, and the large-scale (≥ 10 h⁻¹ Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled; but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs

  2. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  3. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data which are usually only available in developed countries.

  5. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
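
    The filtering step can be sketched directly: Fourier-transform the density field, multiply by a scale-dependent linear bias b(k), and transform back to obtain a spatially varying reionization-redshift field. The bias form below is a generic placeholder, not the fitted parametric function of the paper, and all numerical values are illustrative.

        import numpy as np

        def redshift_field(delta, box_size, b0=0.6, k0=0.2, alpha=1.0, z_mean=8.0):
            """Filter a 3D overdensity field with a scale-dependent bias (placeholder form)."""
            n = delta.shape[0]
            k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
            kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
            kmag = np.sqrt(kx**2 + ky**2 + kz**2)
            bias = b0 / (1.0 + kmag / k0) ** alpha          # placeholder b(k)
            delta_z = np.fft.ifftn(bias * np.fft.fftn(delta)).real
            return z_mean * (1.0 + delta_z)                 # spatially varying z_re

        delta = np.random.default_rng(1).normal(0.0, 1.0, (32, 32, 32))  # stand-in field
        print(redshift_field(delta, box_size=100.0).mean())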

  6. Perturbation theory instead of large scale shell model calculations

    International Nuclear Information System (INIS)

    Feldmeier, H.; Mankos, P.

    1977-01-01

    Results of large-scale shell model calculations for (sd)-shell nuclei are compared with a perturbation theory treatment, which provides an excellent approximation when the SU(3) basis is used as a starting point. The results indicate that a perturbation theory treatment in an SU(3) basis including 2ℏω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.) [de

  7. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available.

  8. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    Science.gov (United States)

    Her, Y. G.

    2017-12-01

    Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to be common in the near future as global-scale remotely sensed data are emerging and computing resources have advanced rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM), which require excessive computing resources (implicit scheme) to manipulate large matrices or small simulation time intervals (explicit scheme) to maintain the stability of the solution, to describe two-dimensional overland processes. Others make unrealistic assumptions such as constant overland flow velocity to reduce the computational loads of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds in different landscapes and sizes from 3.5 km² to 2,800 km², at a spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including the maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges were discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understandings, by using remotely sensed hydrological...

  9. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment

  10. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) CAD drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  11. Towards a 'standard model' of large scale structure formation

    International Nuclear Information System (INIS)

    Shafi, Q.

    1994-01-01

    We explore constraints on inflationary models employing data on large-scale structure, mainly from COBE temperature anisotropies and IRAS-selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore, the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favor cold plus hot dark matter models with n equal to or close to unity and Ω_HDM ≈ 0.2-0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs

  12. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches...... of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements...... improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet....

  13. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    Science.gov (United States)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large-scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large-scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data, as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions by running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other is the Eta model scheme, in which information is used at the outermost boundary only and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, the skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification, in that the forecast spatial wind speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, verification of large-scale features can, we suggest, be done in a manner that may be more physically meaningful than verification via spectral decomposition, which is a standard RCM verification method. The results we have at this point are somewhat...
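
    The verification idea described above reduces to a 2x2 contingency table of forecast and analyzed exceedances of a wind-speed threshold. Below is a minimal sketch of the equitable threat score and bias score in their standard (non-bias-adjusted) forms; the bias adjustment used in the study is omitted, and the data are synthetic.

        import numpy as np

        def ets_and_bias(forecast, analysis, threshold):
            """Equitable threat score and bias score for exceedances of a threshold."""
            f = forecast >= threshold
            a = analysis >= threshold
            hits = np.sum(f & a)
            false_alarms = np.sum(f & ~a)
            misses = np.sum(~f & a)
            total = f.size
            hits_random = (hits + misses) * (hits + false_alarms) / total
            ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
            bias = (hits + false_alarms) / (hits + misses)
            return ets, bias

        rng = np.random.default_rng(0)
        fcst = rng.uniform(0, 60, 10000)                 # stand-in wind speeds (m/s)
        anal = fcst + rng.normal(0, 5, fcst.size)
        print(ets_and_bias(fcst, anal, threshold=30.0))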

  14. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size, such as in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  15. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  16. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, the gravitational acceleration (g) on the scale models was regarded as constant (Earth's gravity) in most of the analogue model studies, and only a few model studies considered larger gravitational accelerations by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down factors and accelerated deformation driven by density differences, such as salt diapirism, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows scale models with a surface area of up to 70 by 70 cm under a maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).

  17. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...

  18. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    Full Text Available This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimizing conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven; thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is stronger than that of the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for the traffic evacuation decision making of large-scale activities.
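
    As a stand-in for the improved PSO used in the paper, the sketch below shows a bare-bones particle swarm loop (the electromagnetism-like mechanism is omitted). The quadratic objective merely stands in for the lower-level evacuation-time model; all parameters are illustrative.

        import numpy as np

        def pso(objective, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            x = rng.uniform(-5, 5, (n_particles, dim))    # particle positions
            v = np.zeros_like(x)                          # particle velocities
            pbest = x.copy()                              # personal bests
            pbest_val = np.apply_along_axis(objective, 1, x)
            gbest = pbest[pbest_val.argmin()].copy()      # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                vals = np.apply_along_axis(objective, 1, x)
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, pbest_val.min()

        # Toy objective standing in for the lower-level evacuation-time model.
        print(pso(lambda p: np.sum((p - 1.0) ** 2)))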

  19. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
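
    The data-reduction idea can be sketched in a few lines: multiply the observation vector (and, likewise, any linearized forward operator) by a short random sketching matrix so that the inverse analysis works in k << n dimensions. A plain Gaussian sketch is used here for simplicity; the paper embeds this step into PCGA and codes it in Julia, so the snippet below is only a conceptual illustration.

        import numpy as np

        rng = np.random.default_rng(42)
        n_obs, k = 20_000, 200                        # many observations, small sketch

        y = rng.normal(size=n_obs)                    # stand-in observation vector
        S = rng.normal(size=(k, n_obs)) / np.sqrt(k)  # Gaussian sketching matrix

        y_sketch = S @ y                              # reduced observations (length k)
        # Any linearized forward operator G is reduced the same way (G_sketch = S @ G),
        # and the inverse problem is then solved with (y_sketch, G_sketch).
        print(y_sketch.shape)
        print(np.linalg.norm(y), np.linalg.norm(y_sketch))  # norms roughly preserved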

  20. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  1. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    Belt conveyors are among the most important devices for transporting bulk solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. It is very important to study dynamic properties in order to improve efficiency and productivity and to guarantee safe, reliable and stable conveyor running. The dynamic research on and applications of large-scale belt conveyors are discussed. The main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work will focus on dynamic analysis, modeling and simulation of the main components and the whole system, and on nonlinear modeling, simulation and vibration analysis of large-scale conveyor systems.

  2. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential...

  3. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the fault-free operation of large-scale integration circuits (LSI) and very-large-scale integration circuits (VLSI). The article provides a comparative analysis of the factors that determine the fault-free operation of integrated circuits, an analysis of existing methods, and a model for evaluating the fault-free operation of LSI and VLSI. The main part describes a proposed algorithm and program for the analysis of fault rates in LSI and VLSI circuits.

  4. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    ...of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov...

  5. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
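
    A compact sketch of Bayesian-model-averaged prediction: each candidate model's predictions are weighted by its posterior model probability (PMP), here approximated from BIC values, which is a common shortcut rather than the full PMP computation described in the article. All data are invented.

        import numpy as np

        def bma_predict(predictions, bics):
            """Average model predictions weighted by approximate posterior model probs."""
            bics = np.asarray(bics, dtype=float)
            # exp(-BIC/2), normalized, approximates the posterior model probability.
            w = np.exp(-0.5 * (bics - bics.min()))
            w /= w.sum()
            return w, np.average(predictions, axis=0, weights=w)

        preds = np.array([[2.1, 3.0, 4.2],    # model 1 predictions
                          [2.4, 2.8, 4.0],    # model 2
                          [1.9, 3.3, 4.5]])   # model 3
        weights, averaged = bma_predict(preds, bics=[120.3, 118.9, 125.0])
        print(weights, averaged)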

  6. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    Full Text Available The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin that contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are separately performed. The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reasonably well reproduce the observed groundwater head time series. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show a promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.
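
    The offline coupling procedure described above can be sketched as a one-way chain: run the land surface model for the whole period, then force the groundwater model with its recharge and surface-water outputs, with no feedback in the other direction. Both model functions below are lumped toy stand-ins, not a real land surface model or MODFLOW.

        import numpy as np

        def run_land_surface_model(precip):
            """Stand-in land surface model: recharge as a fixed fraction of rainfall."""
            recharge = 0.15 * precip                           # mm/day to groundwater
            surface_levels = 10.0 + 0.01 * np.cumsum(precip)   # toy river stage (m)
            return recharge, surface_levels

        def run_groundwater_model(recharge, surface_levels, sy=0.2, leak=0.05):
            """Stand-in transient groundwater model (one lumped cell, not MODFLOW)."""
            head = np.empty_like(recharge)
            h = 9.0
            for i, (r, s) in enumerate(zip(recharge, surface_levels)):
                h += (r / 1000.0) / sy + leak * (s - h)  # recharge + river leakage
                head[i] = h
            return head

        precip = np.random.default_rng(1).gamma(2.0, 2.0, size=365)  # daily forcing
        recharge, stage = run_land_surface_model(precip)             # step 1: offline LSM
        print(run_groundwater_model(recharge, stage)[-5:])           # step 2: groundwater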

  7. Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice

    International Nuclear Information System (INIS)

    Liu, Xiaojing; Ren, Shuo; Cheng, Xu

    2017-01-01

    Highlights: • Experimental data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulation is performed to simulate the large-scale impulsion phenomenon for a tight-lattice bundle. • A mixing model to simulate the large-scale impulsion phenomenon is proposed based on fitting of the CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: Tight lattices are widely adopted in innovative reactor fuel bundle designs since they can increase the conversion ratio and improve the heat transfer between fuel bundles and coolant. It has been noticed that a large-scale impulsion of the cross-velocity exists in the gap region, which plays an important role in the transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study the impulsion of the velocity, a model to describe the wavelength, amplitude and frequency of the mixing coefficient is still missing. This research work takes advantage of the CFD method to simulate the experiment of Krauss and to compare experimental data and simulation results in order to demonstrate the feasibility of the simulation method and turbulence model. Then, based on this verified method and model, several simulations are performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results achieved, a mixing model to simulate the large-scale impulsion phenomenon is proposed and adopted in the current subchannel code. The new mixing model is applied to some fuel assembly analyses by subchannel calculation; it can be noticed that the newly developed mixing model reduces the hot channel factor and contributes to a uniform distribution of the outlet temperature.
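
    The "fitting of the CFD results" step can be illustrated by estimating the amplitude, frequency and phase of the gap impulsion with a sinusoidal least-squares fit. The functional form and the stand-in CFD data are assumptions for illustration, not the model actually proposed in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def impulsion(t, amplitude, freq, phase, offset):
            """Placeholder periodic form for the cross-velocity in the gap region."""
            return amplitude * np.sin(2 * np.pi * freq * t + phase) + offset

        t = np.linspace(0.0, 1.0, 500)                       # s, stand-in CFD time axis
        w = 0.3 * np.sin(2 * np.pi * 12.0 * t + 0.4) + 0.02  # stand-in cross-velocity
        w += np.random.default_rng(2).normal(0.0, 0.02, t.size)

        params, _ = curve_fit(impulsion, t, w, p0=[0.2, 10.0, 0.0, 0.0])
        print("amplitude, frequency, phase, offset:", params)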

  8. An industrial perspective on bioreactor scale-down: what we can learn from combined large-scale bioprocess and model fluid studies.

    Science.gov (United States)

    Noorman, Henk

    2011-08-01

    For industrial bioreactor design, operation, control and optimization, the scale-down approach is often advocated to efficiently generate data on a small scale, and effectively apply suggested improvements to the industrial scale. In all cases it is important to ensure that the scale-down conditions are representative of the real large-scale bioprocess. Progress is hampered by limited detailed and local information from large-scale bioprocesses. Complementary to real fermentation studies, physical aspects of model fluids such as air-water in large bioreactors provide useful information with limited effort and cost. Still, in industrial practice, investments of time, capital and resources often prohibit systematic work, although, in the end, savings obtained in this way are trivial compared to the expenses that result from real process disturbances, batch failures, and non-flyers with loss of business opportunity. Here we try to highlight what can be learned from real large-scale bioprocesses in combination with model fluid studies, and to provide suitable computation tools to overcome data restrictions. The focus is on a specific well-documented case for a 30-m³ bioreactor. Areas for further research from an industrial perspective are also indicated. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    International Nuclear Information System (INIS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-01-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during the early inflationary stage, on the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model, that indicates when gravity becomes repulsive in nature.

  10. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    Science.gov (United States)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during the early inflationary stage, on the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model, that indicates when gravity becomes repulsive in nature.

  11. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    Energy Technology Data Exchange (ETDEWEB)

    Reyes, Luz M., E-mail: luzmarinareyes@gmail.com [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Moreno, Claudia, E-mail: claudia.moreno@cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Madriz Aguilar, Jose Edgar, E-mail: edgar.madriz@red.cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Bellini, Mauricio, E-mail: mbellini@mdp.edu.ar [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata (UNMdP), Funes 3350, C.P. 7600, Mar del Plata (Argentina); Instituto de Investigaciones Fisicas de Mar del Plata (IFIMAR) - Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)

    2012-10-22

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves produced during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, can be derived from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates where gravity becomes repulsive.

  12. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4 % of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so large-scale hydrogen production plants will need to be installed. In this context, the development of low-cost, large-scale electrolysers that can run on 'clean power' appears necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers for future hydrogen production. The different electrolysis technologies were compared. A survey of the electrolysis modules currently available was then made, and the large-scale electrolysis plants installed around the world were reviewed. The main projects related to large-scale electrolysis were also listed. The economics of large-scale electrolysers were discussed, and the influence of energy prices on the cost of hydrogen produced by large-scale electrolysis was evaluated. (authors)

  13. Hydrometeorological variability on a large French catchment and its relation to large-scale circulation across temporal scales

    Science.gov (United States)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, or provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how, and to what extent, the trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed and of the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrates discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach
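
    A minimal sketch of the scale-by-scale ESD idea described above: decompose the predictor and predictand into dyadic temporal scales with a discrete wavelet transform, fit one link per scale, and reconstruct. The pywt/scikit-learn calls are standard, but the linear link, the series names and the toy data are placeholders for the authors' actual scheme:

      import numpy as np
      import pywt
      from sklearn.linear_model import LinearRegression

      def wavelet_esd(slp_index, streamflow, wavelet="db4", level=4):
          # one coefficient array per dyadic temporal scale
          cp_ = pywt.wavedec(slp_index, wavelet, level=level)
          cq_ = pywt.wavedec(streamflow, wavelet, level=level)
          fitted = []
          for a, b in zip(cp_, cq_):
              # scale-dependent large-scale/local-scale link (here: linear)
              reg = LinearRegression().fit(a.reshape(-1, 1), b)
              fitted.append(reg.predict(a.reshape(-1, 1)))
          # reconstruct the local predictand from all scales
          return pywt.waverec(fitted, wavelet)[: len(streamflow)]

      rng = np.random.default_rng(1)
      slp = rng.standard_normal(360)             # 30 years of a monthly SLP index
      q = 0.6 * slp + rng.standard_normal(360)   # toy monthly streamflow
      q_hat = wavelet_esd(slp, q)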

  14. A large-scale multi-species spatial depletion model for overwintering waterfowl

    NARCIS (Netherlands)

    Baveco, J.M.; Kuipers, H.; Nolet, B.A.

    2011-01-01

    In this paper, we develop a model to evaluate the capacity of accommodation areas for overwintering waterfowl, at a large spatial scale. Each day geese are distributed over roosting sites. Based on the energy minimization principle, the birds daily decide which surrounding fields to exploit within

  15. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
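
    The "mean age" quoted above is the first moment of a tracer's boundary impulse response (its transit-time distribution); a toy numerical version, with an invented response shape:

      import numpy as np

      def mean_age(t, g):
          """First moment of an impulse response G(t):
          age = integral(t G dt) / integral(G dt), by trapezoidal quadrature."""
          t, g = np.asarray(t, float), np.asarray(g, float)
          return np.trapz(t * g, t) / np.trapz(g, t)

      t = np.linspace(0.0, 10.0, 121)   # 10 years, monthly sampling
      g = t * np.exp(-t)                # toy transit-time distribution
      print(mean_age(t, g))             # ~2 years for this shape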

  16. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  17. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of the General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level". This means that the parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which can aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  18. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the design stage. (orig.)

  19. Evaluation of subgrid-scale and local wall models in large-eddy simulations of separated flow

    Directory of Open Access Journals (Sweden)

    Sam Ali Al

    2015-01-01

    The performance of subgrid-scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained from large-eddy simulations at different mesh resolutions are compared with direct numerical simulation data. The effectiveness of modeling the wall stresses using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly resolved large-eddy simulations and direct numerical simulation data regardless of the subgrid-scale model. However, the agreement is less satisfactory on the relatively coarse grid without any wall model, and the differences between subgrid-scale models become distinguishable. Using the local wall model restored the basic flow topology and significantly reduced the differences between the coarse-mesh large-eddy simulations and the direct numerical simulation data. The results show that the ability of the local wall model to predict the separation zone depends strongly on the way it is implemented.
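
    The "local log-law" wall model mentioned above amounts to solving the log-law for the friction velocity at each wall-adjacent LES node; a minimal fixed-point version with generic constants (not the paper's implementation):

      import numpy as np

      KAPPA, B = 0.41, 5.2  # standard log-law constants

      def friction_velocity(u, y, nu, iters=50):
          """Solve u/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau,
          given the LES velocity u at wall distance y and viscosity nu."""
          u_tau = np.sqrt(nu * u / y)   # viscous-sublayer initial guess
          for _ in range(iters):
              u_tau = u / ((1.0 / KAPPA) * np.log(y * u_tau / nu) + B)
          return u_tau

      u_tau = friction_velocity(u=10.0, y=1e-3, nu=1.5e-5)
      print(u_tau, u_tau**2)  # wall shear stress per unit density is u_tau^2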

  20. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
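
    The flavor of such a model can be caricatured in a few lines: compute time scales with each core's contended share of the node's sustained (STREAM-like) memory bandwidth, and communication follows a latency-plus-bandwidth law. The function and all numbers are illustrative, not the authors' calibrated framework:

      def hybrid_time(flops_per_core, cores_per_node, node_mem_bw,
                      bytes_per_flop, n_msgs, msg_bytes, latency, net_bw):
          # memory-bandwidth contention: cores share the node's bandwidth
          per_core_bw = node_mem_bw / cores_per_node
          t_compute = flops_per_core * bytes_per_flop / per_core_bw
          # parameterized communication: latency + size/bandwidth per message
          t_comm = n_msgs * (latency + msg_bytes / net_bw)
          return t_compute + t_comm

      # 1 Gflop/core, 4 cores sharing 10 GB/s, 0.5 B/flop, 100 x 1 MB messages
      print(hybrid_time(1e9, 4, 10e9, 0.5, 100, 1e6, 5e-6, 1e9))  # ~0.3 s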

  1. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  2. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of the different physical processes in producing the extreme precipitation can be difficult to disentangle. Here we examine the large-scale forcings and the convective heating feedback in the precipitation events that caused the 2010 Pakistan flood, within the column quasi-geostrophic framework. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  3. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid size impacts the simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  4. Large-scale model-based assessment of deer-vehicle collision risk.

    Directory of Open Access Journals (Sweden)

    Torsten Hothorn

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high effort and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated with browsing intensity and with harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining

  5. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  6. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
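
    A rough sketch of the rank-based pipeline described above: Kendall's tau is mapped to a latent Gaussian correlation through sin(pi*tau/2), and a sparse precision matrix is then estimated. graphical_lasso stands in for the authors' regularized rank-based estimator, and the data are simulated:

      import numpy as np
      from scipy.stats import kendalltau
      from sklearn.covariance import graphical_lasso

      def rank_correlation(returns):
          """Kendall's tau -> latent Gaussian correlation, sin(pi*tau/2),
          the standard transform for elliptical copulas."""
          p = returns.shape[1]
          s = np.eye(p)
          for i in range(p):
              for j in range(i + 1, p):
                  tau, _ = kendalltau(returns[:, i], returns[:, j])
                  s[i, j] = s[j, i] = np.sin(0.5 * np.pi * tau)
          return s

      rng = np.random.default_rng(0)
      x = rng.standard_normal((250, 5))       # stand-in for daily stock returns
      cov, prec = graphical_lasso(rank_correlation(x), alpha=0.1)
      # near-zero off-diagonals of `prec` indicate conditional independence,
      # i.e. candidate stocks that are "as independent as possible"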

  7. Numerically modelling the large scale coronal magnetic field

    Science.gov (United States)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. It is therefore important that we develop an understanding of the dynamics of the solar corona. With present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of the magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent owing to the large number of free parameters typically used in such codes. This brings us to the magneto-frictional model, which is relatively simple and computationally more economic. We have developed a magneto-frictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of the changing connectivities between active regions in response to photospheric motions.
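
    For orientation, the core of a magneto-frictional scheme replaces the momentum equation by a frictional velocity that relaxes the field toward a force-free state; a common textbook form (not necessarily the authors' exact implementation) is

      \mathbf{v}_{MF} = \frac{1}{\nu} \, \frac{(\nabla \times \mathbf{B}) \times \mathbf{B}}{B^{2}},
      \qquad
      \frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v}_{MF} \times \mathbf{B}),

    where \nu is the frictional coefficient; the induction equation then evolves the coronal field in response to the driven photospheric boundary.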

  8. Ship detection using STFT sea background statistical modeling for large-scale oceansat remote sensing image

    Science.gov (United States)

    Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan

    2018-03-01

    Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuations can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the large-scale image is divided into small sub-blocks, and the 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and an obvious difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm detects ship targets with a high recall rate and a low miss rate.
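
    A heavily simplified sketch of the block-spectrum idea: split the scene into sub-blocks, take each block's 2-D spectrum, and flag blocks whose high-frequency energy deviates from the sea-background statistics. Gaussian statistics here replace the paper's statistical model, and all parameters are invented:

      import numpy as np

      def detect_blocks(image, block=64, k=3.0):
          h, w = image.shape
          feats = []
          for i in range(0, h - block + 1, block):
              for j in range(0, w - block + 1, block):
                  # 2-D spectrum of one windowed sub-block
                  spec = np.abs(np.fft.fftshift(np.fft.fft2(
                      image[i:i + block, j:j + block])))
                  c = block // 2
                  spec[c - 2:c + 3, c - 2:c + 3] = 0.0  # drop lowest frequencies
                  feats.append((i, j, spec.sum()))
          e = np.array([f[2] for f in feats])
          mu, sd = e.mean(), e.std()                    # sea-background statistics
          return [(i, j) for i, j, v in feats if v > mu + k * sd]

      rng = np.random.default_rng(0)
      sea = rng.standard_normal((512, 512))
      sea[100:108, 200:208] += 6.0        # implant a bright "ship"
      print(detect_blocks(sea))           # the block containing it stands out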

  9. Model Predictive Control for Flexible Power Consumption of Large-Scale Refrigeration Systems

    DEFF Research Database (Denmark)

    Shafiei, Seyed Ehsan; Stoustrup, Jakob; Rasmussen, Henrik

    2014-01-01

    A model predictive control (MPC) scheme is introduced to directly control the electrical power consumption of large-scale refrigeration systems. Deviations of the consumption from its baseline correspond to the storing and delivering of thermal energy. By virtue of such correspondence
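
    The idea in the truncated abstract — consumption deviating from its baseline stores or releases thermal energy — can be caricatured as a small load-shifting optimization. cvxpy, the storage bounds, prices and horizon are all invented here; the actual scheme is closed-loop MPC on a refrigeration model:

      import numpy as np
      import cvxpy as cp

      T = 24                                           # hourly horizon
      price = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / T)
      baseline = np.full(T, 100.0)                     # baseline power [kW]

      u = cp.Variable(T)                               # actual power drawn [kW]
      s = cp.Variable(T + 1)                           # stored thermal energy [kWh]
      cons = [s[0] == 0, s[T] == 0,                    # pure load shifting
              s[1:] == s[:-1] + (u - baseline),        # deviation -> storage
              s >= -50, s <= 50, u >= 0, u <= 150]
      cp.Problem(cp.Minimize(price @ u), cons).solve()
      print(np.round(u.value, 1))                      # power shifted to cheap hours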

  10. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.

  11. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab

  12. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  13. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between the large scale structure formation and the baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale, 120 h⁻¹ Mpc, in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some values of the model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  14. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first describe the development of the "LPJ-Hydrology" (LH) model, which incorporates satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of simulating them dynamically. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variations of monthly actual ET (R² = 0.61), soil moisture (R² = 0.46) and surface runoff (R² = 0.52) relative to observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH better simulates monthly streamflow in winter and early spring by incorporating the effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. LH should be a useful tool for studying the effects of climate and land cover change on land surface hydrology at large spatial scales.
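
    For orientation, the essence of a simple-structure grid-cell water balance is a few lines per time step; this toy bucket is far cruder than LH's scheme and every number is invented:

      def bucket_step(soil, precip, pet, capacity=150.0):
          """One monthly step: ET is PET throttled by soil wetness;
          storage above capacity leaves as surface runoff (all in mm)."""
          et = pet * (soil / capacity)       # moisture-limited ET
          soil = soil + precip - et
          runoff = max(soil - capacity, 0.0)
          soil = min(soil, capacity)
          return soil, et, runoff

      soil = 75.0
      for p, pet in [(90, 60), (40, 110), (120, 70)]:  # toy monthly forcing
          soil, et, q = bucket_step(soil, p, pet)
          print(f"soil={soil:6.1f}  ET={et:5.1f}  runoff={q:5.1f}")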

  15. RELAP5 choked flow model and application to a large scale flow test

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1980-01-01

    The RELAP5 code was used to simulate a large scale choked flow test. The fluid system used in the test was modeled in RELAP5 using a uniform, but coarse, nodalization. The choked mass discharge rate was calculated using the RELAP5 choked flow model. The calculations were in good agreement with the test data, and the flow was calculated to be near thermal equilibrium
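
    For scale, a choked (critical) mass flux can be estimated from the single-phase ideal-gas relation below; this is purely illustrative, since RELAP5's choked flow model treats two-phase, thermal-(non)equilibrium conditions:

      import numpy as np

      def choked_mass_flux(p0, T0, gamma=1.3, R=461.5):
          """Ideal-gas critical mass flux per unit throat area:
          G* = p0 sqrt(gamma/(R T0)) (2/(gamma+1))^((gamma+1)/(2(gamma-1)))."""
          return p0 * np.sqrt(gamma / (R * T0)) * (
              2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

      # toy steam-like stagnation conditions: 7 MPa, 559 K
      print(choked_mass_flux(7.0e6, 559.0))   # ~1e4 kg/(m^2 s)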

  16. United States Temperature and Precipitation Extremes: Phenomenology, Large-Scale Organization, Physical Mechanisms and Model Representation

    Science.gov (United States)

    Black, R. X.

    2017-12-01

    We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.

  17. Large Scale Cosmological Anomalies and Inhomogeneous Dark Energy

    Directory of Open Access Journals (Sweden)

    Leandros Perivolaropoulos

    2014-01-01

    A wide range of large scale observations hint towards possible modifications of the standard cosmological model, which is based on a homogeneous and isotropic universe with a small cosmological constant and matter. These observations, also known as "cosmic anomalies", include unexpected Cosmic Microwave Background perturbations on large angular scales, large dipolar peculiar velocity flows of galaxies ("bulk flows"), the measurement of inhomogeneous values of the fine structure constant on cosmological scales (the "alpha dipole"), and other effects. The presence of the observational anomalies could either be a large statistical fluctuation in the context of ΛCDM or indicate a non-trivial departure from the cosmological principle on Hubble scales. Such a departure is very much constrained by cosmological observations for matter. For dark energy, however, there are no significant observational constraints for Hubble scale inhomogeneities. In this brief review I discuss some of the theoretical models that can naturally lead to inhomogeneous dark energy, their observational constraints, and their potential to explain the large scale cosmic anomalies.

  18. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of a deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The methods described are applicable to low-level radioactive waste disposal system performance assessment
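
    At first order, the DUA idea reduces to propagating parameter variances through the model derivatives; a minimal sketch in which finite differences stand in for GRESS/ADGEN's computer-calculus (automatic) derivatives, with an invented model:

      import numpy as np

      def dua_first_order(f, x0, sigmas, eps=1e-6):
          """sigma_y^2 ~ sum_i (df/dx_i)^2 sigma_i^2 (independent inputs)."""
          x0 = np.asarray(x0, float)
          grads = np.empty_like(x0)
          for i in range(x0.size):
              dx = np.zeros_like(x0)
              dx[i] = eps
              grads[i] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)  # central difference
          return np.sqrt(np.sum((grads * np.asarray(sigmas)) ** 2))

      model = lambda x: x[0] * np.exp(-x[1]) + x[2] ** 2   # toy response function
      print(dua_first_order(model, x0=[1.0, 0.5, 2.0], sigmas=[0.1, 0.05, 0.2]))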

  19. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, because they model complex systems, transport models have an inherent uncertainty which increases over time. As a consequence, the longer the period forecasted, the less reliable the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature, only few studies analyze uncertainty propagation patterns over...

  20. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  1. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  2. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of the several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of a complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for the mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting the multi-time-scale property,...

  3. Laboratory astrophysics. Model experiments of astrophysics with large-scale lasers

    International Nuclear Information System (INIS)

    Takabe, Hideaki

    2012-01-01

    I would like to review model experiments of astrophysics performed with the high-power, large-scale lasers constructed mainly for laser nuclear fusion research. The four research directions of this new field, named 'Laser Astrophysics', are described with four examples mainly promoted in our institute. The description is in magazine style so as to be easily understood by non-specialists. A new theory and its model experiment on the collisionless shock and particle acceleration observed in supernova remnants (SNRs) are explained in detail, and the results and coming research directions are clarified. In addition, the vacuum breakdown experiment to be realized with near-future ultra-intense lasers is also introduced. (author)

  4. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that apply randomization and replication to the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfaction of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  5. A continental-scale hydrology and water quality model for Europe: Calibration and uncertainty of a high-resolution large-scale SWAT model

    Science.gov (United States)

    Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.

    2015-05-01

    A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, has come under severe degradation, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economy of the eastern European block of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability and the calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.

  6. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of the individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  7. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity at large scale. We use a Shuttle Radar Topography Mission based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion of a falling body on an inclined plane. We assume an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
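
    The junction rule stated above is just momentum conservation for an inelastic merge; a one-line version with toy discharges:

      def merged_velocity(q1, v1, q2, v2):
          """Inelastic merge of two branches: the combined flow keeps
          total mass and momentum, v = (q1 v1 + q2 v2) / (q1 + q2)."""
          return (q1 * v1 + q2 * v2) / (q1 + q2)

      print(merged_velocity(q1=300.0, v1=1.2, q2=120.0, v2=0.8))  # ~1.09 m/s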

  8. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects in order to become a relevant and sufficiently general model for large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely asynchronous cellular automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate the failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
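
    The analytic point about interleaving versus perfect synchrony can be seen in a two-node toy example (invented for illustration, not from the paper): under the rule "copy your neighbour's state", the synchronous dynamics has a two-cycle that no sequential update order can reproduce:

      def sync_step(state):                   # perfectly synchronous update
          return [state[1], state[0]]

      def seq_step(state, order):             # sequential (interleaved) update
          s = list(state)
          for i in order:
              s[i] = s[1 - i]                 # same local rule, applied in turn
          return s

      print(sync_step([0, 1]))                # [1, 0] -- the two-cycle survives
      print(seq_step([0, 1], order=[0, 1]))   # [1, 1] -- collapses to a fixed point
      print(seq_step([0, 1], order=[1, 0]))   # [0, 0]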

  9. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shuai Li

    2008-03-01

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and are connected to neighbor nodes by virtual springs. The virtual springs force the particles, and correspondingly the node position estimates, to move from randomly set positions toward the original positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optima, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase in network scale. The time consumption has also been proven to remain almost constant, since the calculation steps are almost unrelated to the network scale.
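
    A bare-bones sketch of the spring relaxation (positions, neighbor lists and step size are invented; the paper's patches for local optima, bad nodes and node variation are omitted):

      import numpy as np

      def lasm(pos, anchors, neighbors, dists, k=0.1, steps=500):
          """Each blind node is pulled by virtual springs toward the measured
          distances to its neighbors; anchors stay fixed. Per-node cost is
          O(1) in network size because only local neighbors exert forces."""
          pos = pos.copy()
          for _ in range(steps):
              for i in neighbors:
                  if i in anchors:
                      continue
                  force = np.zeros(2)
                  for j, d_ij in zip(neighbors[i], dists[i]):
                      delta = pos[j] - pos[i]
                      r = np.linalg.norm(delta) + 1e-12
                      force += k * (r - d_ij) * delta / r   # Hooke's law
                  pos[i] = pos[i] + force
          return pos

      pos = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0], [2.0, 1.5]])
      nbrs = {3: [0, 1, 2]}                    # node 3 is the blind node
      dists = {3: [np.linalg.norm(pos[3] - pos[j]) for j in (0, 1, 2)]}
      start = pos.copy()
      start[3] = [0.0, 0.0]                    # random initial guess
      print(lasm(start, {0, 1, 2}, nbrs, dists)[3])   # converges near (2, 1.5)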

  10. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA to LIS, meaningful simulations containing a large multi-model ensemble will be enabled, providing advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  11. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large-scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30,000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue for performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that faces the problem of overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale and Shape (GAMLSS).
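
    As a toy illustration of conditioning a stochastic rainfall generator on a large-scale climate forcing (the covariate, link functions and coefficients below are invented for the example, not taken from the study), the parameters of a Gamma rainfall distribution can be made to vary with a climate index in the spirit of GAMLSS:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_monthly_rain(climate_index, b0=1.2, b1=0.4, scale0=20.0, scale1=-0.3):
    """Draw monthly rainfall totals whose Gamma parameters depend on a
    large-scale climate index (e.g., a teleconnection pattern).

    Log links keep shape and scale positive, as in GAMLSS-type models.
    All coefficients are purely illustrative.
    """
    shape = np.exp(b0 + b1 * climate_index)      # shape driven by the forcing
    scale = np.exp(np.log(scale0) + scale1 * climate_index)
    return rng.gamma(shape, scale)

index = rng.normal(size=120)                     # 10 years of monthly index values
rain = simulate_monthly_rain(index)
print(rain.mean(), rain.std())
```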

  12. Dynamic subgrid scale model used in a deep bundle turbulence prediction using the large eddy simulation method

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    1996-01-01

    Turbulence is one of the most commonly occurring phenomena of engineering interest in the field of fluid mechanics. Since most flows are turbulent, there is a significant payoff for improved predictive models of turbulence. One area of concern is the turbulent buffeting forces experienced by the tubes in steam generators of nuclear power plants. Although the Navier-Stokes equations are able to describe turbulent flow fields, the large number of scales of turbulence limits practical flow field calculations with current computing power. The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is that it requires no input model coefficient: the coefficient is evaluated dynamically at each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (Smagorinsky, 1963), which is used as the base model for the dynamic subgrid scale model, and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with the experimental data. Satisfactory turbulence characteristics are observed through flow visualization.
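
    For reference, the closure described above takes the standard form found in the LES literature (this is the textbook dynamic procedure, not a formula quoted from the record): a Smagorinsky eddy viscosity whose coefficient follows from the Germano identity via Lilly's least-squares solution,

```latex
\nu_t = (C_s \Delta)^2 |\bar{S}|, \qquad
L_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j, \qquad
M_{ij} = 2\Delta^2 \left( \widehat{|\bar{S}|\bar{S}_{ij}}
       - \alpha^2 |\hat{\bar{S}}|\hat{\bar{S}}_{ij} \right), \qquad
C_s^2 = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle},
```

    where overbars denote grid filtering, hats the test filter, and α the test-to-grid filter-width ratio; the angle brackets denote an averaging that some implementations drop in favor of pointwise evaluation, which is what makes the coefficient nodal, as in the abstract.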

  13. Modelling bark beetle disturbances in a large scale forest scenario model to assess climate change impacts and evaluate adaptive management strategies

    NARCIS (Netherlands)

    Seidl, R.; Schelhaas, M.J.; Lindner, M.; Lexer, M.J.

    2009-01-01

    To study potential consequences of climate-induced changes in the biotic disturbance regime at regional to national scale, we integrated a model of Ips typographus (L. Scol. Col.) damage into the large-scale forest scenario model EFISCEN. A two-stage multivariate statistical meta-model was used to ...

  14. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC) but, due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot-competition-with-arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation.
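
    As a bare-bones illustration of the Cournot competition at the heart of these models (a single node, linear inverse demand and constant marginal costs, all numbers invented; the dissertation's models add the network, arbitrage and complementarity structure), best-response iteration finds the equilibrium quantities:

```python
import numpy as np

# Inverse demand p(Q) = a - b*Q; firm i's best response to rivals' output Q_-i
# maximizes (p - c_i) * q_i, giving q_i = (a - c_i - b*Q_-i) / (2*b).
a, b = 100.0, 1.0               # demand intercept and slope (illustrative)
costs = np.array([10.0, 20.0])  # marginal costs of two Cournot generators

q = np.zeros_like(costs)
for _ in range(1000):           # fixed-point (best-response) iteration
    for i in range(len(q)):
        rivals = q.sum() - q[i]
        q[i] = max(0.0, (a - costs[i] - b * rivals) / (2 * b))

price = a - b * q.sum()
print(q, price)                 # analytic equilibrium: q ~ [33.3, 23.3], p ~ 43.3
```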

  15. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Because of the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
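
    For context, the two-dimensional shallow water equations that such Godunov-type finite volume schemes solve can be written in conservation form (standard notation, not reproduced from the paper):

```latex
\frac{\partial \mathbf{U}}{\partial t}
+ \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
+ \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S},
\qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\;
\mathbf{F} = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2} g h^2 \\ huv \end{pmatrix},\;
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2} g h^2 \end{pmatrix},
```

    with h the water depth, (u, v) the depth-averaged velocities, g gravity, and S collecting bed-slope and friction source terms; the Godunov-type scheme updates cell averages of U from fluxes F and G across cell faces.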

  16. Similitude and scaling of large structural elements: Case study

    Directory of Open Access Journals (Sweden)

    M. Shehadeh

    2015-06-01

    Full Text Available Scaled-down models are widely used for experimental investigations of large structures due to the limited capacities of testing facilities and the expense of experimentation. The modeling accuracy depends upon the model material properties, fabrication accuracy and loading techniques. In the present work the Buckingham π theorem is used to develop the relations (i.e. geometry, loading and properties) between the model and a large structural element such as those present in large existing petroleum oil drilling rigs. The model is to be designed, loaded and treated according to a set of similitude requirements that relate the model to the large structural element. Three independent scale factors, which represent the three fundamental dimensions mass, length and time, need to be selected for designing the scaled-down model. Numerical prediction of the stress distribution within the model and its elastic deformation under steady loading is to be made. The results are compared with those obtained from numerical computations on the full-scale structure. The effect of scaled-down model size and material on the accuracy of the modeling technique is thoroughly examined.
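
    To make the similitude logic concrete (a standard consequence of dimensional analysis, not a result quoted from the case study): once the three independent scale factors λ_M, λ_L, λ_T for mass, length and time are chosen, every derived quantity scales as a product of their powers, e.g.

```latex
\lambda_F = \lambda_M \lambda_L \lambda_T^{-2}, \qquad
\lambda_\sigma = \frac{\lambda_F}{\lambda_L^{2}}
             = \lambda_M \lambda_L^{-1} \lambda_T^{-2}, \qquad
\lambda_E = \lambda_\sigma ,
```

    so building the model from the same material as the prototype (λ_E = 1) immediately constrains the remaining factors to satisfy λ_M = λ_L λ_T^2.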

  17. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    Science.gov (United States)

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions in computation time with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
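
    A drastically simplified stand-in for this kind of kinetic-model calibration (a two-parameter logistic ODE, synthetic data and a naive multistart local search; saCeSS itself is a parallel cooperative scatter search, which this sketch does not attempt to reproduce):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, y, r, K):
    return r * y * (1 - y / K)          # logistic growth kinetics

def simulate(params, t_eval, y0=0.1):
    r, K = params
    sol = solve_ivp(model, (t_eval[0], t_eval[-1]), [y0],
                    t_eval=t_eval, args=(r, K))
    return sol.y[0]

t = np.linspace(0, 10, 25)
rng = np.random.default_rng(0)
data = simulate([0.8, 5.0], t) + rng.normal(0, 0.05, t.size)  # synthetic data

# Naive multistart: several random initial guesses, keep the best local fit
best = min(
    (least_squares(lambda p: simulate(p, t) - data,
                   x0=rng.uniform([0.1, 1.0], [2.0, 10.0]),
                   bounds=([0.01, 0.5], [5.0, 20.0]))
     for _ in range(10)),
    key=lambda res: res.cost,
)
print(best.x)   # recovers approximately [0.8, 5.0]
```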

  18. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is formulated as a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
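
    The single-neuron BCM rule referred to above is commonly written as (standard formulation with a sliding modification threshold; the notation is not taken from the record):

```latex
\frac{dw_i}{dt} = \phi(y,\theta)\, x_i, \qquad
\phi(y,\theta) = y\,(y - \theta), \qquad
\tau_\theta \frac{d\theta}{dt} = y^2 - \theta,
```

    where x_i are the inputs, y the postsynaptic activity and θ the sliding threshold; its dependence on the recent average of y^2 is what produces the bistability and selectivity properties mentioned in the abstract.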

  19. Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling

    Science.gov (United States)

    Huber, I.; Archontoulis, S.

    2017-12-01

    In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge. Regional-scale assessments are highly needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that gain the most environmentally from biochar application, as well as areas where our model predicts a notable yield increase due to the addition of biochar. We present the modifications in both the APSIM biochar and pSIMS components that were necessary to facilitate these large-scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for creating its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional-scale simulation analysis is in progress. Preliminary results showed that the model predicts that high-quality soils (particularly those common to Iowa cropping systems) do not receive much, if any, production benefit from biochar. However, soils with low soil organic matter (below 0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific, increasing in some areas and decreasing in others due to biochar application. In contrast, we found increases in soil organic carbon and plant-available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%), and also dependent on the biochar application rate.

  20. Large eddy simulation of new subgrid scale model for three-dimensional bundle flows

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    2004-01-01

    Fluid-flow-induced vibrations within heat exchangers, which have led to reduced efficiency and power plant shutdowns, are of great concern due to tube fretting-wear or fatigue failures. Historically, scaling-law and measurement-accuracy problems were encountered in experimental analysis, at considerable effort and expense. However, supercomputers and accurate numerical methods have provided reliable results and a substantial decrease in cost. In this investigation Large Eddy Simulation has been successfully used to simulate turbulent flow by the numerical solution of the incompressible, isothermal, single-phase Navier-Stokes equations. The eddy viscosity model and a new subgrid scale model have been utilized to model the smaller eddies in the flow domain. A triangular array flow field was considered and numerical simulations were performed in two- and three-dimensional fields, and were compared to experimental findings. Results show good agreement of the numerical findings with the experimental ones, and solutions obtained with the new subgrid scale model represent better energy dissipation for the smaller eddies. (author)

  1. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference, however, poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for unsupervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic block modelling with MCMC sampling on large complex networks...
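
    The stochastic block model underlying this clustering assigns each node i a latent group z_i and each pair of groups a link probability, giving the standard likelihood for an undirected network (generic form, not quoted from the thesis):

```latex
p(A \mid z, \eta) = \prod_{i < j}
\eta_{z_i z_j}^{A_{ij}} \left( 1 - \eta_{z_i z_j} \right)^{1 - A_{ij}},
```

    which MCMC samplers explore over the group assignments z; the computational burden the thesis addresses comes from evaluating such likelihood terms on networks with millions of edges.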

  2. Expanded Large-Scale Forcing Properties Derived from the Multiscale Data Assimilation System and Its Application to Single-Column Models

    Science.gov (United States)

    Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.

    2013-12-01

    We present an approach to derive large-scale forcing that is used to drive single-column models (SCMs) and cloud-resolving models (CRMs)/large-eddy simulations (LES) for evaluating fast-physics parameterizations in climate models. The forcing fields are derived by use of a newly developed multi-scale data assimilation (MS-DA) system. This DA system is developed on top of the NCEP Gridpoint Statistical Interpolation (GSI) system and is implemented in the Weather Research and Forecasting (WRF) model at a cloud-resolving resolution of 2 km. This approach has been applied to the generation of large-scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracies comparable to the existing continuous forcing product and, overall, a better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multiscale forcing, which are not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing has an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependency on the domain size, which represents the SCM grid size. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid size and cloud regime.

  3. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate Q1 and Q2 in analysis/reanalysis, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from the large-scale forcing data and observations specified, we compare SCM results with observations and find that models have large biases in cloud properties which cannot be fully explained by the uncertainty from the large-scale forcing data.
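
    The apparent heat source and apparent moisture sink evaluated in the study follow the classical budget definitions (standard Yanai-type notation, not transcribed from the abstract):

```latex
Q_1 = \frac{\partial \bar{s}}{\partial t}
    + \bar{\mathbf{V}} \cdot \nabla \bar{s}
    + \bar{\omega} \frac{\partial \bar{s}}{\partial p}, \qquad
Q_2 = -L \left( \frac{\partial \bar{q}}{\partial t}
    + \bar{\mathbf{V}} \cdot \nabla \bar{q}
    + \bar{\omega} \frac{\partial \bar{q}}{\partial p} \right),
```

    with s = c_p T + gz the dry static energy, q the specific humidity, ω the pressure velocity and L the latent heat of vaporization.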

  4. Coupled climate model simulations of Mediterranean winter cyclones and large-scale flow patterns

    Directory of Open Access Journals (Sweden)

    B. Ziv

    2013-03-01

    Full Text Available The study aims to evaluate the ability of global, coupled climate models to reproduce the synoptic regime of the Mediterranean Basin. The output of simulations of the 9 models included in the IPCC CMIP3 effort is compared to the NCEP-NCAR reanalysis data for the period 1961-1990. The study examined the spatial distribution of cyclone occurrence, the mean Mediterranean upper- and lower-level troughs, the inter-annual variation and trend in the occurrence of Mediterranean cyclones, and the main large-scale circulation patterns, represented by rotated EOFs of 500 hPa and sea level pressure. The models successfully reproduce the two maxima in cyclone density in the Mediterranean and their locations, the location of the average upper- and lower-level troughs, the relative inter-annual variation in cyclone occurrences and the structure of the four leading large-scale EOFs. The main discrepancy is the models' underestimation of the cyclone density in the Mediterranean, especially in its western part. The models' skill in reproducing the cyclone distribution is found to be correlated with their spatial resolution, especially in the vertical. The current improvement in model spatial resolution suggests that their ability to reproduce Mediterranean cyclones will improve as well.

  5. Can limited area NWP and/or RCM models improve on large scales inside their domain?

    Science.gov (United States)

    Mesinger, Fedor; Veljovic, Katarina

    2017-04-01

    In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain. Note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable the development of large scales improved compared to those of the driver model. This would typically include higher resolution, but obviously does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. The ensemble members referred to are run using the Eta model, and are driven by ECMWF 32-day ensemble members initialized at 0000 UTC on 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest that the eta coordinate is ill suited for high-resolution models. The "sloping steps" in fact represent a simple version of the cut-cell scheme. The accuracy of forecasting the position of jet-stream winds, chosen as those with speeds greater than 45 m/s at 250 hPa and expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales.

  6. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling...... and experiments are both done for the grate boiler. The comparison between them shows an overall acceptable agreement in tendency. However, at some measuring ports large discrepancies between the modeling and the experiments are observed, mainly because the modeling-based boundary conditions (BCs) could differ......

  7. Structure of exotic nuclei by large-scale shell model calculations

    International Nuclear Information System (INIS)

    Utsuno, Yutaka; Otsuka, Takaharu; Mizusaki, Takahiro; Honma, Michio

    2006-01-01

    An extensive large-scale shell-model study is conducted for unstable nuclei around N = 20 and N = 28, aiming to investigate how the shell structure evolves from stable to unstable nuclei and affects the nuclear structure. The structure around N = 20, including the disappearance of the magic number, is reproduced systematically, as exemplified by the systematics of the electromagnetic moments in the Na isotope chain. As a key ingredient dominating the structure/shell evolution in exotic nuclei from a general viewpoint, we pay attention to the tensor force. Including a proper strength of the tensor force in the effective interaction, we successfully reproduce the proton shell evolution ranging from N = 20 to 28 without any arbitrary modifications to the interaction, and predict the ground state of 42Si to contain a large deformed component.

  8. A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows

    International Nuclear Information System (INIS)

    Singh, Satbir; You, Donghyun

    2013-01-01

    Highlights: A new SGS model is developed for LES of turbulent flows in complex geometries. A dynamic global-coefficient SGS model is coupled with a scale-similarity model. It overcomes some of the difficulties associated with eddy-viscosity closures and does not require averaging or clipping of the model coefficient for stabilization. The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations.
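
    The three-part decomposition described above is the classical Germano splitting of the subgrid-scale stress (generic notation; in the paper the modified Leonard term is computed explicitly from the resolved field while the other two are modeled):

```latex
\tau_{ij} \equiv \overline{u_i u_j} - \bar{u}_i \bar{u}_j
         = L_{ij}^{m} + C_{ij}^{m} + R_{ij}^{m}, \qquad
L_{ij}^{m} = \overline{\bar{u}_i \bar{u}_j} - \bar{\bar{u}}_i \bar{\bar{u}}_j,
```

    with the cross and subgrid-scale Reynolds parts closed by the dynamic global-coefficient eddy-viscosity model, C_{ij}^m + R_{ij}^m ~ -2 C Δ^2 |S̄| S̄_{ij} for the trace-free part.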

  9. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performances are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed designing and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expected variability of the thermal performance of such plants, a method is proposed where data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with the simulation tool, design studies are carried out, ranging from parameter analysis and energy planning for a new settlement to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need to develop computer models for the more advanced solar collector designs and especially for the control and operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to find the relevance of the application.

  10. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Quite a number of studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort toward carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation with the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same......
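
    The weighted-sum-of-gray-gases model (WSGGM) at issue represents the gas emissivity as a weighted sum over a few gray gases plus one transparent gas (standard form; the oxy-fuel-specific coefficients discussed in the paper replace the conventional air-firing fits):

```latex
\varepsilon = \sum_{i=0}^{N} a_{\varepsilon,i}(T)
\left[ 1 - e^{-\kappa_i \, p \, L} \right], \qquad
\sum_{i=0}^{N} a_{\varepsilon,i}(T) = 1,
```

    where κ_i are the gray-gas absorption coefficients, p the partial pressure of the participating species (H2O and CO2), L the path length, and a_{ε,i} temperature-dependent weights (κ_0 = 0 for the transparent gas).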

  11. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity of model fitting and prediction grows cubically with the size of the dataset, so the application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach of Sang and Huang (2012) to the spatio-temporal context to reduce the computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and an application to an ozone measurement dataset.
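
    Schematically, the full-scale approximation combines a low-rank (knot-based) predictive-process component with a tapered residual so that both large- and small-scale dependence are retained (generic form following Sang and Huang's construction; the notation is ours, not the article's):

```latex
C_{\mathrm{FSA}}(x, x') =
\underbrace{c(x)^{\top} C_{mm}^{-1} c(x')}_{\text{low-rank (knots)}}
+ \underbrace{\left[ C(x, x') - c(x)^{\top} C_{mm}^{-1} c(x') \right]
  K_{\mathrm{tap}}(x, x')}_{\text{tapered residual}},
```

    where c(x) stacks the covariances between x and the m knots, C_mm is the knot covariance matrix, and K_tap is a compactly supported taper that sparsifies the residual term.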

  12. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to their large computational complexity requirements. To address this limitation, an axiomatic approach is developed that decomposes a large-scale system into smaller subsystems that can be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to those of the full-system approach, but with greatly reduced simulation time.

  13. Large-scale model of flow in heterogeneous and hierarchical porous media

    Science.gov (United States)

    Chabanon, Morgan; Valdés-Parada, Francisco J.; Ochoa-Tapia, J. Alberto; Goyeau, Benoît

    2017-11-01

    Heterogeneous porous structures are very often encountered in natural environments and in engineered systems such as bioremediation processes, among many others. Reliable models for momentum transport are crucial whenever mass transport or convective heat transfer occurs in these systems. In this work, we derive a large-scale average model for incompressible single-phase flow in heterogeneous and hierarchical soil porous media composed of two distinct porous regions embedding a solid impermeable structure. The model, based on the local mechanical equilibrium assumption between the porous regions, results in a unique momentum transport equation where the global effective permeability naturally depends on the permeabilities at the intermediate mesoscopic scales and therefore includes the complex hierarchical structure of the soil. The associated closure problem is numerically solved for various configurations and properties of the heterogeneous medium. The results clearly show that the effective permeability increases with the volume fraction of the most permeable porous region. It is also shown that the effective permeability is sensitive to the dimensionality and spatial arrangement of the porous regions and, in particular, depends on the contact between the impermeable solid and the two porous regions.
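
    At the large scale, the single momentum equation the authors arrive at is of Darcy form with a global effective permeability (a schematic form consistent with the abstract; the symbols are standard, not quoted from the paper):

```latex
\langle \mathbf{v} \rangle = -\frac{\mathbf{K}_{\mathrm{eff}}}{\mu}
\left( \nabla \langle p \rangle - \rho \mathbf{g} \right),
```

    where the angle brackets denote the large-scale average, μ the viscosity, and K_eff the effective permeability tensor that emerges from the closure problem over the mesoscopic permeabilities.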

  14. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Full Text Available Halo models of the large-scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of the self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure, or to define their size in terms of small-scale baryonic physics.

  15. Scale breaking effects in the quark-parton model for large p⊥ phenomena

    International Nuclear Information System (INIS)

    Baier, R.; Petersson, B.

    1977-01-01

    We discuss how the scaling violations suggested by an asymptotically free parton model, i.e., the Q²-dependence of the transverse momentum of partons within hadrons, may affect the parton model description of large p⊥ phenomena. We show that such a mechanism can provide an explanation for the magnitude of the opposite-side correlations and their dependence on the trigger momentum. (author)

  16. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high-performance computation technology in the last decade, the simulation of large-scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules, until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity in large-scale networks. We show and discuss the results of simulations on simple two-population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
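
    A toy sketch of the homeostatic growth rule described above (not the NEST implementation; the growth rate, activity proxy and set point are invented for illustration): each neuron grows or retracts synaptic elements depending on whether its activity is below or above a target level, and free pre- and post-synaptic elements are paired at random to form synapses.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
activity = rng.uniform(0.0, 2.0, n)   # proxy for each neuron's mean activity
target = 1.0                          # homeostatic set point (illustrative)
nu = 0.05                             # element growth rate (illustrative)

axonal = np.zeros(n)                  # free pre-synaptic elements per neuron
dendritic = np.zeros(n)               # free post-synaptic elements per neuron
synapses = []

for step in range(200):
    # Homeostatic rule: grow elements below the set point, retract above it
    growth = nu * (target - activity)
    axonal = np.maximum(axonal + growth, 0.0)
    dendritic = np.maximum(dendritic + growth, 0.0)

    # Randomly pair whole free elements into new synapses
    pre = np.flatnonzero(axonal >= 1.0)
    post = np.flatnonzero(dendritic >= 1.0)
    k = min(len(pre), len(post))
    for i, j in zip(rng.permutation(pre)[:k], rng.permutation(post)[:k]):
        synapses.append((i, j))
        axonal[i] -= 1.0
        dendritic[j] -= 1.0
        activity[j] += 0.02           # crude feedback from the new input

print(len(synapses), "synapses created")
```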

  17. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models for large-scale structure are examined: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  18. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform an effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
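
    The low-rank kernel approximation underpinning the prototype idea can be sketched as follows (a generic Nystrom-style approximation with randomly chosen prototypes; the PVM's actual prototype selection optimizes the criteria described above):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between row sets X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))          # full data set
m = 100                                 # number of prototype vectors
protos = X[rng.choice(len(X), m, replace=False)]

# Nystrom-style low-rank approximation: K ~= K_nm  K_mm^{-1}  K_nm^T
K_nm = rbf_kernel(X, protos)
K_mm = rbf_kernel(protos, protos) + 1e-8 * np.eye(m)  # jitter for stability
factor = K_nm @ np.linalg.cholesky(np.linalg.inv(K_mm))

# Working with the n x m factor instead of the n x n kernel reduces the
# regularizer and model representation from O(n^2) to O(n m) storage.
print(factor.shape)                     # (2000, 100)
```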

  19. Model abstraction addressing long-term simulations of chemical degradation of large-scale concrete structures

    International Nuclear Information System (INIS)

    Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.

    2012-01-01

    This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model, accounting for the multicomponent, multi-scale nature of concrete, to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phases. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)

  20. Large scale hydrogeological modelling of a low-lying complex coastal aquifer system

    DEFF Research Database (Denmark)

    Meyer, Rena

    2018-01-01

    intrusion. In this thesis a new methodological approach was developed to combine 3D numerical groundwater modelling with a detailed geological description and hydrological, geochemical and geophysical data. It was applied to a regional-scale saltwater intrusion in order to analyse and quantify...... the groundwater flow dynamics, identify the driving mechanisms that formed the saltwater intrusion to its present extent, and predict its progression in the future. The study area is located in the transboundary region between Southern Denmark and Northern Germany, adjacent to the Wadden Sea. Here, a large-scale...... parametrization schemes that accommodate hydrogeological heterogeneities. Subsequently, density-dependent flow and transport modelling of multiple salt sources was successfully applied to simulate the formation of the saltwater intrusion during the last 4200 years, accounting for historic changes in the hydraulic......

  1. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-based ...

  2. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    Science.gov (United States)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
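
    The grid-count estimate quoted above makes the infeasibility concrete (a standard Kolmogorov-scaling argument, worked out here for the two Reynolds numbers mentioned):

```latex
N \sim Re^{9/4} \;\Rightarrow\;
N \sim (10^{8})^{9/4} = 10^{18} \ \text{(planetary boundary layer)}, \qquad
N \sim (10^{14})^{9/4} \approx 10^{31.5} \ \text{(solar interior)},
```

    both far beyond any conceivable computer memory, which is why the ensemble-average and LES+SGS routes are taken instead.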

  3. Assessing Human Modifications to Floodplains using Large-Scale Hydrogeomorphic Floodplain Modeling

    Science.gov (United States)

    Morrison, R. R.; Scheel, K.; Nardi, F.; Annis, A.

    2017-12-01

    Human modifications to floodplains for water resource and flood management purposes have significantly transformed river-floodplain connectivity dynamics in many watersheds. Bridges, levees, reservoirs, shifts in land use, and other hydraulic engineering works have altered flow patterns and caused changes in the timing and extent of floodplain inundation processes. These hydrogeomorphic changes have likely resulted in negative impacts to aquatic habitat and ecological processes. The availability of large-scale, high-resolution topographic datasets provides an opportunity for detecting anthropogenic impacts by means of geomorphic mapping. We have developed and are implementing a methodology for comparing a hydrogeomorphic floodplain mapping technique to hydraulically modeled floodplain boundaries to estimate floodplain loss due to human activities. Our hydrogeomorphic mapping methodology assumes that river valley morphology intrinsically includes information on flood-driven erosion and depositional phenomena. We use a digital elevation model-based algorithm to identify the floodplain as the area of the fluvial corridor lying below water reference levels, which are estimated using a simplified hydrologic model. Results from our hydrogeomorphic method are compared to hydraulically derived flood zone maps and spatial datasets of levee-protected areas to explore where water management features, such as levees, have changed floodplain dynamics and landscape features. Parameters associated with commonly used F-index functions are quantified and analyzed to better understand how floodplain areas have been reduced within a basin. Preliminary results indicate that the hydrogeomorphic floodplain model is useful for quickly delineating floodplains at large watershed scales, but further analyses are needed to understand the caveats of using the model to determine floodplain loss due to levees. We plan to continue this work by exploring the spatial dependencies of the F-index.
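
    The DEM thresholding at the core of the mapping algorithm can be sketched in a few lines (synthetic terrain and an invented constant reference level; the actual method derives spatially varying water reference levels from a simplified hydrologic model along the stream network):

```python
import numpy as np

# Synthetic DEM: a valley whose elevation rises away from a central river
x = np.linspace(-1.0, 1.0, 500)
dem = 5.0 * np.abs(x)[None, :] + np.zeros((300, 500))

# Hypothetical water reference level above the channel; a constant stage
# here purely for illustration
water_level = 2.0

# Hydrogeomorphic floodplain: valley-bottom cells lying below the level
floodplain = dem <= water_level
print(f"floodplain fraction: {floodplain.mean():.2%}")
```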

  4. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, three different approaches to the modelling of large-scale 3-dimensional flow in the ocean, namely the diagnostic, the semi-diagnostic (adaptation) and the prognostic, are discussed in detail. Three-dimensional solutions are obtained...

  5. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.

    2013-01-01

    Soil moisture is an essential climate variable influencing land-atmosphere interactions, an essential hydrologic variable impacting rainfall-runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years, creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  6. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    DEFF Research Database (Denmark)

    Jensen, Tue Vissing; Pinson, Pierre

    2017-01-01

    , we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven...... to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  7. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large-scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large-scale solar purchasing amongst potential large-scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large-scale active solar heating purchasing activity within the UK. (author)

  8. Cross-flow turbines: progress report on physical and numerical model studies at large laboratory scale

    Science.gov (United States)

    Wosnik, Martin; Bachant, Peter

    2016-11-01

    Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference, or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines with diameters of O(1 m) using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.

  9. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = -0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of C_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential of using over-damped LES for fast explorations of the parameter space where large-scale structures are found.

  10. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  11. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  12. Research on a Small Signal Stability Region Boundary Model of the Interconnected Power System with Large-Scale Wind Power

    Directory of Open Access Journals (Sweden)

    Wenying Liu

    2015-03-01

    For the interconnected power system with large-scale wind power, the problem of small signal stability has become the bottleneck restricting the delivery of wind power as well as the security and stability of the whole power system. Around this issue, this paper establishes a small signal stability region boundary model of the interconnected power system with large-scale wind power based on catastrophe theory, providing a new method for analyzing small signal stability. Firstly, we analyzed the typical characteristics and the mathematical model of the interconnected power system with wind power and pointed out that conventional methods cannot directly identify the topological properties of small signal stability region boundaries. Secondly, to address this problem, we adopted catastrophe theory to establish a small signal stability region boundary model of the interconnected power system with large-scale wind power in two-dimensional power injection space and extended it to multiple dimensions to obtain the boundary model in multidimensional power injection space. Thirdly, we analyzed qualitatively the changes in the topological properties of the small signal stability region boundary caused by large-scale wind power integration. Finally, we built simulation models in DIgSILENT/PowerFactory software, and the simulation results verified the correctness and effectiveness of the proposed model.
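
    The paper's catastrophe-theory construction is not reproduced here, but the object it describes can be visualized by brute force: linearize the system at each operating point in a two-dimensional power injection space and test whether all eigenvalues lie in the left half-plane. The Python sketch below does exactly that for a hypothetical two-state linearization; build_A stands in for a real system model and is an assumption of this illustration.

      import numpy as np

      def is_small_signal_stable(A):
          """A system linearized as dx/dt = A x is small-signal stable when
          all eigenvalues of A lie strictly in the left half-plane."""
          return np.max(np.linalg.eigvals(A).real) < 0.0

      def stability_map(build_A, p1_grid, p2_grid):
          """Scan a 2D power-injection space; the edge of the returned
          boolean map approximates the stability region boundary."""
          stable = np.zeros((len(p1_grid), len(p2_grid)), dtype=bool)
          for i, p1 in enumerate(p1_grid):
              for j, p2 in enumerate(p2_grid):
                  stable[i, j] = is_small_signal_stable(build_A(p1, p2))
          return stable

      # Toy 2-state system whose damping degrades with total injection
      build_A = lambda p1, p2: np.array([[0.0, 1.0],
                                         [-5.0, -(1.0 - 0.4 * (p1 + p2))]])
      stable = stability_map(build_A,
                             np.linspace(0, 2, 50), np.linspace(0, 2, 50))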

  13. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    Science.gov (United States)

    Jensen, Tue V.; Pinson, Pierre

    2017-11-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degree of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability checking of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  14. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system.

    Science.gov (United States)

    Jensen, Tue V; Pinson, Pierre

    2017-11-28

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degree of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability checking of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  15. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    Science.gov (United States)

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  16. Model of large scale man-machine systems with an application to vessel traffic control

    NARCIS (Netherlands)

    Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.

    1989-01-01

    Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the

  17. Diversity in the representation of large-scale circulation associated with ENSO-Indian summer monsoon teleconnections in CMIP5 models

    Science.gov (United States)

    Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.

    2018-04-01

    Realistic simulation of the large-scale circulation patterns associated with the El Niño-Southern Oscillation (ENSO) is vital for coupled models to represent teleconnections to different regions of the globe. The diversity in the representation of large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. The CMIP5 models are classified into three groups based on the correlation between the Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: models in group 1 (G1) overestimate El Niño-ISM teleconnections, models in group 3 (G3) underestimate them, and the teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Low-level anticyclonic circulation anomalies over the southeastern TIO and the western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west, to 60° E and 120° E, respectively. This bias in the circulation patterns implies dry wind advection from the extratropics/midlatitudes to the Indian subcontinent. In addition, the large-scale upper-level convergence together with lower-level divergence over the ISM region corresponding to El Niño is stronger in G1 models than in observations. Thus, an unrealistic shift in low-level circulation centers, corroborated by upper-level circulation changes, is responsible for the overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models, unlike in the observations. Furthermore, large-scale circulation anomalies over the Pacific and the ISM region are misrepresented during El Niño years in G3 models. Too strong upper-level convergence away from the Indian subcontinent and too weak WSNP cyclonic circulation are prominent in most G3 models, in which ENSO-ISM teleconnections are

  18. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    Energy Technology Data Exchange (ETDEWEB)

    RODRIGUEZ, MARKO A. [Los Alamos National Laboratory]; BOLLEN, JOHAN [Los Alamos National Laboratory]; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory]

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerning the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, to support usage research, and to be scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which opens the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  19. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    Representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using planar cylindrical sources of substantial diameter and thin profile. The relative impacts of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response has been investigated for a range of source sizes. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model a semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  20. Scheduling of power generation a large-scale mixed-variable model

    CERN Document Server

    Prékopa, András; Strazicky, Beáta; Deák, István; Hoffer, János; Németh, Ágoston; Potecz, Béla

    2014-01-01

    The book contains a description of a real-life application of modern mathematical optimization tools to an important problem in power networks. The objective is the modelling and calculation of an optimal daily schedule of power generation by thermal power plants, satisfying all demands at minimum cost, in such a way that the generation and transmission capacities as well as the demands at the nodes of the system appear in an integrated form. The physical parameters of the network are also taken into account. The resulting large-scale mixed-variable problem is relaxed in a smart, practical way to allow fast numerical solution.
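
    The flavour of such a mixed-variable scheduling model can be conveyed by a deliberately tiny single-period commitment-and-dispatch sketch in Python using the PuLP library. This is not the book's model; the unit data, costs and the single-period simplification are illustrative assumptions.

      from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

      # Hypothetical units: (min MW, max MW, cost per MWh, fixed on-cost)
      units = {"U1": (50, 200, 20.0, 500.0),
               "U2": (30, 150, 35.0, 200.0),
               "U3": (10, 100, 60.0, 100.0)}
      demand = 260.0  # MW, single period for brevity

      prob = LpProblem("toy_unit_commitment", LpMinimize)
      on = {u: LpVariable(f"on_{u}", cat=LpBinary) for u in units}
      p = {u: LpVariable(f"p_{u}", lowBound=0) for u in units}

      # Objective: energy cost plus a fixed cost for each committed unit
      prob += lpSum(units[u][2] * p[u] + units[u][3] * on[u] for u in units)
      prob += lpSum(p[u] for u in units) == demand  # power balance
      for u, (pmin, pmax, _, _) in units.items():
          prob += p[u] >= pmin * on[u]  # minimum stable generation if on
          prob += p[u] <= pmax * on[u]  # capacity only when committed

      prob.solve()
      print({u: p[u].value() for u in units})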

  1. Spatiotemporal property and predictability of large-scale human mobility

    Science.gov (United States)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied owing to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, covering both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high potential for prediction. Furthermore, a scale-free mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
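
    The two ingredients named above can be made concrete with a short simulation. The Python sketch below implements exploration with probability ρS^(-γ), S being the number of visited locations, and preferential return proportional to past visit counts; in the cited model the exploration tendency ρ is drawn from a Gaussian across individuals, while here it is fixed for brevity and all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_mobility(n_steps=10_000, rho=0.6, gamma=0.2):
          """Exploration vs. preferential return: with probability
          rho * S**(-gamma) the walker visits a brand-new location,
          otherwise it returns to a known one with probability
          proportional to its past visit count."""
          visits = {0: 1}      # location id -> visit count
          next_id = 1
          trajectory = [0]
          for _ in range(n_steps):
              S = len(visits)
              if rng.random() < rho * S ** (-gamma):   # explore
                  visits[next_id] = 1
                  trajectory.append(next_id)
                  next_id += 1
              else:                                    # preferential return
                  locs = np.array(list(visits))
                  counts = np.array([visits[l] for l in locs], dtype=float)
                  loc = rng.choice(locs, p=counts / counts.sum())
                  visits[loc] += 1
                  trajectory.append(loc)
          return trajectory, visits

      traj, visits = simulate_mobility()
      print(len(visits), "distinct locations visited")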

  2. Large-scale modeling on the fate and transport of polycyclic aromatic hydrocarbons (PAHs) in multimedia over China

    Science.gov (United States)

    Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.

    2017-12-01

    In recent decades, with rapid economic growth, industrial development and urbanization, pollution by polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, sufficient monitoring of PAHs across multiple environmental compartments, and of the corresponding multi-interface migration processes, is still lacking, especially over large geographic areas. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-Scale Air Quality (CMAQ) model in order to represent fugacity and transient contamination processes. This coupled dynamic contaminant model can evaluate the detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water and vegetation) across spatial scales from county to country and temporal scales from days to years. The model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution and considers the response characteristics of typical environmental media to a complex underlying surface. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sinks of PAHs and have the longest retention times. Importantly, the highest PAH loadings are found in urbanized and densely populated regions of China, such as the Yangtze River Delta and the Pearl River Delta. This model can provide a good scientific basis towards a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development. In a next step, the dynamic contaminant model will be integrated with the continental-scale hydrological and water resources model (i.e., Community Water Model, CWatM) to quantify a more accurate representation and feedbacks between the hydrological cycle and water quality at

  3. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    Science.gov (United States)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and ground water) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population increases and economic developments would strongly affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Irrigation management in response to subseasonal variability in weather and crop growth differs between regions and crops; to deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model is based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoff simulated by the land surface sub-model are input to the river routing sub-model of the H08 model. The part of regional water resources available for agriculture, simulated by the H08 model, is input as irrigation water to the land surface sub-model. The timing and amount of irrigation water are simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed and accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of

  4. Large-scale shell model calculations for the N=126 isotones Po-Pu

    International Nuclear Information System (INIS)

    Caurier, E.; Rejmund, M.; Grawe, H.

    2003-04-01

    Large-scale shell model calculations were performed in the full Z=82-126 proton model space π(0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, 2p1/2) employing the code NATHAN. The modified Kuo-Herling interaction was used; no truncation was applied up to protactinium (Z=91) and seniority truncation beyond. The results are compared to experimental data including binding energies, level schemes and electromagnetic transition rates. An overall excellent agreement is obtained for states that can be described in this model space. Limitations of the approach with respect to excitations across the Z=82 and N=126 shells and deficiencies of the interaction are discussed. (orig.)

  5. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs

  6. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    Science.gov (United States)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models, owing to computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.

  7. Spatial scale separation in regional climate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Feser, F.

    2005-07-01

    In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea is that global and regional climate models perform best at different spatial scales, so the regional model should not alter the global model's results at large scales. The concept of large-scale nudging, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, large-scale nudging strongly reduces the divergence of the individual simulations compared to the standard-approach ensemble, which occasionally shows large differences between realisations. For climate hindcasts this method leads to results which are on average closer to observed states than the standard approach. The analysis of a regional climate model simulation can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than the analysis of unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter
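
    The essence of large-scale nudging is a scale-selective relaxation: filter the difference between the regional and driving fields so that only the large scales remain, then relax the regional field toward the driving field on those scales only. A minimal 1D Python sketch with a sharp spectral cutoff follows; the cutoff wavenumber, relaxation coefficient and toy fields are illustrative, not those of the thesis.

      import numpy as np

      def large_scale_part(field, k_cut):
          """Low-pass filter keeping wavenumbers |k| <= k_cut, a crude
          stand-in for the digital filters used for scale separation."""
          spec = np.fft.rfft(field)
          spec[k_cut + 1:] = 0.0
          return np.fft.irfft(spec, n=field.size)

      def nudge_large_scales(regional, driving, k_cut, alpha=0.1):
          """Relax only the large scales of the regional field toward the
          driving (global) field; regional-scale detail is untouched."""
          correction = large_scale_part(driving - regional, k_cut)
          return regional + alpha * correction

      # Toy fields: shared large-scale wave plus regional small-scale detail
      x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
      driving = np.sin(2 * x)
      regional = 0.8 * np.sin(2 * x) + 0.3 * np.sin(25 * x)
      regional = nudge_large_scales(regional, driving, k_cut=5)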

  8. Model-based plant-wide optimization of large-scale lignocellulosic bioethanol plants

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Blanke, Mogens; Jakobsen, Jon Geest

    2017-01-01

    Second generation biorefineries transform lignocellulosic biomass into chemicals with higher added value following a conversion mechanism that consists of: pretreatment, enzymatic hydrolysis, fermentation and purification. The objective of this study is to identify the optimal operational point with respect to maximum economic profit of a large scale biorefinery plant using a systematic model-based plantwide optimization methodology. The following key process parameters are identified as decision variables: pretreatment temperature, enzyme dosage in enzymatic hydrolysis, and yeast loading per batch in fermentation. The plant is treated in an integrated manner taking into account the interactions and trade-offs between the conversion steps. A sensitivity and uncertainty analysis follows at the optimal solution considering both model and feed parameters. It is found that the optimal point is more sensitive...

  9. Understanding dynamics of large-scale atmospheric vortices with moist-convective shallow water model

    International Nuclear Information System (INIS)

    Rostami, M.; Zeitlin, V.

    2016-01-01

    Atmospheric jets and vortices, which, together with inertia-gravity waves, constitute the principal dynamical entities of large-scale atmospheric motions, are well described in the framework of one- or multi-layer rotating shallow water models, obtained by vertical averaging of the full “primitive” equations. There is a simple and physically consistent way to include moist convection in these models by adding a relaxational parameterization of precipitation and coupling precipitation with convective fluxes with the help of moist enthalpy conservation. We recall the construction of the moist-convective rotating shallow water (mcRSW) model and give an example of application to upper-layer atmospheric vortices. (paper)
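
    The relaxational parameterization mentioned above has a very compact form: precipitation removes moisture in excess of a saturation value over a fixed relaxation time, and the associated latent-heat release is fed back through the moist enthalpy budget. A few illustrative lines of Python (symbols and values are generic assumptions, not taken from the paper):

      import numpy as np

      def precipitation_sink(q, q_s, tau):
          """Relaxational precipitation closure: moisture in excess of the
          saturation value q_s rains out over a relaxation time tau. The
          same sink, weighted by a latent-heat coefficient, enters the
          thickness (enthalpy) equation so moist enthalpy is conserved."""
          return np.maximum(q - q_s, 0.0) / tau

      q = np.array([0.8, 1.1, 1.4])   # nondimensional moisture field
      print(precipitation_sink(q, q_s=1.0, tau=100.0))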

  10. Double inflation: A possible resolution of the large-scale structure problem

    International Nuclear Information System (INIS)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations of the spatial and velocity correlations are given. 38 refs., 6 figs

  11. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  12. Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design

    KAUST Repository

    Afify, Laila H.

    2016-12-01

    The ever-growing demand for wireless technologies necessitates the evolution of next-generation wireless networks that fulfill diverse wireless users' requirements. However, upscaling existing wireless networks implies upscaling an intrinsic component of the wireless domain: the aggregate network interference. Since this is the main performance-limiting factor, it becomes crucial to develop a rigorous analytical framework to accurately characterize the out-of-cell interference, to reap the benefits of emerging networks. Due to the different network setups and key performance indicators, it is essential to conduct a comprehensive study that unifies the various network configurations together with the different tangible performance metrics. In that regard, the focus of this thesis is to present a unified mathematical paradigm, based on stochastic geometry, for large-scale networks with different antenna/network configurations. By exploiting such a unified study, we propose an efficient automated network design strategy to satisfy the desired network objectives. First, this thesis studies the exact aggregate network interference characterization, by accounting for each of the interferers' signals in the large-scale network. Second, we show that the information about the interferers' symbols can be approximated via the Gaussian signaling approach. The developed mathematical model provides a twofold unification of the uplink and downlink cellular network analyses in the literature. It aligns the tangible decoding error probability analysis with the abstract outage probability and ergodic rate analysis. Furthermore, it unifies the analysis for different antenna configurations, i.e., various multiple-input multiple-output (MIMO) systems. Accordingly, we propose a novel reliable network design strategy that is capable of appropriately adjusting the network parameters to meet desired design criteria. In addition, we discuss the diversity-multiplexing tradeoffs imposed by differently favored
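
    A concrete toy version of the aggregate-interference characterization: drop interferers as a Poisson point process in a disc around a receiver at the origin and accumulate their power-law path-loss contributions. The Python Monte Carlo sketch below uses unit transmit power and illustrative density, exclusion radius and path-loss exponent; the thesis develops closed-form stochastic-geometry analysis rather than simulation.

      import numpy as np

      rng = np.random.default_rng(1)

      def aggregate_interference(lam, radius, alpha, n_trials=10_000):
          """Monte Carlo samples of the aggregate interference at the
          origin from a Poisson point process of transmitter density lam
          (per unit area) inside a disc, with path-loss exponent alpha."""
          samples = np.empty(n_trials)
          area = np.pi * radius**2
          for t in range(n_trials):
              n = rng.poisson(lam * area)
              # uniform points in the disc via inverse-CDF radius sampling
              r = radius * np.sqrt(rng.random(n))
              r = np.maximum(r, 1e-2)   # guard the near-field singularity
              samples[t] = np.sum(r ** (-alpha))
          return samples

      I = aggregate_interference(lam=0.1, radius=50.0, alpha=4.0)
      print("mean interference:", I.mean())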

  13. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply JIT modeling online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of a plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
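
    Stripped of the retrieval machinery, JIT modeling predicts by fitting a small local model to the query's neighbors at prediction time. The Python sketch below uses brute-force nearest-neighbor search and a local affine least-squares fit on synthetic data; LOM's contribution is precisely to make the retrieval step scale to large databases, which is not attempted here.

      import numpy as np

      def jit_predict(X_db, y_db, x_query, k=50):
          """Just-In-Time (local) modelling in miniature: retrieve the k
          nearest neighbours of the query from the database and fit a
          local affine model on them only."""
          d = np.linalg.norm(X_db - x_query, axis=1)
          idx = np.argsort(d)[:k]
          A = np.hstack([X_db[idx], np.ones((k, 1))])  # affine local model
          coef, *_ = np.linalg.lstsq(A, y_db[idx], rcond=None)
          return np.append(x_query, 1.0) @ coef

      # Toy database standing in for historical plant data
      rng = np.random.default_rng(2)
      X = rng.uniform(-1, 1, size=(5000, 3))
      y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.standard_normal(5000)
      print(jit_predict(X, y, np.array([0.2, -0.3, 0.5])))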

  14. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) regimes, using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on 1024³ grids. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic field at large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.

  15. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured ...

  16. Dynamic model of frequency control in Danish power system with large scale integration of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2013-01-01

    This work evaluates the impact of large-scale integration of wind power in future power systems when 50% of load demand can be met from wind power. The focus is on active power balance control, where the main source of power imbalance is an inaccurate wind speed forecast. In this study, a Danish power system model with large-scale wind power is developed and a case study for an inaccurate wind power forecast is investigated. The goal of this work is to develop an adequate power system model that depicts relevant dynamic features of the power plants and compensates for load generation imbalances, caused by inaccurate wind speed forecast, by an appropriate control of the active power production from power plants.

  17. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...
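
    A minimal economic MPC can be written as a small convex program: choose generation and storage trajectories over a horizon that minimize cost subject to balance and capacity constraints. The Python/cvxpy sketch below is a generic illustration under hypothetical prices, loads and limits; it is not the controller developed in the thesis.

      import numpy as np
      import cvxpy as cp

      # Toy economic MPC: one generator and one storage unit balancing a
      # fluctuating net load at minimum cost over a 24 h horizon.
      T = 24
      price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))     # EUR/MWh
      net_load = 50 + 15 * np.cos(np.linspace(0, 4 * np.pi, T))  # MW

      g = cp.Variable(T)      # dispatchable generation (MW)
      u = cp.Variable(T)      # storage charge (+) / discharge (-) (MW)
      s = cp.Variable(T + 1)  # storage state of charge (MWh)

      constraints = [
          s[0] == 50,
          s[1:] == s[:-1] + u,   # 1 h steps, ideal efficiency for brevity
          g - u == net_load,     # power balance at every step
          g >= 0, g <= 120,
          s >= 0, s <= 100,
          cp.abs(u) <= 20,
      ]
      problem = cp.Problem(cp.Minimize(price @ g), constraints)
      problem.solve()
      print("optimal cost:", round(problem.value, 1))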

  18. Large scale FCI experiments in subassembly geometry. Test facility and model experiments

    International Nuclear Information System (INIS)

    Beutel, H.; Gast, K.

    A program is outlined for the study of fuel/coolant interaction under SNR conditions. The program consists of (a) underwater explosion experiments with full size models of the SNR core, in which the fuel/coolant system is simulated by a pyrotechnic mixture, and (b) large scale fuel/coolant interaction experiments with up to 5 kg of molten UO2 interacting with liquid sodium at 300 °C to 600 °C in a highly instrumented test facility simulating an SNR subassembly. The experimental results will be compared to theoretical models under development at Karlsruhe. Commencement of the experiments is expected at the beginning of 1975.

  19. Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-10-01

    In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed based on statistical methods applied to their historical operating data. Then the characteristic indexes and the filtering principle of the NEPG historical output scenarios are introduced together with the confidence level, and a calculation model for the credible capacity of NEPG is proposed. Based on this, taking the minimum production cost or the best energy-saving and emission-reduction effect as the optimization objective, a power system operation model with large-scale integration of NEPG is established considering the power balance, the electricity balance and the peak balance. Besides, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reserve, the emergency reserve, the water abandonment and the transmission capacity between different areas are also considered. With the proposed power system operation model, operation simulations are carried out on the actual Northwest power grid of China, resolving the accommodation of new energy power under different system operating conditions. The simulation results verify the validity of the proposed operation model in the accommodation analysis of a power system penetrated with large-scale NEPG.
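
    One simple reading of the credible-capacity idea is the output level that historical wind/PV production exceeded with a chosen confidence. The Python one-liner below is far cruder than the paper's scenario-filtered index construction and is shown only to fix the concept; the synthetic data are illustrative.

      import numpy as np

      def credible_capacity(hourly_output_mw, confidence=0.95):
          """Output level exceeded with the given confidence over the
          historical record; a crude stand-in for the paper's
          scenario-filtered credible-capacity index."""
          return np.quantile(hourly_output_mw, 1.0 - confidence)

      # Synthetic year of hourly wind output (MW); real use loads history.
      wind = np.random.default_rng(7).weibull(2.0, size=8760) * 40.0
      print(credible_capacity(wind, confidence=0.95))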

  20. Integrating an agent-based model into a large-scale hydrological model for evaluating drought management in California

    Science.gov (United States)

    Sheffield, J.; He, X.; Wada, Y.; Burek, P.; Kahil, M.; Wood, E. F.; Oppenheimer, M.

    2017-12-01

    California has endured record-breaking drought since winter 2011 and will likely experience more severe and persistent droughts in the coming decades under a changing climate. At the same time, human water management practices can also affect drought frequency and intensity, which underscores the importance of human behaviour in effective drought adaptation and mitigation. Currently, although a few large-scale hydrological and water resources models (e.g., PCR-GLOBWB) consider human water use and management practices (e.g., irrigation, reservoir operation, groundwater pumping), none of them includes the dynamic feedback between local human behaviors/decisions and the natural hydrological system. It is, therefore, vital to integrate social and behavioral dimensions into current hydrological modeling frameworks. This study applies the agent-based modeling (ABM) approach and couples it with a large-scale hydrological model (i.e., Community Water Model, CWatM) in order to achieve a balanced representation of social, environmental and economic factors and a more realistic representation of the bi-directional interactions and feedbacks in coupled human and natural systems. We focus on drought management in California and consider two types of agents, (groups of) farmers and state management authorities, whose objectives are assumed to be maximizing the net crop profit and maintaining a sufficient water supply, respectively. Farmers' behaviors are linked with local agricultural practices such as cropping patterns and deficit irrigation. More precisely, farmers' decisions are incorporated into CWatM across different time scales, from daily irrigation amounts, through seasonal/annual decisions on crop types and irrigated area, to long-term investment in irrigation infrastructure. This simulation-based optimization framework is further applied by performing different sets of scenarios to investigate and evaluate the effectiveness

  1. A large scale GIS geodatabase of soil parameters supporting the modeling of conservation practice alternatives in the United States

    Science.gov (United States)

    Water quality modeling requires across-scale support of combined digital soil elements and simulation parameters. This paper presents the unprecedented development of a large spatial scale (1:250,000) ArcGIS geodatabase coverage designed as a functional repository of soil parameters for modeling an

  2. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    OpenAIRE

    Sam Ali Al; Szasz Robert; Revstedt Johan

    2015-01-01

    The performance of sub-grid scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained using Large-Eddy Simulations at different mesh resolutions are compared with Direct Numerical Simulation data. The effectiveness of modeling the wall stresses using a local log-law is then tested on a relatively coarse grid. The results exhibit a good agreement between highly-resolved Large Eddy Simu...

  3. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region – Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows mainly involving the metamorphic geological formations outcropping in the area, triggered by the pluviometric event of 19 June 1996. In recent decades, landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large scale of investigation (>1:10,000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed in the following steps: landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions for the simulation related to a 75-year return time rainfall event, corresponding to an estimated cumulative daily intensity of 280–330 mm. This value can be considered the hydrological triggering
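
    The infinite slope model at the core of this analysis reduces to a single factor-of-safety expression, so it translates directly into code. The Python sketch below implements the standard formulation with pore pressure; all input values in the example are illustrative, not the paper's calibrated parameters.

      import numpy as np

      def infinite_slope_fs(c_eff, phi_eff_deg, gamma, z, beta_deg, u):
          """Infinite slope factor of safety:
          FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')]
               / [gamma * z * sin(beta) * cos(beta)]
          with c' effective cohesion (kPa), phi' friction angle, gamma unit
          weight (kN/m^3), z failure depth (m), beta slope angle and u pore
          pressure (kPa). FS < 1 indicates instability."""
          beta = np.deg2rad(beta_deg)
          phi = np.deg2rad(phi_eff_deg)
          resisting = c_eff + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)
          driving = gamma * z * np.sin(beta) * np.cos(beta)
          return resisting / driving

      # Example: a pore-pressure rise pushes the slope below FS = 1
      print(infinite_slope_fs(c_eff=5.0, phi_eff_deg=30.0, gamma=19.0,
                              z=1.5, beta_deg=35.0, u=12.0))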

  4. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    Science.gov (United States)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are
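
    Of the pattern-identification methods mentioned, composite analysis is the simplest to state precisely: average the circulation anomaly field over the days on which a local temperature extreme occurs. A Python sketch with synthetic stand-in data follows; a self-organizing map would require an additional library such as MiniSom.

      import numpy as np

      def composite_anomaly(z500_anom, t2m, quantile=0.95):
          """Average the 500 hPa geopotential height anomaly field over
          the days when the local temperature exceeds its chosen
          percentile. z500_anom: (time, lat, lon); t2m: (time,)."""
          hot_days = t2m >= np.quantile(t2m, quantile)
          return z500_anom[hot_days].mean(axis=0)

      # Synthetic stand-in data; real use would read reanalysis fields.
      rng = np.random.default_rng(3)
      z500_anom = rng.standard_normal((1000, 40, 60))
      t2m = rng.standard_normal(1000)
      pattern = composite_anomaly(z500_anom, t2m)  # (40, 60) composite map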

  5. A testing facility for large scale models at 100 bar and 300°C to 1000°C

    International Nuclear Information System (INIS)

    Zemann, H.

    1978-07-01

    A testing facility for large scale model tests is under construction with the support of Austrian industry. It will contain a Prestressed Concrete Pressure Vessel (PCPV) with a hot liner (300 °C at 100 bar), an electrical heating system (1.2 MW, 1000 °C), a gas supply system, and a cooling system for the testing space. The components themselves are models for advanced high temperature applications. The first main component to be tested successfully was the PCPV. Basic investigations of the building materials, improvements of concrete gauges, and large scale model tests and measurements within the structural concrete and on the liner have been made from the beginning of construction, through the periods of prestressing and stabilization, to the final pressurizing tests. On the basis of these investigations a computer controlled safety surveillance system for long term high pressure, high temperature tests has been developed. (author)

  6. Formation conditions, accumulation models and exploration direction of large-scale gas fields in Sinian-Cambrian, Sichuan Basin, China

    Directory of Open Access Journals (Sweden)

    Guoqi Wei

    2016-02-01

    According to comprehensive research on forming conditions, including sedimentary facies, reservoirs, source rocks, and palaeo-uplift evolution of the Sinian-Cambrian in the Sichuan Basin, it is concluded that: (1) large-scale inherited palaeo-uplifts, large-scale intracratonic rifts, three widely-distributed high-quality source rocks, four widely-distributed karst reservoirs, and oil pyrolysis gas were all favorable conditions for large-scale and high-abundance accumulation; (2) diverse accumulation models developed in different areas of the palaeo-uplift: in the core area of the inherited palaeo-uplift, an “in-situ” pyrolysis accumulation model of the paleo-reservoir developed, whereas in the slope area, a pyrolysis accumulation model of dispersed liquid hydrocarbon developed in the late-stage structural trap; (3) there are different exploration directions in the various areas of the palaeo-uplift: within the core area, the main targets are the inherited paleo-structural traps, which are also the foundation of lithologic-stratigraphic gas reservoirs, while in the slope areas, the main targets are the giant structural traps formed in the Himalayan Period.

  7. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    Birds are highly mobile organisms and there is increasing evidence that studies at large spatial scales are needed if we are to properly understand their population dynamics. While classical metapopulation models have rarely proved useful for birds, more general metapopulation ideas involving collections of populations interacting within spatially structured landscapes are highly relevant (Harrison, 1994). There is increasing interest in understanding patterns of synchrony, or lack of synchrony, between populations and the environmental and dispersal mechanisms that bring about these patterns (Paradis et al., 2000). To investigate these processes we need to measure abundance, demographic rates and dispersal at large spatial scales, in addition to gathering data on relevant environmental variables. There is an increasing realisation that conservation needs to address rapid declines of common and widespread species (they will not remain so if such trends continue) as well as the management of small populations that are at risk of extinction. While the knowledge needed to support the management of small populations can often be obtained from intensive studies in a few restricted areas, conservation of widespread species often requires information on population trends and processes measured at regional, national and continental scales (Baillie, 2001). While management prescriptions for widespread populations may initially be developed from a small number of local studies or experiments, there is an increasing need to understand how such results will scale up when applied across wider areas. There is also a vital role for monitoring at large spatial scales both in identifying such population declines and in assessing population recovery. Gathering data on avian abundance and demography at large spatial scales usually relies on the efforts of large numbers of skilled volunteers. Volunteer studies based on ringing (for example Constant Effort Sites [CES

  8. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physicochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolution. The increasing demand

  9. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT]; Marzouk, Youssef [MIT]

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as “reduce then sample” and “sample then reduce.” In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  10. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the inherent spatial heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input to the catchment models. A long-term simulation of this combined system makes it possible to derive very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observational data from 528 stations covering not only the whole of Germany but also parts of France, Switzerland, the Czech Republic and Austria, an aggregate spatial extent of 443,931 km². 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large scale approach overcomes the several
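
    With a long simulated discharge series in hand, deriving flood quantiles is a standard extreme-value exercise. The Python/scipy sketch below fits a GEV distribution to annual maxima and evaluates the 100-year flood; the synthetic series stands in for model output and all parameter values are illustrative. With 10,000 simulated years, plain empirical quantiles of the annual maxima work as well.

      import numpy as np
      from scipy.stats import genextreme

      def flood_quantile(annual_maxima, return_period_years):
          """Fit a GEV distribution to annual maximum discharges and return
          the flood quantile for the given return period."""
          shape, loc, scale = genextreme.fit(annual_maxima)
          p_non_exceed = 1.0 - 1.0 / return_period_years
          return genextreme.ppf(p_non_exceed, shape, loc=loc, scale=scale)

      # 10,000 synthetic "simulated" years of annual maxima (m^3/s)
      amax = genextreme.rvs(-0.1, loc=500, scale=150, size=10_000,
                            random_state=5)
      print("100-year flood:", round(flood_quantile(amax, 100), 1))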

  11. Evaluation model of project complexity for large-scale construction projects in Iran - A Fuzzy ANP approach

    Directory of Open Access Journals (Sweden)

    Aliyeh Kazemi

    2016-09-01

    Construction projects have always been complex, and as this complexity grows, implementing large-scale construction becomes harder. Hence, evaluating and understanding these complexities is critical. Correct evaluation of a project's complexity can provide executives and managers with a sound basis for decisions. Fuzzy analytic network process (ANP) is a logical and systematic approach to defining, evaluating, and grading. This method allows for analyzing complex systems and determining their complexity. In this study, by taking advantage of fuzzy ANP, effective indexes of complexity in large-scale construction projects in Iran have been determined and prioritized. The results show that the socio-political, project system interdependency, and technological complexity indexes ranked as the top three. Furthermore, in a comparison of three major project types (commercial-administrative, hospital, and skyscraper), the hospital project was evaluated as the most complex. This model is beneficial for professionals managing large-scale projects.

  12. A mixed-layer model study of the stratocumulus response to changes in large-scale conditions

    NARCIS (Netherlands)

    De Roode, S.R.; Siebesma, A.P.; Dal Gesso, S.; Jonker, H.J.J.; Schalkwijk, J.; Sival, J.

    2014-01-01

    A mixed-layer model is used to study the response of stratocumulus equilibrium state solutions to perturbations of cloud controlling factors which include the sea surface temperature, the specific humidity and temperature in the free troposphere, as well as the large-scale divergence and horizontal

  13. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas. Surveys varied subject areas and reports on individual results of research in the field. Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  14. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced under elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradients through the material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses, etc.). For such special cases, verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and the requirements for a good transfer of the results to an actual vessel. At the same time, the possibilities of small-scale model experiments are analysed, mostly in connection with the transfer of results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  15. Comparison of vibration test results for Atucha II NPP and large scale concrete block models

    International Nuclear Information System (INIS)

    Iizuka, S.; Konno, T.; Prato, C.A.

    2001-01-01

    In order to study the soil-structure interaction of a reactor building that could be constructed on Quaternary soil, a comparison study of soil-structure interaction springs was performed between full-scale vibration test results of the Atucha II NPP and vibration test results of large-scale concrete block models constructed on Quaternary soil. This comparison provides case data on soil-structure interaction springs on Quaternary soil for different foundation sizes and stiffnesses. (author)

  16. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  17. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered more than 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour on flood risk reduction is often lacking.
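
    A minimal sketch of how such adaptive behaviour can be represented: household agents invest in flood-proofing once their perceived risk outweighs the net cost, and a government agent reacts to a flood with a subsidy. All rules, probabilities and damage factors below are invented for illustration, not taken from the study:

        import random

        random.seed(1)

        class Household:
            def __init__(self):
                self.protected = False
                self.perceived_risk = random.uniform(0.0, 0.2)

            def step(self, flood_occurred, subsidy):
                # Risk perception jumps after a flood and decays otherwise.
                if flood_occurred:
                    self.perceived_risk = min(1.0, self.perceived_risk + 0.5)
                else:
                    self.perceived_risk *= 0.9
                # Invest once perceived risk outweighs the subsidized cost.
                if not self.protected and self.perceived_risk > 0.3 - subsidy:
                    self.protected = True

        def run(years=50, p_flood=0.05, n=1000):
            households = [Household() for _ in range(n)]
            subsidy, total_damage = 0.0, 0.0
            for _ in range(years):
                flood = random.random() < p_flood
                for h in households:
                    h.step(flood, subsidy)
                if flood:
                    # Protected households suffer only 20% of the unit damage.
                    total_damage += sum(0.2 if h.protected else 1.0
                                        for h in households)
                    subsidy = 0.1   # government responds by subsidizing measures
            return total_damage, sum(h.protected for h in households)

        print(run())   # (damage over the run, number of protected households)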

  18. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in radioactive waste disposal schemes, the Lasgit in situ experiment was planned and is currently in progress. Modelling the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large-scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  19. Adaptive Texture Synthesis for Large Scale City Modeling

    Science.gov (United States)

    Despine, G.; Colleu, T.

    2015-02-01

    Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing knowledge about elementary patterns in a texture catalogue, which allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow the most appropriate patterns to be selected from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.

  20. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the "Large-scale Structure of the Universe" symposium are considered at a popular level. Described are the cell structure of the galaxy distribution in the Universe and the principles of mathematical modelling of the galaxy distribution. Images of cell structures obtained after computer processing are given. Three hypotheses are discussed: vortical, entropic and adiabatic, suggesting different processes for the origin of galaxies and galaxy clusters. A considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a method for directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the disturbance properties at the pre-galaxy stage. The discussion of problems pertaining to the study of the hot gas contained in galaxy clusters, and of the interactions within galaxy clusters and with the inter-galaxy medium, is recognized to be a notable contribution to the development of theoretical and observational cosmology

  1. Investigation of the large scale regional hydrogeological situation at Ceberg

    International Nuclear Information System (INIS)

    Boghammar, A.; Grundfelt, B.; Hartley, L.

    1997-11-01

    The present study forms part of the large-scale groundwater flow studies within the SR 97 project. The site of interest is Ceberg. Within the present study, two different regional-scale groundwater models have been constructed: one large regional model with an areal extent of about 300 km2 and one semi-regional model with an areal extent of about 50 km2. Different types of boundary conditions have been applied to the models: topography-driven pressures, constant infiltration rates, non-linear infiltration combined with specified pressure boundary conditions, and transfer of groundwater pressures from the larger model to the semi-regional model. The present model has shown that: - Groundwater flow paths are mainly local. Large-scale groundwater flow paths are only seen below the depth of the hypothetical repository (below 500 metres) and are very slow. - Locations of recharge and discharge, to and from the site area, are in the close vicinity of the site. - The low contrast between major structures and the rock mass means that the factor having the major effect on the flow paths is the topography. - A model sufficiently large to incorporate the recharge and discharge areas of the local site is of the order of kilometres. - A uniform infiltration-rate boundary condition does not give a good representation of the groundwater movements in the model. - A local site model may be located to cover the site area and a few kilometres of the surrounding region. In order to incorporate all recharge and discharge areas within the site model, the model will be somewhat larger than site-scale models at other sites. This is caused by the fact that the discharge areas are divided into three distinct areas to the east, south and west of the site. - Boundary conditions may be supplied to the site model by means of transferring groundwater pressures obtained with the semi-regional model

  2. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  3. Tidal-induced large-scale regular bed form patterns in a three-dimensional shallow water model

    NARCIS (Netherlands)

    Hulscher, Suzanne J.M.H.

    1996-01-01

    The three-dimensional model presented in this paper is used to study how tidal currents form wave-like bottom patterns. Inclusion of vertical flow structure turns out to be necessary to describe the formation, or absence, of all known large-scale regular bottom features. The tide and topography are

  4. The use of soil moisture - remote sensing products for large-scale groundwater modeling and assessment

    NARCIS (Netherlands)

    Sutanudjaja, E.H.

    2012-01-01

    In this thesis, the possibilities of using spaceborne remote sensing for large-scale groundwater modeling are explored. We focus on a soil moisture product called European Remote Sensing Soil Water Index (ERS SWI, Wagner et al., 1999) - representing the upper profile soil moisture. As a test-bed, we

  5. Probing cosmology with the homogeneity scale of the Universe through large scale structure surveys

    International Nuclear Information System (INIS)

    Ntelis, Pierros

    2017-01-01

    This thesis presents my contribution to the measurement of the homogeneity scale using galaxies, together with the cosmological interpretation of the results. In physics, any model is characterized by a set of principles. Most models in cosmology are based on the Cosmological Principle, which states that the universe is statistically homogeneous and isotropic on large scales. Today, this principle is considered to be true since it is respected by those cosmological models that accurately describe the observations. However, while the isotropy of the universe is now confirmed by many experiments, this is not the case for homogeneity. To study cosmic homogeneity, we propose not only to test a model but to directly test one of the postulates of modern cosmology. Since the 1998 measurements of cosmic distances using type Ia supernovae, we know that the universe is now in a phase of accelerated expansion. This phenomenon can be explained by the addition of an unknown energy component, which is called dark energy. Since dark energy is responsible for the expansion of the universe, we can study this mysterious fluid by measuring the rate of expansion of the universe. The universe has imprinted in its matter distribution a standard ruler, the Baryon Acoustic Oscillation (BAO) scale. By measuring this scale at different times during the evolution of our universe, it is possible to measure the rate of expansion of the universe and thus characterize this dark energy. Alternatively, we can use the homogeneity scale to study this dark energy. Studying the homogeneity and the BAO scale requires the statistical study of the matter distribution of the universe at large scales, greater than tens of megaparsecs. Galaxies and quasars are formed in the vast overdensities of matter and they are very luminous: these sources trace the distribution of matter. By measuring the emission spectra of these sources using large spectroscopic surveys, such as BOSS and eBOSS, we can measure their positions

  6. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  7. Long-term modelling of Carbon Capture and Storage, Nuclear Fusion, and large-scale District Heating

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik; Korsholm, Søren Bang; Lüthje, Mikael

    2011-01-01

    before 2050. The modelling tools developed by the International Energy Agency (IEA) Implementing Agreement ETSAP include both multi-regional global and long-term energy models till 2100, as well as national or regional models with shorter time horizons. Examples are the EFDA-TIMES model, focusing...... on nuclear fusion and the Pan European TIMES model, respectively. In the next decades CCS can be a driver for the development and expansion of large-scale district heating systems, which are currently widespread in Europe, Korea and China, and with large potentials in North America. If fusion will replace...... fossil fuel power plants with CCS in the second half of the century, the same infrastructure for heat distribution can be used which will support the penetration of both technologies. This paper will address the issue of infrastructure development and the use of CCS and fusion technologies using...

  8. Imprint of thawing scalar fields on the large scale galaxy overdensity

    Science.gov (United States)

    Dinda, Bikash R.; Sen, Anjan A.

    2018-04-01

    We investigate the observed galaxy power spectrum for the thawing class of scalar field models, taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power relative to ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in a suppression of power relative to ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and for smaller redshifts due to different GR effects. But on smaller scales and for larger redshifts, the difference is small and is mainly due to the difference in background expansion.

  9. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using models other than Newton's method: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and compute a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
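
    As a minimal illustration of the idea behind Broyden's method, the sketch below solves a small invented nonlinear system with the classic rank-one Jacobian update; this is the dense textbook variant, not the limited-memory version developed in the report:

        import numpy as np

        def broyden(F, x0, tol=1e-10, max_iter=100):
            """Solve F(x) = 0 using Broyden's rank-one Jacobian update."""
            x = np.asarray(x0, dtype=float)
            f = F(x)
            # Initialize B with a finite-difference Jacobian for robustness.
            eps = 1e-7
            B = np.array([(F(x + eps * e) - f) / eps
                          for e in np.eye(len(x))]).T
            for _ in range(max_iter):
                if np.linalg.norm(f) < tol:
                    break
                s = np.linalg.solve(B, -f)          # quasi-Newton step
                x_new = x + s
                f_new = F(x_new)
                y = f_new - f
                # B <- B + (y - B s) s^T / (s^T s): no further Jacobian
                # evaluations are needed after the initial one.
                B += np.outer(y - B @ s, s) / (s @ s)
                x, f = x_new, f_new
            return x

        # Invented test system: x0^2 + x1^2 = 4 and exp(x0) + x1 = 1.
        F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0,
                                np.exp(x[0]) + x[1] - 1.0])
        print(broyden(F, [1.0, -1.0]))   # converges near (1.00, -1.73)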

  10. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

    The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  11. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, based on an analysis of the characteristics and shortcomings of the genetic algorithm and the support vector machine. In a cloud computing environment, SVM parameters are first optimized by a parallel genetic algorithm, and the optimized parallel SVM model is then used to predict traffic flow. Using traffic flow data from the Haizhu District of Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
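
    A toy sketch of the GA-SVM idea: evolve the SVR hyper-parameters (C, gamma) against cross-validated error with scikit-learn. The synthetic "traffic flow" series, the GA settings and the serial (non-parallel) evaluation are all illustrative simplifications of the paper's cloud-parallel setup:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Synthetic "traffic flow": periodic pattern plus noise; features are
        # the 12 preceding observations (sliding window), target is the next one.
        t = np.arange(800)
        flow = 100 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)
        X = np.lib.stride_tricks.sliding_window_view(flow[:-1], 12)
        y = flow[12:]

        def fitness(params):
            C, gamma = 10 ** params[0], 10 ** params[1]
            return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                                   scoring="neg_mean_absolute_error").mean()

        # Simple GA over log10(C) in [-1, 3] and log10(gamma) in [-4, 0]:
        # score the population, keep the best half, mutate it to refill.
        lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 0.0])
        pop = rng.uniform(lo, hi, size=(12, 2))
        for generation in range(10):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[-6:]]
            children = parents[rng.integers(0, 6, 6)] + rng.normal(0, 0.2, (6, 2))
            pop = np.vstack([parents, np.clip(children, lo, hi)])

        best = pop[np.argmax([fitness(p) for p in pop])]
        print("best log10(C), log10(gamma):", best)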

  12. Fatigue Analysis of Large-scale Wind Turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available This paper studies top-flange fatigue damage in a large-scale wind turbine generator. It establishes a finite element model of the top-flange connection system with the finite element analysis software MSC.Marc/Mentat and analyzes its fatigue strain, simulates the flange fatigue loading conditions with the Bladed software, acquires the flange fatigue load spectrum with the rain-flow counting method, and finally carries out the fatigue analysis of the top flange with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis for large-scale wind turbine generators and have practical engineering value.
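
    The two post-processing steps named above can be sketched compactly: a simplified three-point rainflow count of a stress history followed by Palmgren-Miner summation D = sum(n_i / N_i) against an S-N curve. The stress signal and the S-N constants below are invented; production rainflow counting follows ASTM E1049 and treats residue half-cycles more carefully:

        import numpy as np

        def turning_points(x):
            d = np.diff(x)
            idx = np.where(d[1:] * d[:-1] < 0)[0] + 1
            return np.concatenate([[x[0]], x[idx], [x[-1]]])

        def rainflow(series):
            """Simplified three-point rainflow count; returns cycle ranges."""
            stack, ranges = [], []
            for p in turning_points(series):
                stack.append(p)
                while len(stack) >= 3:
                    X = abs(stack[-1] - stack[-2])
                    Y = abs(stack[-2] - stack[-3])
                    if X < Y:
                        break
                    ranges.append(Y)        # close the inner cycle of range Y
                    del stack[-3:-1]        # drop its two turning points
            # Residue counted here simply as full cycles (a simplification).
            ranges += [abs(a - b) for a, b in zip(stack, stack[1:])]
            return ranges

        # Invented stress history and S-N curve N = (S_f / S)^m.
        rng = np.random.default_rng(3)
        stress = 80 * np.sin(np.linspace(0, 40 * np.pi, 4000)) \
                 + rng.normal(0, 20, 4000)
        S_f, m = 900.0, 5.0

        # Palmgren-Miner: damage D = sum n_i / N_i; failure predicted at D >= 1.
        D = sum((S / S_f) ** m for S in rainflow(stress))
        print(f"cumulative damage per load block: {D:.3e}")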

  13. Measuring the topology of large-scale structure in the universe

    Science.gov (United States)

    Gott, J. Richard, III

    1988-11-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  14. Measuring the topology of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Gott, J.R. III

    1988-01-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data. 45 references

  15. Influence of weathering and pre-existing large scale fractures on gravitational slope failure: insights from 3-D physical modelling

    Directory of Open Access Journals (Sweden)

    D. Bachmann

    2004-01-01

    Full Text Available Using a new 3-D physical modelling technique we investigated the initiation and evolution of large-scale landslides in the presence of pre-existing large-scale fractures, taking into account the weakening of the slope material due to alteration/weathering. The modelling technique is based on specially developed, properly scaled analogue materials, as well as on an original vertical accelerator device enabling increases in the 'gravity acceleration' up to a factor of 50. Weathering primarily affects the uppermost layers through water circulation. We simulated the effect of this process by making models of two parts. The shallower part represents the zone subject to homogeneous weathering and is made of a low-strength material of compressive strength σl. The deeper (core) part of the model is stronger and simulates intact rocks. Deformation of such a model subjected to the gravity force occurred only in its upper (low-strength) layer. In another set of experiments, narrow planar zones of low strength (σw) sub-parallel to the slope surface (σw < σl) were introduced into the model's superficial low-strength layer to simulate localized highly weathered zones. In this configuration landslides were initiated much more easily (at lower 'gravity force'), were shallower and had a smaller horizontal size, largely defined by the weak zone size. Pre-existing fractures were introduced into the model by cutting it along a given plane. They proved to have little influence on slope stability, except when they were associated with highly weathered zones; in this latter case the fractures laterally limited the slides. Deep-seated rockslide initiation is thus directly controlled by the mechanical structure of the hillslope's uppermost levels and especially by the presence of weak zones due to weathering. The large-scale fractures play a more passive role and can only influence the shape and the volume of the sliding units.
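
    The 'gravity acceleration' factor matters because stress similarity requires the analogue material strength to scale with density, gravity level and geometric scale together. A short sketch of that similarity calculation, with all prototype numbers invented for illustration:

        # Stress similarity for an accelerated-gravity analogue model:
        # gravitational stress ~ rho * g * L, so the model material must satisfy
        #   sigma_model = sigma_proto * (rho_m/rho_p) * (g_m/g_p) * (L_m/L_p).

        sigma_proto = 50e6                       # Pa, invented rock strength
        rho_proto, rho_model = 2700.0, 1500.0    # kg/m^3
        L_proto, L_model = 500.0, 0.2            # m (geometric scale 1:2500)
        g_factor = 50.0                          # accelerator: g_m = 50 * g_p

        sigma_model = sigma_proto * (rho_model / rho_proto) \
                      * g_factor * (L_model / L_proto)
        print(f"required analogue material strength: {sigma_model:.3g} Pa")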

  16. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large-scale ventilation systems patterned after the ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamics and heat transport through a large-scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamics and simultaneous transport of heat and solid particulate (glass beads with a mean aerodynamic diameter of 10 μm) through the large-scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large-scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably because the fire module of the code is a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  17. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    Science.gov (United States)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single-catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for the analysis of larger data sets and also provide a consistent tool for the creation and analysis of waterways over extensive areas. However, they are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km2, and a detailed 13 km2 area within the Wivenhoe catchment), including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.

  18. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    Science.gov (United States)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large-scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. The impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.

  19. Large Scale Visual Recommendations From Street Fashion Images

    OpenAIRE

    Jagadeesh, Vignesh; Piramuthu, Robinson; Bhardwaj, Anurag; Di, Wei; Sundaresan, Neel

    2014-01-01

    We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose four data driven models in the form of Complementary Nearest Neighbor Consensus, Gaussian Mixture Models, Texture Agnostic Retrieval and Markov Chain LDA for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive e...

  20. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    Science.gov (United States)

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has still hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before reaching a stable state where Heaps' law still holds while strict Zipf's law has disappeared. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results for pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help us understand the temporal evolution of the scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of the epidemic processes, and the heterogeneity of the epidemic spread indicates the significance of performing targeted containment strategies early in a pandemic disease.
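
    How the two scalings are typically measured can be sketched on synthetic data: a simple rich-get-richer visiting process (a Simon/Yule-type model, not the paper's metapopulation model) generates a sequence whose rank-frequency distribution follows Zipf's law while the number of distinct sites grows according to Heaps' law:

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(7)

        # With probability p_new discover a new site; otherwise revisit an old
        # site proportionally to its past visits (sample the visit history).
        p_new, history, n_sites, distinct = 0.05, [], 0, []
        for t in range(200_000):
            if n_sites == 0 or rng.random() < p_new:
                n_sites += 1
                history.append(n_sites)
            else:
                history.append(history[rng.integers(len(history))])
            distinct.append(n_sites)

        # Zipf: log frequency vs log rank; Heaps: log distinct sites vs log time.
        freq = np.sort(list(Counter(history).values()))[::-1].astype(float)
        ranks = np.arange(1, freq.size + 1)
        zipf = np.polyfit(np.log(ranks[:200]), np.log(freq[:200]), 1)[0]
        heaps = np.polyfit(np.log(np.arange(1, len(distinct) + 1)),
                           np.log(distinct), 1)[0]
        print(f"Zipf exponent ~ {-zipf:.2f}, Heaps exponent ~ {heaps:.2f}")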

  1. Analysis of effectiveness of possible queuing models at gas stations using the large-scale queuing theory

    Directory of Open Access Journals (Sweden)

    Slaviša M. Ilić

    2011-10-01

    Full Text Available This paper analyzes the effectiveness of possible queuing models at gas stations, using a mathematical model from large-scale queuing theory. Based on actual data collected and a statistical analysis of the expected intensity of vehicle arrivals and queuing at gas stations, the real queuing process was modelled mathematically and certain parameters were quantified, identifying the weaknesses of the existing models and the possible benefits of an automated queuing model.
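
    The standard building block for such an analysis is the M/M/c queue. A minimal sketch computing the Erlang-C waiting probability and the mean queue metrics for different numbers of pumps; the arrival and service rates below are invented, not the paper's measured data:

        import math

        def mmc_metrics(lam, mu, c):
            """M/M/c queue: lam arrivals/h, mu services/h per server, c servers."""
            rho = lam / (c * mu)
            assert rho < 1, "queue is unstable"
            a = lam / mu
            # Erlang-C: probability that an arriving vehicle has to wait.
            p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                        + a**c / (math.factorial(c) * (1 - rho)))
            p_wait = a**c / (math.factorial(c) * (1 - rho)) * p0
            Lq = p_wait * rho / (1 - rho)    # mean number of waiting vehicles
            Wq = Lq / lam                    # mean waiting time (Little's law)
            return p_wait, Lq, Wq

        # Invented example: 60 vehicles/h, each pump serves 15 vehicles/h.
        for c in (5, 6, 7):
            p_wait, Lq, Wq = mmc_metrics(60.0, 15.0, c)
            print(f"{c} pumps: P(wait)={p_wait:.2f}, "
                  f"Lq={Lq:.2f}, Wq={60 * Wq:.1f} min")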

  2. Hierarchical Cantor set in the large scale structure with torus geometry

    Energy Technology Data Exchange (ETDEWEB)

    Murdzek, R. [Physics Department, 'Al. I. Cuza' University, Blvd. Carol I, Nr. 11, Iassy 700506 (Romania)], E-mail: rmurdzek@yahoo.com

    2008-12-15

    The formation of large-scale structures is considered within a model with a string on a toroidal space-time. First, the space-time geometry is presented: the Universe is represented by a string describing a torus surface. Thereafter, the large-scale structure of the Universe is derived from the string oscillations. The results are in agreement with the cellular structure of the large-scale distribution and with the theory of a Cantorian space-time.

  3. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
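
    The flavour of derivative-based uncertainty propagation can be reproduced on the classic borehole flow-rate function: a first-order Taylor expansion around the mean maps parameter variances to an output variance using one nominal run plus one gradient. Here the gradient is estimated by central finite differences, whereas GRESS/ADGEN obtain derivatives by automated code differentiation; the parameter means and standard deviations are illustrative:

        import numpy as np

        def borehole(p):
            """Classic borehole flow-rate test function (m^3/yr)."""
            rw, r, Tu, Hu, Tl, Hl, L, Kw = p
            lnr = np.log(r / rw)
            return (2 * np.pi * Tu * (Hu - Hl) /
                    (lnr * (1 + 2 * L * Tu / (lnr * rw**2 * Kw) + Tu / Tl)))

        # Illustrative parameter means and standard deviations.
        mean = np.array([0.10, 2500., 8.0e4, 1050., 90., 760., 1400., 1.0e4])
        std  = np.array([0.015, 500., 5.0e3,   25., 15.,  25.,  100., 1.5e3])

        # First-order propagation: Var[f] ~ sum_i (df/dp_i)^2 * Var[p_i].
        grad = np.zeros_like(mean)
        for i in range(mean.size):
            h = 1e-5 * mean[i]
            up, dn = mean.copy(), mean.copy()
            up[i] += h
            dn[i] -= h
            grad[i] = (borehole(up) - borehole(dn)) / (2 * h)
        sigma_f = np.sqrt(np.sum((grad * std) ** 2))

        # Monte Carlo reference (assuming independent normal parameters).
        rng = np.random.default_rng(0)
        samples = rng.normal(mean, std, size=(50_000, mean.size))
        mc = np.apply_along_axis(borehole, 1, samples)
        print(f"derivative-based sigma: {sigma_f:.1f}, "
              f"Monte Carlo sigma: {mc.std():.1f}")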

  4. Inflationary tensor fossils in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Dimastrogiovanni, Emanuela [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Fasiello, Matteo [Department of Physics, Case Western Reserve University, Cleveland, OH 44106 (United States); Jeong, Donghui [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Kamionkowski, Marc, E-mail: ema@physics.umn.edu, E-mail: mrf65@case.edu, E-mail: duj13@psu.edu, E-mail: kamion@jhu.edu [Department of Physics and Astronomy, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218 (United States)

    2014-12-01

    Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  5. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate, as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data, or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can help with many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
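
    The dependency-graph style used by Luigi-based workflow systems can be sketched as follows: each task declares its upstream requirements and its output target, and the scheduler resolves the graph. Plain Luigi is shown here (SciLuigi layers a separately defined network of task connections on top of this); the task names, file paths and parameters are invented:

        import luigi

        class PreprocessData(luigi.Task):
            """First step: produce a cleaned dataset file."""
            def output(self):
                return luigi.LocalTarget("data/cleaned.csv")

            def run(self):
                with self.output().open("w") as f:
                    f.write("feature,label\n1.0,0\n2.0,1\n")  # stand-in work

        class TrainModel(luigi.Task):
            """Second step: depends on preprocessing, then 'trains' a model."""
            c_param = luigi.FloatParameter(default=1.0)

            def requires(self):
                return PreprocessData()          # dependency declaration

            def output(self):
                return luigi.LocalTarget(f"models/model_C{self.c_param}.txt")

            def run(self):
                with self.input().open() as f:
                    n_rows = sum(1 for _ in f) - 1
                with self.output().open("w") as f:
                    f.write(f"trained on {n_rows} rows, C={self.c_param}\n")

        if __name__ == "__main__":
            # Several parameter settings; the shared dependency runs only once.
            luigi.build([TrainModel(c_param=c) for c in (0.1, 1.0, 10.0)],
                        local_scheduler=True)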

  6. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real-world social network follows the small-world phenomenon, which indicates that any two social entities are reachable from each other in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. Uncovering communities in large-scale networks is, however, a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, but when large-scale social networks are considered, these algorithms are observed to take considerably longer. In this work, with the objective of improving efficiency, a parallel programming framework like Map-Reduce has been used for uncovering the hidden communities in a social network. The proposed approach has been compared with some standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
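
    A toy sketch of the random-walk intuition behind such methods: nodes whose short random walks end up in similar places are grouped together, here by clustering walk-endpoint profiles on a planted two-community graph with networkx and scikit-learn. This illustrates the principle only; it is not the paper's Map-Reduce algorithm:

        import networkx as nx
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)

        # Planted two-community graph: dense blocks, sparse inter-links.
        G = nx.planted_partition_graph(2, 30, p_in=0.3, p_out=0.02, seed=5)

        def walk_profile(G, start, n_walks=200, length=4):
            """Empirical distribution of random-walk endpoints from `start`."""
            profile = np.zeros(len(G))
            for _ in range(n_walks):
                v = start
                for _ in range(length):
                    nbrs = list(G.neighbors(v))
                    if not nbrs:
                        break
                    v = nbrs[rng.integers(len(nbrs))]
                profile[v] += 1
            return profile / n_walks

        X = np.array([walk_profile(G, v) for v in G.nodes])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

        # Nodes 0-29 form block 0 and nodes 30-59 block 1 in the planted graph.
        truth = np.array([0] * 30 + [1] * 30)
        agreement = max((labels == truth).mean(), (labels != truth).mean())
        print(f"community recovery agreement: {agreement:.2f}")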

  7. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    International Nuclear Information System (INIS)

    Schroeder, William J.

    2011-01-01

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  8. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    Energy Technology Data Exchange (ETDEWEB)

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally

  9. ADAPTIVE TEXTURE SYNTHESIS FOR LARGE SCALE CITY MODELING

    Directory of Open Access Journals (Sweden)

    G. Despine

    2015-02-01

    Full Text Available Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing knowledge about elementary patterns in a texture catalogue, which allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow the most appropriate patterns to be selected from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.

  10. Nonlinear evolution of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Frenk, C.S.; White, S.D.M.; Davis, M.

    1983-01-01

    Using N-body simulations we study the nonlinear development of primordial density perturbations in an Einstein-de Sitter universe. We compare the evolution of an initial distribution without small-scale density fluctuations to evolution from a random Poisson distribution. These initial conditions mimic the assumptions of the adiabatic and isothermal theories of galaxy formation. The large-scale structures which form in the two cases are markedly dissimilar. In particular, the correlation function ξ(r) and the visual appearance of our adiabatic (or "pancake") models match better the observed distribution of galaxies. This distribution is characterized by large-scale filamentary structure. Because the pancake models do not evolve in a self-similar fashion, the slope of ξ(r) steepens with time; as a result there is a unique epoch at which these models fit the galaxy observations. We find the ratio of cutoff length to correlation length at this time to be λ_min/r_0 = 5.1; its expected value in a neutrino-dominated universe is 4(Ωh)^-1 (H_0 = 100h km s^-1 Mpc^-1). At early epochs these models predict a negligible amplitude for ξ(r) and could explain the lack of measurable clustering in the Lyα absorption lines of high-redshift quasars. However, large-scale structure in our models collapses after z = 2. If this collapse precedes galaxy formation as in the usual pancake theory, galaxies formed uncomfortably recently. The extent of this problem may depend on the cosmological model used; the present series of experiments should be extended in the future to include models with Ω < 1
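
    The diagnostic used throughout this record, the correlation function ξ(r), can be estimated from a point catalogue by comparing pair counts against an unclustered random catalogue. A minimal sketch with the simple Peebles-Hauser estimator xi = DD/RR - 1 on invented clustered toy data:

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(11)

        # Toy "galaxy" catalogue: points clustered around 50 centres in a unit box.
        centres = rng.uniform(0, 1, (50, 3))
        data = (centres[rng.integers(0, 50, 2000)]
                + rng.normal(0, 0.02, (2000, 3))) % 1.0
        rand = rng.uniform(0, 1, (2000, 3))    # unclustered reference catalogue

        def pair_counts(pts, edges):
            """Pair counts per radial bin via cumulative neighbour counts."""
            tree = cKDTree(pts)
            cum = np.array([tree.count_neighbors(tree, r) for r in edges],
                           dtype=float)
            return np.diff(cum)

        edges = np.linspace(0.01, 0.2, 12)
        DD = pair_counts(data, edges)
        RR = pair_counts(rand, edges)
        xi = DD / RR - 1.0                     # Peebles-Hauser estimator
        for r, x in zip(0.5 * (edges[1:] + edges[:-1]), xi):
            print(f"r = {r:.3f}: xi = {x:+.2f}")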

  11. Power suppression at large scales in string inflation

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, Michele [Dipartimento di Fisica ed Astronomia, Università di Bologna, via Irnerio 46, Bologna, 40126 (Italy); Downes, Sean; Dutta, Bhaskar, E-mail: mcicoli@ictp.it, E-mail: sddownes@physics.tamu.edu, E-mail: dutta@physics.tamu.edu [Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX, 77843-4242 (United States)

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics, which involves a sharp transition from a fast-roll power-law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to a positive power of the string coupling. Therefore, once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need for extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation, where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large-scale power loss. We compute the effects of this pivot for example cases and demonstrate how the magnitude and duration of this effect depend on the model parameters.

  12. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    Science.gov (United States)

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remain largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers, while suggesting a possible mechanism for maintaining the epigenetic programming of large-scale chromatin domains throughout DNA replication. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Some Statistics for Measuring Large-Scale Structure

    OpenAIRE

    Brandenberger, Robert H.; Kaplan, David M.; Ramsey, Stephen A.

    1993-01-01

    Good statistics for measuring large-scale structure in the Universe must be able to distinguish between different models of structure formation. In this paper, two- and three-dimensional "counts in cells" statistics and a new "discrete genus statistic" are applied to toy versions of several popular theories of structure formation: the random-phase cold dark matter model, cosmic string models, and the global texture scenario. All three statistics appear quite promising in terms of differentiating betw...
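
    A counts-in-cells statistic is straightforward to compute from a point catalogue. The following minimal Python sketch (illustrative only, not the authors' code) grids a unit-cube catalogue and returns the Poisson-corrected variance of the cell counts, which vanishes for an unclustered distribution:

        import numpy as np

        def counts_in_cells(points, n_cells):
            """Histogram points (N x 3, in [0, 1)^3) onto a cubic grid and
            return the Poisson-corrected variance of the cell counts."""
            edges = np.linspace(0.0, 1.0, n_cells + 1)
            counts, _ = np.histogramdd(points, bins=(edges, edges, edges))
            counts = counts.ravel()
            nbar = counts.mean()
            # Poisson-subtracted variance; > 0 indicates clustering
            return (counts.var() - nbar) / nbar**2

        rng = np.random.default_rng(0)
        random_catalogue = rng.random((100_000, 3))
        print(counts_in_cells(random_catalogue, 16))  # ~0 for an unclustered field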

  14. Modeling and simulation of large scale stirred tank

    Science.gov (United States)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants that had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington; West Valley in New York; and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top region and bottom region of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham Plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion to internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the
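
    The agitator power demand referred to above is conventionally correlated through a dimensionless power number. The sketch below (Python, with hypothetical slurry values, not figures from the dissertation) applies the standard turbulent-regime relation P = Np·ρ·N³·D⁵:

        # Illustrative estimate of agitator power demand from the impeller
        # power number correlation P = Np * rho * N^3 * D^5 (turbulent regime).
        def agitator_power(power_number, density, rev_per_s, diameter):
            return power_number * density * rev_per_s**3 * diameter**5

        # Hypothetical values: a radial (Rushton-like) blade in a dense slurry.
        P = agitator_power(power_number=5.0, density=1400.0,  # kg/m^3
                           rev_per_s=1.5, diameter=1.2)       # m
        print(f"power demand ~ {P / 1000:.1f} kW")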

  15. Large-eddy simulation with accurate implicit subgrid-scale diffusion

    NARCIS (Netherlands)

    B. Koren (Barry); C. Beets

    1996-01-01

    A method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is

  16. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  17. Large scale sodium-water reaction tests for Monju steam generators

    International Nuclear Information System (INIS)

    Sato, M.; Hiroi, H.; Hori, M.

    1976-01-01

    To demonstrate the safe design of the steam generator system of the prototype fast reactor Monju against the postulated large-leak sodium-water reaction, a large scale test facility, SWAT-3, was constructed. SWAT-3 is a 1/2.5 scale model of the Monju secondary loop on the basis of iso-velocity modeling. Two tests have been conducted in SWAT-3 since its construction. The test items using SWAT-3 are discussed, and a description of the facility and the test results are presented

  18. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    Science.gov (United States)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-01

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  19. COMPARISON OF MULTI-SCALE DIGITAL ELEVATION MODELS FOR DEFINING WATERWAYS AND CATCHMENTS OVER LARGE AREAS

    Directory of Open Access Journals (Sweden)

    B. Harris

    2012-07-01

    Full Text Available Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (Wivenhoe catchment, 543 km2, and a detailed 13 km2 area within the Wivenhoe catchment), including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution Lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
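
    Waterway delineation from a DEM typically begins with a flow-direction pass such as the classic D8 operator, sketched below in Python (pit filling and flow accumulation, which a full GIS workflow would add, are omitted):

        import numpy as np

        def d8_flow_direction(dem):
            """Return, for each interior cell, the index (0-7) of the
            steepest downslope neighbour (classic D8), or -1 for pits."""
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                       (0, 1), (1, -1), (1, 0), (1, 1)]
            nrows, ncols = dem.shape
            direction = np.full(dem.shape, -1, dtype=int)
            for i in range(1, nrows - 1):
                for j in range(1, ncols - 1):
                    # Distance-weighted drop to each of the eight neighbours
                    drops = [(dem[i, j] - dem[i + di, j + dj]) /
                             np.hypot(di, dj) for di, dj in offsets]
                    k = int(np.argmax(drops))
                    if drops[k] > 0:
                        direction[i, j] = k
            return direction

        dem = np.array([[5., 4., 3.], [4., 3., 2.], [3., 2., 1.]])
        print(d8_flow_direction(dem))  # interior cell drains to offset (1, 1)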

  20. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy, and the political sphere. In this very position, large-scale research and policy consulting lack the institutional guarantees and rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods with due regard to profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting neither has the political system's assigned competence to make decisions, nor can it judge successfully by the critical standards of established social science, at least as far as the present situation is concerned. In three respects, this intermediary position of large-scale research and policy consulting supports the thesis that this is a new form of institutionalization of science: 1) external control, 2) the form of organization, 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  1. Large-scale Modeling of Nitrous Oxide Production: Issues of Representing Spatial Heterogeneity

    Science.gov (United States)

    Morris, C. K.; Knighton, J.

    2017-12-01

    Nitrous oxide is produced from the biological processes of nitrification and denitrification in terrestrial environments and contributes to the greenhouse effect that warms Earth's climate. Large-scale modeling can be used to determine how the global rates of nitrous oxide production and consumption will shift under future climates. However, accurate modeling of nitrification and denitrification is made difficult by highly parameterized, nonlinear equations. Here we show that the representation of spatial heterogeneity in inputs, specifically soil moisture, causes inaccuracies in estimating the average nitrous oxide production in soils. We demonstrate that when soil moisture is averaged over a spatially heterogeneous surface, net nitrous oxide production is underpredicted. We apply this general result in a test of a widely used global land surface model, the Community Land Model v4.5. The challenges presented by nonlinear controls on nitrous oxide are highlighted here to provide a wider context to the problem of extraordinary denitrification losses in CLM. We hope that these findings will inform future researchers on the possibilities for model improvement of the global nitrogen cycle.
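
    The underprediction reported here is an instance of Jensen's inequality: for a convex response, the rate evaluated at the mean soil moisture lies below the mean of the rates. The toy Python sketch below uses a hypothetical convex moisture-response curve (illustrative only, not the CLM4.5 parameterization) to reproduce the effect:

        import numpy as np

        # Hypothetical convex response of N2O production to soil moisture
        # (illustrative only; not the CLM4.5 parameterization).
        def n2o_rate(theta):
            return theta ** 4

        rng = np.random.default_rng(1)
        theta = rng.uniform(0.2, 0.9, size=10_000)   # heterogeneous surface

        mean_of_rates = n2o_rate(theta).mean()       # resolve, then average
        rate_of_mean = n2o_rate(theta.mean())        # average, then resolve
        print(mean_of_rates, rate_of_mean)           # rate_of_mean is smaller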

  2. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalyses estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. In aggregate, the simulations of land-surface latent and
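
    The aggregation described above reduces, in essence, to an area-weighted spatio-temporal root-mean-square error. A minimal sketch (assuming fields on a common grid with area weights that sum to one):

        import numpy as np

        def aggregated_rmse(sim, ref, weights):
            """Area-weighted spatio-temporal RMSE between simulated and
            reference fields of shape (time, cells); weights sum to 1."""
            sq_err = (sim - ref) ** 2                  # (time, cells)
            return float(np.sqrt((sq_err * weights).sum(axis=1).mean()))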

  3. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention to networked control, system decomposition and distributed models show significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
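
    The two offline steps map naturally onto standard tools. The sketch below (scikit-learn, with synthetic data; the recursive online update is omitted, and the variable names are illustrative) clusters controlled variables with affinity propagation and then ranks candidate inputs for each subsystem by canonical correlation:

        import numpy as np
        from sklearn.cluster import AffinityPropagation
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        Y = rng.normal(size=(500, 8))    # controlled variables (columns)
        X = rng.normal(size=(500, 20))   # candidate input variables

        # Step 1: partition controlled variables into subsystems by
        # clustering their correlation profiles with affinity propagation.
        ap = AffinityPropagation(random_state=0).fit(np.corrcoef(Y.T))
        subsystems = {k: np.where(ap.labels_ == k)[0]
                      for k in np.unique(ap.labels_)}

        # Step 2: for each subsystem, rank candidate inputs by canonical
        # correlation with that subsystem's controlled variables.
        for k, cols in subsystems.items():
            cca = CCA(n_components=1).fit(X, Y[:, cols])
            scores = np.abs(cca.x_weights_[:, 0])
            print(k, np.argsort(scores)[::-1][:5])   # top-5 input candidates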

  4. Large-scale hydrogen production using nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ryland, D.; Stolberg, L.; Kettner, A.; Gnanapragasam, N.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    For many years, Atomic Energy of Canada Limited (AECL) has been studying the feasibility of using nuclear reactors, such as the Supercritical Water-cooled Reactor, as an energy source for large scale hydrogen production processes such as High Temperature Steam Electrolysis and the Copper-Chlorine thermochemical cycle. Recent progress includes the augmentation of AECL's experimental capabilities by the construction of experimental systems to test high temperature steam electrolysis button cells at ambient pressure and temperatures up to 850 °C and CuCl/HCl electrolysis cells at pressures up to 7 bar and temperatures up to 100 °C. In parallel, detailed models of solid oxide electrolysis cells and the CuCl/HCl electrolysis cell are being refined and validated using experimental data. Process models are also under development to assess options for economic integration of these hydrogen production processes with nuclear reactors. Options for large-scale energy storage, including hydrogen storage, are also under study. (author)

  5. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based, high-performance computing method using the OpenACC application was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transportation between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.
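
    The computational core of such a solver is the finite-volume flux update. As a much-reduced stand-in for the paper's unstructured GPU scheme, the following Python sketch advances the 1D shallow water equations one step with a local Lax-Friedrichs (Rusanov) flux:

        import numpy as np

        g = 9.81

        def swe_step(h, hu, dx, dt):
            """One first-order finite-volume step for the 1D shallow water
            equations with a Rusanov flux (periodic boundaries)."""
            def flux(h, hu):
                u = hu / h
                return np.array([hu, hu * u + 0.5 * g * h * h])

            U = np.array([h, hu])
            F = flux(h, hu)
            Ur, Fr = np.roll(U, -1, axis=1), np.roll(F, -1, axis=1)
            # Local maximum wave speed at each face
            c = np.maximum(np.abs(hu / h) + np.sqrt(g * h),
                           np.abs(Ur[1] / Ur[0]) + np.sqrt(g * Ur[0]))
            F_face = 0.5 * (F + Fr) - 0.5 * c * (Ur - U)   # flux at i+1/2
            dF = F_face - np.roll(F_face, 1, axis=1)       # net cell flux
            Unew = U - dt / dx * dF
            return Unew[0], Unew[1]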

  6. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose the latent category learning (LCL) in large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use the latent semantic analysis with semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy by evaluating each category's discrimination. Finally, we propose the online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve the detection precision which outperforms previous results by a large margin and can be competitive with the supervised deformable part model 5.0 baseline on both data sets.

  7. Obtaining high-resolution stage forecasts by coupling large-scale hydrologic models with sensor data

    Science.gov (United States)

    Fries, K. J.; Kerkez, B.

    2017-12-01

    We investigate how "big" quantities of distributed sensor data can be coupled with a large-scale hydrologic model, in particular the National Water Model (NWM), to obtain hyper-resolution forecasts. The recent launch of the NWM provides a great example of how growing computational capacity is enabling a new generation of massive hydrologic models. While the NWM spans an unprecedented spatial extent, there remain many questions about how to improve forecasts at the street level, the resolution at which many stakeholders make critical decisions. Further, the NWM runs on supercomputers, so water managers who may have access to their own high-resolution measurements may not readily be able to assimilate them into the model. To that end, we ask the question: how can the advances of the large-scale NWM be coupled with new local observations to enable hyper-resolution hydrologic forecasts? A methodology is proposed whereby the flow forecasts of the NWM are directly mapped to high-resolution stream levels using Dynamical System Identification. We apply the methodology across a sensor network of 182 gages in Iowa. Of these sites, approximately one third have been shown to perform well in high-resolution flood forecasting when coupled with the outputs of the NWM. The quality of these forecasts is characterized using Principal Component Analysis and Random Forests to identify where the NWM may benefit from new sources of local observations. We also discuss how this approach can help municipalities identify where they should place low-cost sensors to most benefit from flood forecasts of the NWM.
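
    The abstract does not spell out the identification step, but its essence, mapping NWM flow forecasts to local stage, can be illustrated with a simple least-squares ARX-style fit (a hypothetical stand-in, not necessarily the authors' model structure):

        import numpy as np

        def fit_stage_model(q_nwm, stage, lag=2):
            """Least-squares fit of an ARX-style map from NWM flow to local
            stage: s_t = a1*s_{t-1} + ... + b0*q_t + ... + c. Inputs are
            1D numpy arrays sampled at the same times."""
            rows = []
            for t in range(lag, len(stage)):
                rows.append(np.r_[stage[t - lag:t][::-1],      # past stages
                                  q_nwm[t - lag:t + 1][::-1],  # flows incl. q_t
                                  1.0])                        # bias term
            A = np.array(rows)
            theta, *_ = np.linalg.lstsq(A, stage[lag:], rcond=None)
            return theta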

  8. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    Science.gov (United States)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

    Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to have substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight into the eddies responsible for the power generation is provided by the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms has been validated against the statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for the design of wind farm layouts and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.

  9. Cosmological streaming velocities and large-scale density maxima

    International Nuclear Information System (INIS)

    Peacock, J.A.; Lumsden, S.L.; Heavens, A.F.

    1987-01-01

    The statistical testing of models for galaxy formation against the observed peculiar velocities on 10-100 Mpc scales is considered. If it is assumed that observers are likely to be sited near maxima in the primordial field of density perturbations, then the observed filtered velocity field will be biased to low values by comparison with a point selected at random. This helps to explain how the peculiar velocities (relative to the microwave background) of the local supercluster and the Rubin-Ford shell can be so similar in magnitude. Using this assumption to predict peculiar velocities on two scales, we test models with large-scale damping (i.e. adiabatic perturbations). Allowed models have a damping length close to the Rubin-Ford scale and are mildly non-linear. Both purely baryonic universes and universes dominated by massive neutrinos can account for the observed velocities, provided 0.1 ≤ Ω ≤ 1. (author)

  10. Burnout of pulverized biomass particles in large scale boiler - Single particle model approach

    Energy Technology Data Exchange (ETDEWEB)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)

    2010-05-15

    Burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large scale utility boiler originally planned for coal. A simplified single-particle approach, where the particle combustion model is coupled with the one-dimensional equation of motion of the particle, is applied for the calculation of the burnout in the boiler. Because of its lower density and greater reactivity, biomass can reach complete burnout at much larger particle sizes than coal. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)
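
    The burnout part of such a single-particle model can be as simple as integrating a surface-area-controlled mass-loss law. The Python sketch below (illustrative rate constant and densities; drag and particle motion omitted) shows why lower density shortens the burnout time for a given particle size:

        import numpy as np

        def burnout_time(d0, rho, k_s, dt=1e-3):
            """Integrate dm/dt = -k_s * A(d) for a shrinking spherical char
            particle (surface-reaction control; illustrative constants)."""
            m0 = rho * np.pi / 6.0 * d0 ** 3
            m, t = m0, 0.0
            while m > 1e-4 * m0:
                d = (6.0 * m / (np.pi * rho)) ** (1.0 / 3.0)
                m -= k_s * np.pi * d ** 2 * dt   # mass flux ~ surface area
                t += dt
            return t

        # Lower density lets a biomass-like particle burn out much faster
        # than a coal-like particle of the same diameter:
        print(burnout_time(d0=1e-3, rho=500.0, k_s=0.05),    # biomass-like
              burnout_time(d0=1e-3, rho=1300.0, k_s=0.05))   # coal-like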

  11. Large-scale Intelligent Transporation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  12. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    International Nuclear Information System (INIS)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies toward large-scale flow simulations are assessed, including direct high-resolution simulation, and coarse-scale simulation based on auxiliary hydrodynamic models such as the single equivalent continuum and the dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs

  13. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  14. Large scale structure from the Higgs fields of the supersymmetric standard model

    International Nuclear Information System (INIS)

    Bastero-Gil, M.; Di Clemente, V.; King, S.F.

    2003-01-01

    We propose an alternative implementation of the curvaton mechanism for generating the curvature perturbations which does not rely on a late decaying scalar decoupled from inflation dynamics. In our mechanism the supersymmetric Higgs scalars are coupled to the inflaton in a hybrid inflation model, and this allows the conversion of the isocurvature perturbations of the Higgs fields to the observed curvature perturbations responsible for large scale structure to take place during reheating. We discuss an explicit model which realizes this mechanism in which the μ term in the Higgs superpotential is generated after inflation by the vacuum expectation value of a singlet field. The main prediction of the model is that the spectral index should deviate significantly from unity, |n-1| ∼ 0.1. We also expect relic isocurvature perturbations in neutralinos and baryons, but no significant departures from Gaussianity and no observable effects of gravity waves in the CMB spectrum

  15. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large volumes of climate simulation output that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  16. Large eddy simulation of transitional flow in an idealized stenotic blood vessel: evaluation of subgrid scale models.

    Science.gov (United States)

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H

    2014-07-01

    In the present study, we performed large eddy simulation (LES) of axisymmetric, and 75% stenosed, eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a 2nd order accurate ghost point immersed boundary method (IBM) (Mark, and Vanwachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries is used for simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of SGS models suggests that the classical constant coefficient Smagorinsky model gives best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the Open source field operation and manipulation (OpenFOAM) ("OpenFOAM," http://www.openfoam.org/) solver and the results are in line with those obtained with WenoHemo.
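
    For reference, the constant-coefficient Smagorinsky model named above computes the eddy viscosity algebraically from the resolved strain rate. A minimal sketch (uniform filter width assumed; the coefficient value is illustrative):

        import numpy as np

        def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
            """Constant-coefficient Smagorinsky eddy viscosity
            nu_t = (C_s * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij).
            grad_u: (..., 3, 3) resolved velocity-gradient tensor."""
            S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
            S_mag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', S, S))
            return (c_s * delta) ** 2 * S_mag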

  17. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  18. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif; Orakzai, Faisal Moeen; Abdelaziz, Ibrahim; Khayyat, Zuhair

    2017-01-01

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  19. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  20. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    Science.gov (United States)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large-scale monitoring of water resources. Besides these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts increase significantly as a result. As a matter of fact, a novel thematic science question that is to be investigated is whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational efforts, this model enables early warnings for large areas. Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model

  1. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    Science.gov (United States)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ^6Li in model spaces up to Nmax = 22 and to reveal the ^4He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
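
    The IR extrapolations referred to here follow the exponential form E(L) = E_inf + a·exp(-2·k_inf·L). A sketch of the fit with synthetic, illustrative energies (not data from the paper):

        import numpy as np
        from scipy.optimize import curve_fit

        def e_ir(L, e_inf, a, k_inf):
            """Infrared extrapolation form E(L) = E_inf + a*exp(-2*k_inf*L)."""
            return e_inf + a * np.exp(-2.0 * k_inf * L)

        # Synthetic energies at a fixed UV cutoff (illustrative numbers only).
        L = np.array([8.0, 10.0, 12.0, 14.0, 16.0])          # fm
        E = np.array([-31.2, -31.7, -31.9, -31.97, -31.99])  # MeV

        popt, _ = curve_fit(e_ir, L, E, p0=(-32.0, 10.0, 0.3))
        print("E_inf = %.2f MeV, k_inf = %.2f fm^-1" % (popt[0], popt[2]))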

  2. Evaluating neighborhood structures for modeling intercity diffusion of large-scale dengue epidemics.

    Science.gov (United States)

    Wen, Tzai-Hung; Hsu, Ching-Shun; Hu, Ming-Che

    2018-05-03

    Dengue fever is a vector-borne infectious disease that is transmitted by contact between vector mosquitoes and susceptible hosts. The literature has addressed the issue of quantifying the effect of individual mobility on dengue transmission. However, there are methodological concerns in the spatial regression model configuration for examining the effect of intercity-scale human mobility on dengue diffusion. The purposes of the study are to investigate the influence of neighborhood structures on intercity epidemic progression from pre-epidemic to epidemic periods and to compare definitions of different neighborhood structures for interpreting the spread of dengue epidemics. We proposed a framework for assessing the effect of model configurations on dengue incidence in 2014 and 2015, which were the most severe outbreaks in 70 years in Taiwan. Compared with the conventional model configuration in spatial regression analysis, our proposed model used a radiation model, which reflects population flow between townships, as a spatial weight to capture the structure of human mobility. The results of our model demonstrate better model fitting performance, indicating that the structure of human mobility has better explanatory power in dengue diffusion than the geometric structure of administration boundaries and geographic distance between centroids of cities. We also identified a spatial-temporal hierarchy of dengue diffusion: dengue incidence would be influenced by its immediate neighboring townships during pre-epidemic and epidemic periods, and also by more distant neighbors (based on mobility) in pre-epidemic periods. Our findings suggest that the structure of population mobility could more reasonably capture urban-to-urban interactions, which implies that the hub cities could be a "bridge" for large-scale transmission and make townships that immediately connect to hub cities more vulnerable to dengue epidemics.
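
    The radiation-model weight used here depends only on populations and the intervening population: the expected flow from i to j is T_ij = T_i·m_i·n_j / [(m_i + s_ij)(m_i + n_j + s_ij)]. A direct transcription, with hypothetical numbers:

        def radiation_flux(m_i, n_j, s_ij, T_i):
            """Expected flow from i to j under the radiation model, where
            s_ij is the population inside the circle of radius d(i, j)
            centred on i (excluding the populations of i and j)."""
            return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

        # Example: flow from one township to a candidate destination; such
        # fluxes, row-normalised, can serve as a spatial weight matrix.
        print(radiation_flux(m_i=50_000, n_j=120_000, s_ij=30_000, T_i=5_000))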

  3. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
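
    The modulating interaction in point (i) is commonly quantified by a single-point amplitude-modulation coefficient: the correlation between the large-scale signal and the low-pass-filtered envelope of the small scales. A sketch of one common variant (illustrative cutoff frequency; not necessarily the decomposition used in this paper):

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def am_coefficient(u, fs, f_cut):
            """Correlation between the large-scale velocity signal and the
            low-pass-filtered envelope of its small-scale part (a common
            single-point amplitude-modulation diagnostic)."""
            b, a = butter(4, f_cut / (fs / 2), btype='low')
            u = u - u.mean()
            u_large = filtfilt(b, a, u)          # large-scale component
            u_small = u - u_large                # small-scale residual
            envelope = np.abs(hilbert(u_small))  # small-scale envelope
            env_large = filtfilt(b, a, envelope - envelope.mean())
            return np.corrcoef(u_large, env_large)[0, 1]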

  4. Revisiting the EC/CMB model for extragalactic large scale jets

    Science.gov (United States)

    Lucchini, M.; Tavecchio, F.; Ghisellini, G.

    2017-04-01

    One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of flat-spectrum radio quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the cosmic microwave background (EC/CMB) as the mechanism responsible for the high-energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work, we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of γ-rays by suppressing the high-energy end of the emitting particle population. Furthermore, we show that cooling in the EC/CMB model predicts a new class of extended jets that are bright in X-rays but silent in the radio and optical bands. These jets are more likely to lie at intermediate redshifts and would have been missed in all previous X-ray surveys due to selection effects.

  5. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    Science.gov (United States)

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  6. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2014-01-01

    Full Text Available This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.

  7. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    Science.gov (United States)

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
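
    The genetic-algorithm step can be sketched compactly if the queuing objective is replaced by a simple nearest-server response-time proxy (synthetic coordinates and illustrative GA settings throughout; not the authors' formulation):

        import numpy as np

        rng = np.random.default_rng(0)
        demand_xy = rng.random((60, 2))      # rescue-demand locations
        candidate_xy = rng.random((15, 2))   # candidate server sites
        k = 4                                # number of servers to open

        def objective(sites):
            """Proxy for queuing response time: total distance from each
            demand point to its nearest opened server site."""
            d = np.linalg.norm(demand_xy[:, None] - candidate_xy[sites], axis=2)
            return d.min(axis=1).sum()

        pop = [rng.choice(len(candidate_xy), size=k, replace=False)
               for _ in range(30)]
        for _ in range(200):
            pop.sort(key=objective)
            parents, children = pop[:10], []
            while len(children) < 20:
                a, b = rng.choice(len(parents), size=2, replace=False)
                genes = np.union1d(parents[a], parents[b])     # crossover
                child = rng.choice(genes, size=k, replace=False)
                if rng.random() < 0.2:                         # mutation
                    child[rng.integers(k)] = rng.integers(len(candidate_xy))
                if len(np.unique(child)) == k:                 # keep valid sets
                    children.append(child)
            pop = parents + children
        best = min(pop, key=objective)
        print(best, objective(best))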

  8. Idealised modelling of storm surges in large-scale coastal basins

    NARCIS (Netherlands)

    Chen, Wenlong

    2015-01-01

    Coastal areas around the world are frequently attacked by various types of storms, threatening human life and property. This study aims to understand storm surge processes in large-scale coastal basins, particularly focusing on the influences of geometry, topography and storm characteristics on the

  9. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  10. The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models

    Science.gov (United States)

    Wanders, N.; Bierkens, M. F. P.; de Jong, S. M.; de Roo, A.; Karssenberg, D.

    2014-08-01

    Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system, in particular the unsaturated zone, remains uncalibrated. Soil moisture observations from satellites have the potential to fill this gap. Here we evaluate the added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: (1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? (2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to calibration based only on discharge observations, such that this leads to improved simulations of soil moisture content and discharge? A dual state and parameter Ensemble Kalman Filter is used to calibrate the hydrological model LISFLOOD for the Upper Danube. Calibration is done using discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS, and ASCAT. Calibration with discharge data improves the estimation of groundwater and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate identification of parameters related to land-surface processes. For the Upper Danube upstream area up to 40,000 km2, calibration on both discharge and soil moisture results in a reduction by 10-30% in the RMSE for discharge simulations, compared to calibration on discharge alone. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models, leading to a better simulation of soil moisture content throughout the catchment and a better simulation of discharge in upstream areas.
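
    The analysis step of a dual state-and-parameter Ensemble Kalman Filter can be written over an augmented ensemble holding both model states and parameters. A minimal perturbed-observation sketch (generic shapes and names; not the LISFLOOD implementation):

        import numpy as np

        def enkf_update(A, HA, y, r_var, rng):
            """Ensemble Kalman filter analysis for an augmented ensemble A
            (rows: states + parameters, columns: members). HA holds each
            member mapped to observation space (e.g. surface soil moisture);
            y is the observation vector, r_var the observation variance."""
            n_obs, n_ens = HA.shape
            Ap = A - A.mean(axis=1, keepdims=True)
            HAp = HA - HA.mean(axis=1, keepdims=True)
            P_ah = Ap @ HAp.T / (n_ens - 1)               # cross-covariance
            P_hh = HAp @ HAp.T / (n_ens - 1) + r_var * np.eye(n_obs)
            K = P_ah @ np.linalg.inv(P_hh)                # Kalman gain
            Y = y[:, None] + rng.normal(0, np.sqrt(r_var), (n_obs, n_ens))
            return A + K @ (Y - HA)                       # updated ensemble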

  11. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  12. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints or robust optim...
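
    When only the right-hand side of a constraint is Gaussian, a chance constraint has a linear deterministic equivalent, so the problem remains an LP. A sketch with illustrative numbers:

        import numpy as np
        from scipy.optimize import linprog
        from scipy.stats import norm

        # Minimise c @ x s.t. P(a @ x >= demand) >= 0.95, with Gaussian
        # demand ~ N(mu, sigma^2): the deterministic equivalent is linear,
        #   a @ x >= mu + z_{0.95} * sigma   (illustrative numbers throughout).
        c = np.array([3.0, 5.0])            # unit costs of two capacities
        a = np.array([1.0, 1.0])            # both contribute to supply
        mu, sigma = 100.0, 12.0
        rhs = mu + norm.ppf(0.95) * sigma

        res = linprog(c, A_ub=[-a], b_ub=[-rhs], bounds=[(0, None)] * 2)
        print(res.x, res.fun)               # cheapest 95%-reliable plan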

  13. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling of a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. The large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  14. Test of large-scale specimens and models as applied to NPP equipment materials

    International Nuclear Information System (INIS)

    Timofeev, B.T.; Karzov, G.P.

    1993-01-01

    The paper presents test results on low-cycle fatigue, crack growth rate, and fracture toughness of large-scale specimens and structures manufactured from steels widely applied in the power engineering industry and used for the production of NPP equipment with VVER-440 and VVER-1000 reactors. The obtained results are compared with available test results for standard specimens and with the calculation relations accepted in the "Calculation Norms on Strength." At the fatigue crack initiation stage, the experiments were performed on large-scale specimens of various geometries and configurations, which made it possible to determine the fracture initiation resistance of 15X2MFA steel under elastic-plastic deformation of a large material volume in homogeneous and inhomogeneous states. Besides the above-mentioned specimen tests under low-cycle loading, tests of models with nozzles were performed, and a good correlation of the results on the fatigue crack initiation criterion was obtained both with calculated data and with standard low-cycle fatigue tests. It was noted that on the Paris part of the fatigue fracture diagram, an increase in specimen thickness does not influence fatigue crack growth resistance in tests in air at either 20 or 350 degrees C. The comparability of the results obtained on specimens and models was also assessed for this stage of fracture. At the stage of unstable crack growth under static loading, the experiments were conducted on specimens of various thicknesses of 15X2MFA and 15X2NMFA steels and their welded joints, produced by submerged arc welding, in the as-produced state (the beginning of service) and after an embrittling heat treatment simulating neutron fluence attack (the end of service). The obtained results give evidence of the possibility of reliable prediction of brittle fracture of structural elements using fracture toughness test results on relatively small standard specimens. 35 refs., 23 figs

  15. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a specific algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were each solved with the algorithm running on the GPU to test the performance of the package. A comparison of the calculation results between the solver executed on a single CPU and the one on the GPU shows that the GPU version runs up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
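
    The semi-implicit Fourier method treats the stiff Laplacian implicitly in spectral space while keeping the nonlinear term explicit. A CPU-only numpy sketch of one Allen-Cahn step (illustrative parameters; the paper's CUDA implementation and long-range terms are omitted):

        import numpy as np

        def allen_cahn_step(u, dt, kappa=1.0, mobility=1.0):
            """One semi-implicit Fourier step for the Allen-Cahn equation
            du/dt = -M * (f'(u) - kappa * lap(u)), with f'(u) = u^3 - u:
            the stiff Laplacian is treated implicitly in Fourier space."""
            n = u.shape[0]
            k = 2.0 * np.pi * np.fft.fftfreq(n)
            k2 = k[:, None] ** 2 + k[None, :] ** 2
            u_hat = np.fft.fft2(u) - dt * mobility * np.fft.fft2(u ** 3 - u)
            return np.real(np.fft.ifft2(u_hat / (1.0 + dt * mobility * kappa * k2)))

        rng = np.random.default_rng(0)
        u = 0.1 * rng.standard_normal((128, 128))
        for _ in range(200):
            u = allen_cahn_step(u, dt=0.1)   # coarsening domains emerge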

  16. Large-Scale Mapping and Predictive Modeling of Submerged Aquatic Vegetation in a Shallow Eutrophic Lake

    Directory of Open Access Journals (Sweden)

    Karl E. Havens

    2002-01-01

    Full Text Available A spatially intensive sampling program was developed for mapping the submerged aquatic vegetation (SAV) over an area of approximately 20,000 ha in a large, shallow lake in Florida, U.S. The sampling program integrates Geographic Information System (GIS) technology with traditional field sampling of SAV and has the capability of producing robust vegetation maps under a wide range of conditions, including high turbidity, variable depth (0 to 2 m), and variable sediment types. Based on sampling carried out in August-September 2000, we measured 1,050 to 4,300 ha of vascular SAV species and approximately 14,000 ha of the macroalga Chara spp. The results were similar to those reported in the early 1990s, when the last large-scale SAV sampling occurred. Occurrence of Chara was strongly associated with peat sediments, and maximal depths of occurrence varied between sediment types (mud, sand, rock, and peat). A simple model of Chara occurrence, based only on water depth, had an accuracy of 55%. It predicted occurrence of Chara over large areas where the plant actually was not found. A model based on sediment type and depth had an accuracy of 75% and produced a spatial map very similar to that based on observations. While this approach needs to be validated with independent data in order to test its general utility, we believe it may have application elsewhere. The simple modeling approach could serve as a coarse-scale tool for evaluating effects of water level management on Chara populations.

  17. Large-scale mapping and predictive modeling of submerged aquatic vegetation in a shallow eutrophic lake.

    Science.gov (United States)

    Havens, Karl E; Harwell, Matthew C; Brady, Mark A; Sharfstein, Bruce; East, Therese L; Rodusky, Andrew J; Anson, Daniel; Maki, Ryan P

    2002-04-09

    A spatially intensive sampling program was developed for mapping the submerged aquatic vegetation (SAV) over an area of approximately 20,000 ha in a large, shallow lake in Florida, U.S. The sampling program integrates Geographic Information System (GIS) technology with traditional field sampling of SAV and has the capability of producing robust vegetation maps under a wide range of conditions, including high turbidity, variable depth (0 to 2 m), and variable sediment types. Based on sampling carried out in August-September 2000, we measured 1,050 to 4,300 ha of vascular SAV species and approximately 14,000 ha of the macroalga Chara spp. The results were similar to those reported in the early 1990s, when the last large-scale SAV sampling occurred. Occurrence of Chara was strongly associated with peat sediments, and maximal depths of occurrence varied between sediment types (mud, sand, rock, and peat). A simple model of Chara occurrence, based only on water depth, had an accuracy of 55%. It predicted occurrence of Chara over large areas where the plant actually was not found. A model based on sediment type and depth had an accuracy of 75% and produced a spatial map very similar to that based on observations. While this approach needs to be validated with independent data in order to test its general utility, we believe it may have application elsewhere. The simple modeling approach could serve as a coarse-scale tool for evaluating effects of water level management on Chara populations.
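
    A minimal sketch of the kind of coarse presence/absence rule described, assuming placeholder depth thresholds (the study's fitted values are not reproduced here): predict Chara wherever the water depth is below a sediment-specific maximum depth of occurrence, and score the rule against observations.

```python
# Depth-plus-sediment rule for Chara occurrence; thresholds are assumptions.
MAX_DEPTH_M = {"peat": 2.0, "mud": 1.2, "sand": 1.5, "rock": 1.0}

def predict_chara(depth_m: float, sediment: str) -> bool:
    """Predict presence where depth is below the sediment-specific maximum."""
    return depth_m <= MAX_DEPTH_M.get(sediment, 0.0)

def accuracy(samples) -> float:
    """samples: iterable of (depth_m, sediment, observed_presence) tuples."""
    hits = sum(predict_chara(d, s) == obs for d, s, obs in samples)
    return hits / len(samples)

# e.g. accuracy([(0.8, "peat", True), (1.8, "mud", False), ...])
```

    A depth-only rule corresponds to using a single threshold for all sediments, which is how the 55% vs. 75% accuracy comparison in the abstract can be reproduced on observation data.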

  18. TOPOLOGY OF A LARGE-SCALE STRUCTURE AS A TEST OF MODIFIED GRAVITY

    International Nuclear Information System (INIS)

    Wang Xin; Chen Xuelei; Park, Changbom

    2012-01-01

    The genus of the isodensity contours is a robust measure of the topology of large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the isodensity contours, an intrinsic measure of the topology of the large-scale structure, as a statistic to be used in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect that the genus-smoothing-scale relation is basically time independent. However, in some modified gravity models where structures grow with different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the cases of f(R) theory and DGP braneworld theory, as well as parameterized post-Friedmann models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21 cm radio surveys in the near future.
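
    A hedged sketch of how a genus-smoothing-scale statistic might be tabulated from a gridded density field: smooth, threshold, and take the Euler characteristic of the excursion set (for closed isodensity surfaces the genus follows from the Euler characteristic). It assumes scikit-image's measure.euler_number is available; the field, smoothing scale and thresholds are synthetic stand-ins, not the authors' pipeline.

```python
# Euler characteristic of isodensity excursion sets vs. threshold.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import euler_number

def genus_curve(density, smoothing_sigma, thresholds):
    """Euler characteristic of the excursion set at each density threshold."""
    smoothed = gaussian_filter(density, smoothing_sigma)
    # standardize so thresholds are in units of the field's rms
    nu = (smoothed - smoothed.mean()) / smoothed.std()
    return [euler_number(nu > t, connectivity=3) for t in thresholds]

rng = np.random.default_rng(1)
field = rng.standard_normal((64, 64, 64))   # stand-in for a density field
chi = genus_curve(field, smoothing_sigma=4.0, thresholds=np.linspace(-2, 2, 9))
```

    Repeating this at several smoothing scales (and redshifts) gives the genus-smoothing-scale relation whose time dependence the test exploits.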

  19. Zone modelling of the thermal performances of a large-scale bloom reheating furnace

    International Nuclear Information System (INIS)

    Tan, Chee-Keong; Jenkins, Joana; Ward, John; Broughton, Jonathan; Heeley, Andy

    2013-01-01

    This paper describes the development and comparison of two- (2D) and three-dimensional (3D) mathematical models, based on the zone method of radiation analysis, to simulate the thermal performance of a large bloom reheating furnace. The modelling approach adopted in the current paper differs from previous work in that it takes into account the net radiation interchanges between the top and bottom firing sections of the furnace and also allows for enthalpy exchange due to the flows of combustion products between these sections. The models were initially validated at two different furnace throughput rates using experimental data and plant model data supplied by Tata Steel. The results to date demonstrate that the model predictions are in good agreement with the measured heating profiles of the blooms in the actual furnace. No significant differences were found between the predictions of the 2D and 3D models. Following the validation, the 2D model was then used to assess the furnace response to a changing throughput rate. It was found that the furnace response to a changing throughput rate influences the settling time of the furnace to the next steady-state operation. Overall, the current work demonstrates the feasibility and practicality of zone modelling and its potential for incorporation into a model-based furnace control system. - Highlights: ► 2D and 3D zone models of a large-scale bloom reheating furnace. ► The models were validated with experimental and plant model data. ► The transient furnace response to changing throughput rates was examined. ► No significant differences were found between the predictions of the 2D and 3D models.
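
    A minimal sketch of the radiative bookkeeping underlying the zone method, under the simplifying assumption that the total exchange areas SS[i, j] between zones are already known: the net radiative gain of zone i is the sum over zones j of sigma * SS[i, j] * (T_j^4 - T_i^4). The exchange-area matrix and temperatures below are placeholders, not furnace data.

```python
# Zone-method net radiative interchange between isothermal zones.
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_gain(SS, T):
    """SS: symmetric total exchange areas (m^2); T: zone temperatures (K)."""
    T4 = T**4
    # Q_i = sigma * (sum_j SS_ij T_j^4 - T_i^4 * sum_j SS_ij)
    return SIGMA * (SS @ T4 - SS.sum(axis=1) * T4)

SS = np.array([[0.0, 2.0, 0.5],
               [2.0, 0.0, 1.0],
               [0.5, 1.0, 0.0]])          # assumed exchange areas
T = np.array([1600.0, 1400.0, 900.0])     # assumed zone temperatures
Q = net_radiative_gain(SS, T)             # net W gained by each zone
```

    In a full zone model these terms enter each zone's energy balance together with the enthalpy carried by combustion-product flows, which is the interchange between top and bottom firing sections that this paper adds.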

  20. Regional climate modeling: Should one attempt improving on the large scales? Lateral boundary condition scheme: Any impact?

    Energy Technology Data Exchange (ETDEWEB)

    Veljovic, Katarina; Rajkovic, Borivoj [Belgrade Univ. (RS). Inst. of Meteorology; Fennessy, Michael J.; Altshuler, Eric L. [Center for Ocean-Land-Atmosphere Studies, Calverton, MD (United States); Mesinger, Fedor [Maryland Univ., College Park (United States). Earth System Science Interdisciplinary Center; Serbian Academy of Science and Arts, Belgrade (RS)

    2010-06-15

    A considerable number of authors have presented experiments in which degradation of the large-scale circulation occurred in regional climate integrations when large-scale nudging was not used (e.g., von Storch et al., 2000; Biner et al., 2000; Rockel et al., 2008; Sanchez-Gomez et al., 2008; Alexandru et al., 2009; among others). We here show an earlier 9-member ensemble result for the June-August precipitation difference over the contiguous United States between the "flood year" of 1993 and the "drought year" of 1988, in which the Eta model nested in the COLA AGCM gave a rather accurate depiction of the analyzed difference, even though the driver AGCM failed to do so to the extent of having a minimum in the area where the maximum ought to be. It is suggested that this could hardly have been possible without the RCM improving on the large scales of the driver AGCM. We further revisit the issue by comparing the large-scale skill of the Eta RCM against that of a global ECMWF 32-day ensemble forecast used as its driver. Another issue we look into is that of the lateral boundary condition (LBC) scheme. The question we ask is whether the almost universally used but somewhat costly relaxation scheme is necessary for a desirable RCM performance. We address this by running the Eta in two versions differing in the lateral boundary scheme used. One of these is the traditional relaxation scheme and the other is the Eta model scheme, in which information is used at the outermost boundary only and not all variables are prescribed at the outflow boundary. The skills of these two sets of RCM forecasts are compared against each other and also against that of their driver. A novelty in our experiments is the verification used. In order to test the large-scale skill we look at the forecast position accuracy of the strongest winds at the jet stream level, which we have taken as 250 hPa. We do this by calculating bias adjusted

  1. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    Energy Technology Data Exchange (ETDEWEB)

    Vishnu, Abhinav; Agarwal, Khushbu

    2015-09-08

    In this paper, we propose a work-stealing runtime --- Library for Work Stealing (LibWS) --- using the MPI one-sided model for designing a scalable FP-Growth --- the de facto frequent pattern mining algorithm --- on large-scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.

  2. Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process

    DEFF Research Database (Denmark)

    Konttinen, Jukka T.; Johnsson, Jan Erik

    1999-01-01

    Regenerable mixed metal oxide sorbents are prime candidates for the removal of hydrogen sulfide from hot gasifier gas in the simplified integrated gasification combined cycle (IGCC) process. As part of the regenerative sulfur removal process development, reactor models are needed for scale-up: steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400 ... The reactor is described by a ... model that does not account for bed hydrodynamics. The pilot-scale test run results, obtained in test runs of the sulfur removal process with real coal gasifier gas, have been used for parameter estimation. The validity of the reactor model for commercial-scale design applications is discussed.

  3. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  4. No Large Scale Curvature Perturbations during Waterfall of Hybrid Inflation

    OpenAIRE

    Abolhasani, Ali Akbar; Firouzjahi, Hassan

    2010-01-01

    In this paper the possibility of generating large-scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large-scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions in terminating inflation. If one considers only the clas...

  5. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). These observational evidences have led us to a great deal of consensus on the cosmological model, the so-called LambdaCDM model, and to tight constraints on the cosmological parameters constituting it. On the other hand, the advancement of cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, quantified by its direction and amplitude. We measure a large dipolar modulation in the CMB, which mainly originates from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a large number of well-identified objects. In this thesis, we explore the measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated the requirements for detection of the kinematic dipole in future surveys.
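
    A small sketch of the basic estimator behind such tests, assuming synthetic pixel directions and counts: model the counts as monopole * (1 + d·n̂) and recover the dipole vector d by linear least squares; the modulation amplitude is |d| divided by the monopole.

```python
# Least-squares dipole fit to number counts in sky pixels.
import numpy as np

def fit_dipole(n_hat, counts):
    """n_hat: (npix, 3) unit vectors; counts: (npix,) number counts.
    Returns (monopole, dipole_vector)."""
    A = np.column_stack([np.ones(len(counts)), n_hat])  # design: [1, nx, ny, nz]
    coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
    return coef[0], coef[1:]

rng = np.random.default_rng(2)
v = rng.standard_normal((5000, 3))
n_hat = v / np.linalg.norm(v, axis=1, keepdims=True)    # random sky directions
true_d = np.array([0.0, 0.0, 0.05])                     # injected dipole
counts = 100.0 * (1.0 + n_hat @ true_d) + rng.standard_normal(5000)
mono, d = fit_dipole(n_hat, counts)
amplitude, direction = np.linalg.norm(d) / mono, d / np.linalg.norm(d)
```

    Real analyses additionally need masks, shot-noise weighting and survey systematics, which is why large sky coverage and well-identified objects matter.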

  6. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...

  7. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...
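
    A toy sketch of the discrete event-based modelling idea (not the ATLAS tool itself): events arrive into a buffer, a fixed pool of processing units drains it, and buffer occupancy is sampled over many small consecutive periods; all rates and sizes are arbitrary assumptions.

```python
# Minimal discrete-event simulation: arrivals, a buffer, and service units.
import heapq, random

def simulate(t_end=100.0, arrival_rate=900.0, service_time=0.01, units=8):
    random.seed(0)
    events = [(random.expovariate(arrival_rate), "arrival")]
    buffer_len, busy, samples, t_sample = 0, 0, [], 0.0
    while events:
        t, kind = heapq.heappop(events)
        if t > t_end:
            break
        while t_sample <= t:                 # sample occupancy every 0.1 time units
            samples.append(buffer_len)
            t_sample += 0.1
        if kind == "arrival":
            buffer_len += 1
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
        else:                                # a processing unit finished
            busy -= 1
        while busy < units and buffer_len > 0:   # idle units pull queued work
            buffer_len -= 1
            busy += 1
            heapq.heappush(events, (t + service_time, "done"))
    return samples

occupancy = simulate()   # time series of buffer occupancy over small periods
```

    Comparing such simulated occupancy and utilization series against measured ones over consecutive short windows is exactly the kind of validation the paper describes, at vastly larger scale.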

  8. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom

    2017-11-22

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  9. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom; Femiani, John; Wonka, Peter; Mitra, Niloy J.

    2017-01-01

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.
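
    A hedged sketch of the flavour of binary integer program described, using PuLP with the CBC solver: each element picks exactly one candidate hypothesis, and the objective globally minimizes the summed fitting error across data sources. The elements, hypotheses and costs are invented for illustration; the paper's actual variables and consistency constraints are far richer.

```python
# Tiny binary integer program: pick one hypothesis per element, minimal error.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

costs = {  # error of assigning hypothesis j to element i (assumed numbers)
    ("wall_0", "flat"): 2.0, ("wall_0", "setback"): 0.5,
    ("roof_0", "gable"): 0.8, ("roof_0", "hip"): 1.1,
}
elements = {i for i, _ in costs}

prob = LpProblem("fuse", LpMinimize)
x = {ij: LpVariable(f"x_{ij[0]}_{ij[1]}", cat="Binary") for ij in costs}
prob += lpSum(costs[ij] * x[ij] for ij in costs)        # global error to balance
for i in elements:                                      # exactly one hypothesis each
    prob += lpSum(x[ij] for ij in costs if ij[0] == i) == 1
prob.solve(PULP_CBC_CMD(msg=False))
chosen = [ij for ij in costs if x[ij].value() == 1]
```

    The appeal of the ILP formulation is that errors from heterogeneous sources (meshes, imagery, GIS footprints) can be weighed in one objective rather than reconciled heuristically.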

  10. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    Science.gov (United States)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation will give an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for (i) Climate change impact assessments on water resources and dynamics; (ii) The European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) Design variables for infrastructure constructions; (iv) Spatial water-resource mapping; (v) Operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) Input to oceanographic models for operational forecasts and marine status assessments; (vii) Research. The following regional domains have been modelled so far with different resolutions (number of subbasins within brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The Hype web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover the user can explore and download historical time series of discharge for each basin and explore the performance of the model

  11. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    Science.gov (United States)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to capture the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used for convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model using spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference, and we perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results by comparison among these simulations, and the role of the small-scale fields in the large-scale dynamics through the SGS terms in the LES.
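
    A compact sketch of the scale-similarity construction behind the four SGS terms, on a periodic Cartesian grid with SciPy's Gaussian filter standing in for the spherical-harmonic filtering used in Calypso: apply a test filter to the resolved fields and form, e.g., the momentum-flux term as tau_ij = filt(u_i u_j) - filt(u_i) filt(u_j); the heat-flux, induction and Lorentz terms are built analogously from (u, T) and (u, B) pairs.

```python
# Scale-similarity SGS flux from test-filtered resolved fields.
import numpy as np
from scipy.ndimage import gaussian_filter

def similarity_flux(fi, fj, sigma=2.0):
    """Scale-similarity SGS flux for one component pair on a periodic grid."""
    filt = lambda f: gaussian_filter(f, sigma, mode="wrap")  # test filter
    return filt(fi * fj) - filt(fi) * filt(fj)

rng = np.random.default_rng(3)
ux, uz = rng.standard_normal((2, 64, 64))    # stand-ins for resolved velocity
tau_xz = similarity_flux(ux, uz)             # momentum-flux component
# heat flux: similarity_flux(ux, T); induction/Lorentz: pairs with B components
```

    In the dynamic variant, the model coefficient multiplying each such term is determined on the fly by filtering at two scales rather than being fixed a priori.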

  12. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques are described for designing measurement schemes for large-scale linear time-invariant systems, i.e., physical systems modeled by a large number (>5) of ordinary differential equations. The techniques are based on transforming the physical system model to a coordinate system that facilitates the design and then transforming back to the original coordinates. An example of a three-stage, four-species extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed for the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  13. Comparison of void strengthening in fcc and bcc metals: Large-scale atomic-level modelling

    International Nuclear Information System (INIS)

    Osetsky, Yu.N.; Bacon, D.J.

    2005-01-01

    Strengthening due to voids can be a significant radiation effect in metals. Treating it with the elasticity theory of dislocations is difficult when the atomic structure of the obstacle and the dislocation is influential. In this paper, we report results of large-scale atomic-level modelling of edge dislocation-void interaction in fcc (copper) and bcc (iron) metals. Voids of up to 5 nm diameter were studied over the temperature range from 0 to 600 K. We demonstrate that atomistic modelling is able to reveal important effects that are beyond the continuum approach. Some arise from features of the dislocation core and crystal structure; others involve dislocation climb and temperature effects

  14. Cross-scale intercomparison of climate change impacts simulated by regional and global hydrological models in eleven large river basins

    Energy Technology Data Exchange (ETDEWEB)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Flörke, M.; Huang, S.; Motovilov, Y.; Buda, S.; Yang, T.; Müller, C.; Leng, G.; Tang, Q.; Portmann, F. T.; Hagemann, S.; Gerten, D.; Wada, Y.; Masaki, Y.; Alemayehu, T.; Satoh, Y.; Samaniego, L.

    2017-01-04

    Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity to climate variability and change of impact models designed for either scale is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HMs) for 11 large river basins on all continents under reference and scenario conditions. The foci are on model validation runs, the sensitivity of annual discharge to climate variability in the reference period, and the sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas the regional models reproduce reference conditions much better. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics, evaluated for HM ensemble medians and spreads, show that the medians are to a certain extent comparable in some cases, with distinct differences in others, and that the spreads related to the global models are mostly notably larger. In summary, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, regional-scale models validated against observed discharge should be used.

  15. The Effect of Large Scale Salinity Gradient on Langmuir Turbulence

    Science.gov (United States)

    Fan, Y.; Jarosz, E.; Yu, Z.; Jensen, T.; Sullivan, P. P.; Liang, J.

    2017-12-01

    Langmuir circulation (LC) is believed to be one of the leading-order causes of turbulent mixing in the upper ocean. It is important for momentum and heat exchange across the mixed layer (ML) and directly impacts the dynamics and thermodynamics of the upper ocean and lower atmosphere, including the vertical distributions of chemical, biological, optical, and acoustic properties. Based on the Craik and Leibovich (1976) theory, large eddy simulation (LES) models have been developed to simulate LC in the upper ocean, yielding new insights that could not be obtained from field observations and turbulence closure models. Due to their high computational cost, LES models are usually limited to small domain sizes and cannot resolve large-scale flows. Furthermore, most LES models used in LC simulations apply periodic boundary conditions in the horizontal directions, which assume that the physical properties (i.e., temperature and salinity) and expected flow patterns in the area of interest are of a periodically repeating nature, so that the limited small LES domain is representative of the larger area. Using periodic boundary conditions can significantly reduce the computational effort, and it is a good assumption for isotropic shear turbulence. However, LC is anisotropic (McWilliams et al. 1997) and has been observed to be modulated by crosswind tidal currents (Kukulka et al. 2011). Using symmetrical domains, idealized LES studies also indicate that LC can interact with oceanic fronts (Hamlington et al. 2014) and standing internal waves (Chini and Leibovich, 2005). The present study expands our previous LES modeling investigations of Langmuir turbulence to real ocean conditions with large-scale environmental motion, featuring fresh water inflow into the study region. Large-scale gradient forcing is introduced into the NCAR LES model through scale separation analysis. The model is applied to a field observation in the Gulf of Mexico in July 2016, when the measurement site was impacted by

  16. Large-scale structure in the universe: Theory vs observations

    International Nuclear Information System (INIS)

    Kashlinsky, A.; Jones, B.J.T.

    1990-01-01

    A variety of observations constrain models of the origin of large scale cosmic structures. We review here the elements of current theories and comment in detail on which of the current observational data provide the principal constraints. We point out that enough observational data have accumulated to constrain (and perhaps determine) the power spectrum of primordial density fluctuations over a very large range of scales. We discuss the theories in the light of observational data and focus on the potential of future observations in providing even (and ever) tighter constraints. (orig.)

  17. A DATA-DRIVEN ANALYTIC MODEL FOR PROTON ACCELERATION BY LARGE-SCALE SOLAR CORONAL SHOCKS

    Energy Technology Data Exchange (ETDEWEB)

    Kozarev, Kamen A. [Smithsonian Astrophysical Observatory (United States); Schwadron, Nathan A. [Institute for the Study of Earth, Oceans, and Space, University of New Hampshire (United States)

    2016-11-10

    We have recently studied the development of an eruptive filament-driven, large-scale off-limb coronal bright front (OCBF) in the low solar corona, using remote observations from the Solar Dynamics Observatory's Atmospheric Imaging Assembly EUV telescopes. In that study, we obtained high-temporal-resolution estimates of the OCBF parameters regulating the efficiency of charged particle acceleration within the theoretical framework of diffusive shock acceleration (DSA). These parameters include the time-dependent front size, speed, and strength, as well as the upstream coronal magnetic field orientations with respect to the front's surface normal direction. Here we present an analytical particle acceleration model, specifically developed to incorporate the coronal shock/compressive front properties described above, derived from remote observations. We verify the model's performance through a grid of idealized case runs using input parameters typical for large-scale coronal shocks, and demonstrate that the results approach the expected DSA steady-state behavior. We then apply the model to the event of 2011 May 11 using the OCBF time-dependent parameters derived by Kozarev et al. We find that the compressive front likely produced energetic particles as low as 1.3 solar radii in the corona. Comparing the modeled and observed fluences near Earth, we also find that the bulk of the acceleration during this event must have occurred above 1.5 solar radii. With this study we have taken a first step in using direct observations of shocks and compressions in the innermost corona to predict the onsets and intensities of solar energetic particle events.
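
    A small worked example of the steady-state DSA behaviour the model is verified against: in test-particle DSA, a shock with compression ratio r yields a momentum power law f(p) ∝ p^(-q) with spectral index q = 3r/(r - 1), so a strong r = 4 shock gives q = 4.

```python
# Steady-state diffusive shock acceleration spectral index vs. compression ratio.
def dsa_spectral_index(r: float) -> float:
    """q in f(p) ~ p^-q for a shock of compression ratio r (test-particle DSA)."""
    if r <= 1.0:
        raise ValueError("compression ratio must exceed 1")
    return 3.0 * r / (r - 1.0)

for r in (1.5, 2.5, 4.0):
    print(f"r = {r}: q = {dsa_spectral_index(r):.2f}")   # q -> 4 as r -> 4
```

    Weak early-stage fronts (small r) thus give steep spectra, which is why the time-dependent front strength derived from the EUV observations is a key model input.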

  18. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [The University of Texas at Austin

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  19. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Full Text Available Abstract. Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally identified, isochore-like regions were identified as the biological source of the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
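
    A compact sketch of the Detrended Fluctuation Analysis whose local scaling exponent the authors track along the sequence, assuming a numeric (e.g. GC-level) signal as input: integrate the mean-removed signal, detrend it linearly in windows of size s, and fit the slope of log F(s) versus log s. The input below is synthetic white noise, for which the exponent should come out near 0.5.

```python
# Detrended Fluctuation Analysis: global scaling exponent of a 1-D signal.
import numpy as np

def dfa_exponent(x, scales):
    y = np.cumsum(x - np.mean(x))               # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                        # linear detrend per window
            a, b = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - (a * t + b)) ** 2))
        F.append(np.sqrt(np.mean(rms)))         # fluctuation at scale s
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(4)
alpha = dfa_exponent(rng.standard_normal(2**14), scales=[16, 32, 64, 128, 256])
```

    Computing the exponent in sliding windows rather than globally yields the local variations used in the paper to uncover characteristic scales and isochore-like regions.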

  20. Assessing Effects of Joining Common Currency Area with Large-Scale DSGE model: A Case of Poland

    OpenAIRE

    Maciej Bukowski; Sebastian Dyrda; Paweł Kowal

    2008-01-01

    In this paper we present a large-scale dynamic stochastic general equilibrium model for analyzing and simulating the effects of Euro introduction in Poland. The presented framework is based on a two-country open economy model, where the foreign economy acts as the Eurozone and the home economy as a candidate country. We have implemented various types of structural frictions in the open economy block that generate empirically observable deviations from the purchasing power parity rule. We consider such mechanisms as a d...

  1. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  2. Event-triggered decentralized robust model predictive control for constrained large-scale interconnected systems

    Directory of Open Access Journals (Sweden)

    Ling Lu

    2016-12-01

    Full Text Available This paper considers the problem of event-triggered decentralized model predictive control (MPC) for constrained large-scale linear systems subject to additive bounded disturbances. The constraint tightening method is utilized to formulate the MPC optimization problem. The local predictive control law for each subsystem is determined aperiodically by a relevant triggering rule, which allows a considerable reduction of the computational load. Robust feasibility and closed-loop stability are then proved, and it is shown that every subsystem state will be driven into a robust invariant set. Finally, the effectiveness of the proposed approach is illustrated via numerical simulations.
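
    A hedged sketch of the event-triggering idea for MPC (a single centralized subsystem without constraint tightening, so deliberately simpler than the paper's decentralized scheme), assuming cvxpy is available and with all system matrices and thresholds invented: the finite-horizon problem is re-solved only when the measured state drifts from the last predicted trajectory by more than a threshold, otherwise the stored plan is replayed.

```python
# Event-triggered MPC loop: re-optimize only on sufficient state deviation.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed double-integrator dynamics
B = np.array([[0.005], [0.1]])
T, umax, threshold = 10, 1.0, 0.05

def solve_mpc(x0):
    x = cp.Variable((2, T + 1))
    u = cp.Variable((1, T))
    cost, cons = 0, [x[:, 0] == x0]
    for t in range(T):
        cost += cp.sum_squares(x[:, t]) + 0.1 * cp.sum_squares(u[:, t])
        cons += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                 cp.abs(u[:, t]) <= umax]          # input constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value, x.value

rng = np.random.default_rng(7)
x = np.array([1.0, 0.0])
plan_u, plan_x, k = *solve_mpc(x), 0
for step in range(50):
    if k >= T or np.linalg.norm(x - plan_x[:, k]) > threshold:  # trigger rule
        plan_u, plan_x = solve_mpc(x)                           # re-solve
        k = 0
    u = plan_u[:, k]
    x = A @ x + B @ u + 0.01 * rng.standard_normal(2)   # bounded disturbance
    k += 1
```

    The computational saving comes from the solver being called only at triggered steps; the paper's scheme additionally tightens constraints so that replayed plans remain robustly feasible between triggers.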

  3. Large-Scale Flows and Magnetic Fields Produced by Rotating Convection in a Quasi-Geostrophic Model of Planetary Cores

    Science.gov (United States)

    Guervilly, C.; Cardin, P.

    2017-12-01

    Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ~ 10^4 and Ro ~ 10^-4 for Prandtl numbers relevant for liquid metals (Pr ~ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.

  4. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: the administrator carries a heavy workload, and much time must be spent on the management and maintenance of the system. The nodes of a large-scale cluster system easily fall into disorder; with thousands of nodes placed in big machine rooms, managers can easily confuse machines. How can accurate management be carried out effectively on a large-scale cluster system? This article introduces ELFms for the large-scale cluster system, and furthermore proposes how to realize automatic management of such a system. (authors)

  5. The Nature of Global Large-scale Sea Level Variability in Relation to Atmospheric Forcing: A Modeling Study

    Science.gov (United States)

    Fukumori, I.; Raghunath, R.; Fu, L. L.

    1996-01-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, from periods of days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.

  6. Coupling a basin erosion and river sediment transport model into a large scale hydrological model: an application in the Amazon basin

    Science.gov (United States)

    Buarque, D. C.; Collischonn, W.; Paiva, R. C. D.

    2012-04-01

    This study presents the first application and preliminary results of the large-scale hydrodynamic/hydrological model MGB-IPH with a new module to predict the spatial distribution of basin erosion and river sediment transport at a daily time step. MGB-IPH is a large-scale, distributed and process-based hydrological model that uses a catchment-based discretization and the Hydrological Response Units (HRU) approach. It uses physically based equations to simulate the hydrological processes, such as the Penman-Monteith model for evapotranspiration, and uses the Muskingum-Cunge approach and a full 1D hydrodynamic model for river routing, including backwater effects and seasonal flooding. The sediment module of the MGB-IPH model is divided into two components: 1) prediction of erosion over the basin and sediment yield to the river network; 2) sediment transport along the river channels. Both MGB-IPH and the sediment module use GIS tools to display relevant maps and to extract parameters from the SRTM DEM (a 15" resolution was adopted). Using the catchment discretization, the sediment module applies the Modified Universal Soil Loss Equation to predict soil loss from each HRU, considering three sediment classes defined according to the soil texture: sand, silt and clay. The effects of topography on soil erosion are estimated by a two-dimensional slope length (LS) factor, using the contributing-area approach, and a local slope steepness (S) factor, both estimated for each DEM pixel using GIS algorithms. The amount of sediment released to the catchment river reach each day is calculated using a linear reservoir. Once the sediments reach the river, they are transported along the river channel using an advection equation for silt and clay and a sediment continuity equation for sand. A sediment balance based on the Yang sediment transport capacity, allowing computation of the amount of erosion and deposition along the rivers, is performed for sand particles as bed load, whilst no
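
    A hedged sketch of the hillslope erosion step, written as the Modified Universal Soil Loss Equation (MUSLE, Williams 1975) is commonly stated, with the runoff-energy term replacing rainfall erosivity; the factor values below are placeholders, and MGB-IPH's own parameterization may differ in detail.

```python
# MUSLE sediment yield for one event/day on one hydrological response unit.
def musle_sediment_yield(q_surf_mm, q_peak_m3s, area_ha,
                         K, C, P, LS, frag_coarse=1.0):
    """Sediment yield (metric tons): 11.8 * (Q*qp*A)^0.56 * K*C*P*LS*CFRG,
    with surface runoff in mm, peak rate in m^3/s, and HRU area in ha."""
    return (11.8 * (q_surf_mm * q_peak_m3s * area_ha) ** 0.56
            * K * C * P * LS * frag_coarse)

# illustrative numbers only
yield_t = musle_sediment_yield(q_surf_mm=12.0, q_peak_m3s=0.8, area_ha=150.0,
                               K=0.25, C=0.1, P=1.0, LS=1.4)
```

    Summed per sediment class (sand, silt, clay) and routed through the linear reservoir, this yield is what feeds the river-channel transport component described above.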

  7. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    Directory of Open Access Journals (Sweden)

    Manuel Perez Malumbres

    2013-02-01

    Full Text Available In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the deviation of the received signal strength from the nominal value predicted by a deterministic propagation model. To facilitate large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation to the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law, and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.).
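
    A minimal sketch of drawing transmission-loss samples from the statistical model described: a log-distance mean in dB plus a zero-mean Gaussian deviation in dB (i.e., a log-normal loss) with constant variance. The reference loss, path-loss exponent and standard deviation below are illustrative, not the fitted values.

```python
# Log-normal large-scale transmission loss: log-distance mean + Gaussian dB term.
import numpy as np

def sample_transmission_loss(d_m, d0_m=100.0, tl0_db=60.0, k=1.7,
                             sigma_db=4.0, rng=None):
    """Random TL (dB) at range d_m: TL0 + 10*k*log10(d/d0) + N(0, sigma^2)."""
    rng = rng or np.random.default_rng()
    mean_db = tl0_db + 10.0 * k * np.log10(d_m / d0_m)
    return mean_db + sigma_db * rng.standard_normal(np.shape(d_m))

rng = np.random.default_rng(5)
tl = sample_transmission_loss(np.array([200.0, 500.0, 1000.0]), rng=rng)
```

    Network-level simulators can then draw per-link losses from this model to evaluate power and bandwidth allocation policies without re-running the ray tracer.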

  8. Approximate symmetries in atomic nuclei from a large-scale shell-model perspective

    Science.gov (United States)

    Launey, K. D.; Draayer, J. P.; Dytrych, T.; Sun, G.-H.; Dong, S.-H.

    2015-05-01

    In this paper, we review recent developments that aim to achieve further understanding of the structure of atomic nuclei, by capitalizing on exact symmetries as well as approximate symmetries found to dominate low-lying nuclear states. The findings confirm the essential role played by the Sp(3, ℝ) symplectic symmetry to inform the interaction and the relevant model spaces in nuclear modeling. The significance of the Sp(3, ℝ) symmetry for a description of a quantum system of strongly interacting particles naturally emerges from the physical relevance of its generators, which directly relate to particle momentum and position coordinates, and represent important observables, such as, the many-particle kinetic energy, the monopole operator, the quadrupole moment and the angular momentum. We show that it is imperative that shell-model spaces be expanded well beyond the current limits to accommodate particle excitations that appear critical to enhanced collectivity in heavier systems and to highly-deformed spatial structures, exemplified by the second 0+ state in 12C (the challenging Hoyle state) and 8Be. While such states are presently inaccessible by large-scale no-core shell models, symmetry-based considerations are found to be essential.

  9. Application of Large-Scale, Multi-Resolution Watershed Modeling Framework Using the Hydrologic and Water Quality System (HAWQS)

    Directory of Open Access Journals (Sweden)

    Haw Yen

    2016-04-01

    Full Text Available In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources allocation, sediment transport, and pollution control. Among commonly adopted models, the Soil and Water Assessment Tool (SWAT) has been demonstrated to provide superior performance with a large amount of referencing databases. However, it is cumbersome to perform tedious initialization steps such as preparing inputs and developing a model with each changing targeted study area. In this study, the Hydrologic and Water Quality System (HAWQS) is introduced to serve as a national-scale Decision Support System (DSS) to conduct challenging watershed modeling tasks. HAWQS is a web-based DSS developed and maintained by Texas A&M University, and supported by the U.S. Environmental Protection Agency. Three different spatial resolutions of Hydrologic Unit Code (HUC8, HUC10, and HUC12) and three temporal scales (daily, monthly, and annual time steps) are available as alternatives for general users. In addition, users can specify preferred values of model parameters instead of using the pre-defined sets. With the aid of HAWQS, users can generate a preliminarily calibrated SWAT project within a few minutes by only providing the ending HUC number of the targeted watershed and the simulation period. In the case study, HAWQS was implemented on the Illinois River Basin, USA, with graphical demonstrations and associated analytical results. Scientists and/or decision-makers can take advantage of the HAWQS framework when studying relevant topics or policies in the future.

  10. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan

    2011-10-10

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n^3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.

  11. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan; Huang, Jianhua Z.

    2011-01-01

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n^3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
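
    A small sketch of the two-part construction on a 1-D domain, assuming an exponential covariance and a spherical taper as concrete choices: a reduced-rank (knot-based) part captures the large-scale dependence, and the residual covariance is multiplied by a compactly supported taper to capture the small-scale part while keeping the matrix sparse in practice.

```python
# Full-scale covariance approximation: reduced-rank part + tapered residual.
import numpy as np

def expcov(x1, x2, range_=0.3):
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-d / range_)

def taper(x1, x2, gamma=0.15):
    """Spherical taper with compact support gamma (zero beyond gamma)."""
    d = np.abs(x1[:, None] - x2[None, :])
    t = (1 - d / gamma) ** 2 * (1 + d / (2 * gamma))
    return np.where(d < gamma, t, 0.0)

def full_scale_cov(x, knots):
    C_sk = expcov(x, knots)
    C_kk_inv = np.linalg.inv(expcov(knots, knots))
    reduced = C_sk @ C_kk_inv @ C_sk.T       # large-scale, low-rank part
    residual = expcov(x, x) - reduced        # small-scale remainder
    return reduced + residual * taper(x, x)  # tapered residual is sparse

x = np.linspace(0, 1, 500)
knots = np.linspace(0, 1, 15)
C_approx = full_scale_cov(x, knots)
```

    In a production implementation the dense residual is never formed; only the entries within the taper's support are computed and stored in a sparse matrix, which is what enables the efficient likelihood and prediction computations.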

  12. Large-scale motions in the universe: a review

    International Nuclear Information System (INIS)

    Burstein, D.

    1990-01-01

    The expansion of the universe can be retarded in localised regions within the universe both by the presence of gravity and by non-gravitational motions generated in the post-recombination universe. The motions of galaxies thus generated are called 'peculiar motions', and the amplitudes, size scales and coherence of these peculiar motions are among the most direct records of the structure of the universe. As such, measurements of these properties of the present-day universe provide some of the severest tests of cosmological theories. This is a review of the current evidence for large-scale motions of galaxies out to a distance of ~5000 km s^-1 (in an expanding universe, distance is proportional to radial velocity). 'Large-scale' in this context refers to motions that are correlated over size scales larger than the typical sizes of groups of galaxies, up to and including the size of the volume surveyed. To orient the reader into this relatively new field of study, a short modern history is given together with an explanation of the terminology. Careful consideration is given to the data used to measure the distances, and hence the peculiar motions, of galaxies. The evidence for large-scale motions is presented in a graphical fashion, using only the most reliable data for galaxies spanning a wide range in optical properties and over the complete range of galactic environments. The kinds of systematic errors that can affect this analysis are discussed, and the reliability of these motions is assessed. The predictions of two models of large-scale motion are compared to the observations, and special emphasis is placed on those motions in which our own Galaxy directly partakes. (author)

  13. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large-scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact the control of the power system balance and thus cause deviations in the power system frequency in small or islanded power systems, or in tie-line power flows in interconnected power systems. Therefore, the large-scale integration of wind power into the power system strongly concerns secure and stable grid operation. To ensure stable power system operation, the evolving power system has to be analysed with improved analytical tools and techniques. This paper proposes techniques for active power balance control in future power systems with large-scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with a reduced planning horizon and the real-time...

  14. Cold dark matter confronts the cosmic microwave background - Large-angular-scale anisotropies in Omega_0 + Lambda = 1 models

    Science.gov (United States)

    Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola

    1992-01-01

    A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Omega_0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.

  15. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
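
    A minimal sketch of the Poisson binomial N-mixture likelihood whose identifiability is being screened, with simulated counts y[i, j] for site i and visit j: the latent abundance N_i ~ Poisson(lambda) is summed out up to a truncation bound, and (lambda, p) are found by maximizing the likelihood numerically.

```python
# Maximum-likelihood fit of a Poisson binomial N-mixture model to count data.
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

def nll(params, y, n_max=200):
    """Negative log-likelihood; params are (log lambda, logit p)."""
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    N = np.arange(n_max + 1)
    prior = poisson.pmf(N, lam)                          # P(N)
    ll = 0.0
    for site in y:   # sum over N of P(N) * prod_j Binomial(y_j | N, p)
        like = prior * np.prod(binom.pmf(site[:, None], N[None, :], p), axis=0)
        ll += np.log(like.sum())
    return -ll

rng = np.random.default_rng(6)
N_true = rng.poisson(5.0, size=100)                      # latent abundances
y = rng.binomial(N_true[:, None], 0.4, size=(100, 3))    # counts at 3 visits
fit = minimize(nll, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1]))
```

    Checking whether such fits return stable, finite estimates (e.g. under data resampling or different starting values) is the kind of identifiability screening the paper performs across its 137 data sets, with the negative-binomial mixture being the problematic case.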

  16. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issues

  17. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  18. One-scale supersymmetric inflationary models

    International Nuclear Information System (INIS)

    Bertolami, O.; Ross, G.G.

    1986-01-01

    The reheating phase is studied in a class of supergravity inflationary models involving a two-component hidden sector in which the scale of supersymmetry breaking and the scale generating inflation are related. It is shown that these models suffer from an ''entropy crisis'', in which a large entropy release after nucleosynthesis leads to unacceptably low nuclear abundances. (orig.)

  19. LARGE SCALE DISTRIBUTED PARAMETER MODEL OF MAIN MAGNET SYSTEM AND FREQUENCY DECOMPOSITION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    ZHANG,W.; MARNERIS, I.; SANDBERG, J.

    2007-06-25

    A large accelerator main magnet system consists of hundreds, even thousands, of dipole magnets. They are linked together in selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and coupling inductance of the upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very high order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive or resistive elements is used to model an actual system. The circuit is a large-scale network whose equivalent polynomial form has a degree of several hundred. Analysis of this high-order circuit and simulation of the response of any or all components is often computationally infeasible. We present a frequency decomposition approach to effectively simulate and analyze magnet configurations and power supply topologies.
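
    As a rough sketch of the kind of circuit involved, the fragment below folds a lossy LC ladder from its open far end to obtain the driving-point impedance across frequency; peaks and dips in |Z(f)| mark the resonances that make brute-force time-domain simulation expensive. All component values and the one-cell topology are hypothetical stand-ins, and the paper's frequency-decomposition method itself is not reproduced.

    import numpy as np

    # Hypothetical per-cell values standing in for one dipole magnet
    # (the real network also includes insulation resistance and pancake coupling):
    L_CELL, C_CELL, R_CELL = 5e-3, 2e-7, 0.05   # H, F, ohm
    N_CELLS = 200                               # magnets chained in series

    def input_impedance(freq_hz, n=N_CELLS):
        """Fold the ladder from the open-circuited far end toward the source:
        each cell contributes a series R + jwL branch and a shunt C branch."""
        w = 2.0 * np.pi * freq_hz
        y_load = 0j                             # open far end: zero admittance
        z = 0j
        for _ in range(n):
            z = (R_CELL + 1j * w * L_CELL) + 1.0 / (1j * w * C_CELL + y_load)
            y_load = 1.0 / z
        return z

    freqs = np.logspace(0, 5, 400)              # 1 Hz .. 100 kHz
    mag = np.array([abs(input_impedance(f)) for f in freqs])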

  20. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales already observed in other works depends on the crosswise location: large-scale positive fluctuations correlate with stronger small-scale activity on the low-speed side of the mixing layer, and with reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has additionally been investigated.

  1. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    Science.gov (United States)

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated; ADM1 serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using a probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and an 'informative' experiment, both heuristically designed. The model structure of ADM1 has been modified by replacing parameters with parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained by the large number of interconnections between the states in the network structure. This interconnectivity, however, also shows up as correlations among the parameter estimates, making uncorrelated parameter estimation difficult in practice. Copyright © 2017. Published by Elsevier Inc.
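
    The practical-identifiability check can be illustrated on a toy problem: repeatedly re-estimate parameters from noisy synthetic data and inspect the correlation of the estimates. The two-parameter Monod-type system below is a hypothetical stand-in, far simpler than ADM1, and the noise level and bounds are illustrative only.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, x, k1, k2):
        """Toy Monod-type substrate/product system standing in for ADM1."""
        s, p = x
        rate = k1 * s / (k2 + s)
        return [-rate, rate]

    def simulate(theta, t):
        sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0], t_eval=t, args=tuple(theta))
        return sol.y[1]                                  # observed product concentration

    t = np.linspace(0.0, 10.0, 25)
    truth = simulate([0.5, 0.3], t)

    rng = np.random.default_rng(6)
    estimates = []
    for _ in range(100):                                 # Monte Carlo replicates
        y = truth + rng.normal(0.0, 0.02, t.size)        # noisy synthetic experiment
        fit = least_squares(lambda th: simulate(th, t) - y, x0=[0.4, 0.2],
                            bounds=([1e-3, 1e-3], [5.0, 5.0]))
        estimates.append(fit.x)

    corr = np.corrcoef(np.asarray(estimates).T)
    # |corr| close to 1 between parameters signals poor practical identifiability,
    # the same symptom the record reports for the interconnected ADM1 parameters.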

  2. Real-world-time simulation of memory consolidation in a large-scale cerebellar model

    Directory of Open Access Journals (Sweden)

    Masato Gosui

    2016-03-01

    Full Text Available We report the development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec in real-world time, with a temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out a computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  3. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields, in sd-shell nuclei, results as good as the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130Ce and 128Ba using the same effective nucleon-nucleon interaction. (Auth.)

  4. Modeling large-scale human alteration of land surface hydrology and climate

    Science.gov (United States)

    Pokhrel, Yadu N.; Felfelani, Farshid; Shin, Sanghoon; Yamada, Tomohito J.; Satoh, Yusuke

    2017-12-01

    Rapidly expanding human activities have profoundly affected various biophysical and biogeochemical processes of the Earth system over a broad range of scales, and freshwater systems are now amongst the most extensively altered ecosystems. In this study, we examine the human-induced changes in land surface water and energy balances, and the associated climate impacts, using a coupled hydrological-climate model framework that also simulates the impacts of human activities on the water cycle. We present three sets of analyses using the results from two model versions, one with and the other without considering human activities; both versions are run in offline and coupled mode, resulting in a series of four experiments in total. First, we examine the climate- and human-induced changes in the regional water balance, focusing on the widely debated issue of the desiccation of the Aral Sea in central Asia. Then, we discuss the changes in surface temperature resulting from irrigation-induced changes in the land surface energy balance over global and regional scales. Finally, we examine the global and regional climate impacts of the increased atmospheric water vapor content due to irrigation. Results indicate that the direct anthropogenic alteration of river flow in the Aral Sea basin resulted in the loss of 510 km3 of water during the latter half of the twentieth century, which explains about half of the total loss of water from the sea. Results on irrigation-induced changes in the surface energy balance suggest a significant surface cooling of up to 3.3 K over 1° grid cells in highly irrigated areas, but a negligible change in land surface temperature when averaged over sufficiently large global regions. Results from the coupled model indicate a substantial change in 2 m air temperature and outgoing longwave radiation due to irrigation, highlighting the non-local (regional and global) implications of irrigation. These results provide important insights on the direct human alteration of land surface

  5. A Statistical Model for Hourly Large-Scale Wind and Photovoltaic Generation in New Locations

    DEFF Research Database (Denmark)

    Ekstrom, Jussi; Koivisto, Matti Juhani; Mellin, Ilkka

    2017-01-01

    The analysis of large-scale wind and photovoltaic (PV) energy generation is of vital importance in power systems where their penetration is high. This paper presents a modular methodology to assess the power generation and volatility of a system consisting of both PV plants (PVPs) and wind power...... of new PVPs and WPPs in system planning. The model is verified against hourly measured wind speed and solar irradiance data from Finland. A case study assessing the impact of the geographical distribution of the PVPs and WPPs on aggregate power generation and its variability is presented....

  6. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges of the postgenomic era. The Recurrent Neural Network is one of the most popular but simple approaches for modelling network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy; however, it has underperformed for large-scale genetic networks. Here, a new methodology is proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is implemented with a Recurrent Neural Network: Cuckoo Search is used to search for the best combination of regulators, while the Flower Pollination Algorithm optimizes the model parameters of the Recurrent Neural Network formalism. The proposed method is first tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the methodology is validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. In both cases, however, the proposed method incurs a higher computational cost due to the hybrid optimization process.
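
    For orientation, a minimal sketch of the RNN gene-network formalism the record builds on is given below; the metaheuristic layer (Cuckoo Search for regulator selection, Flower Pollination for parameter optimization) would be wrapped around this simulator and is not reproduced. Network size, sparsity and all parameter values are hypothetical.

    import numpy as np

    def rnn_grn_step(e, W, beta, tau, lam, dt=1.0):
        """One Euler step of the classic RNN gene-network formalism:
        tau_i * de_i/dt = sigmoid(sum_j W_ij e_j + beta_i) - lam_i * e_i."""
        drive = 1.0 / (1.0 + np.exp(-(W @ e + beta)))   # regulatory activation
        return e + dt * (drive - lam * e) / tau

    rng = np.random.default_rng(0)
    n_genes = 8
    mask = rng.random((n_genes, n_genes)) < 0.3         # sparse regulator set
    W = rng.normal(0.0, 1.0, (n_genes, n_genes)) * mask
    beta = rng.normal(size=n_genes)
    tau = np.ones(n_genes)
    lam = 0.5 * np.ones(n_genes)

    e = rng.random(n_genes)                             # initial expression levels
    series = [e.copy()]
    for _ in range(50):                                 # synthetic "time-series microarray" data
        e = rnn_grn_step(e, W, beta, tau, lam)
        series.append(e.copy())
    # A metaheuristic would be wrapped around this simulator to fit W, beta,
    # tau and lam (and the mask) to measured expression time series.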

  7. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods for solving large-scale simultaneous linear equations and their applications to electronic structure calculations, both in one-electron theory and many-electron theory. The method is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace; the most important issues for applications are the shift equation and the seed switching method, which greatly reduce the computational cost. Applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.
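
    The unshifted core of the COCG iteration is compact enough to sketch. The version below is a generic textbook formulation for complex symmetric systems, not the authors' implementation; the shifted machinery (shift equation, seed switching) is only indicated in the comments, and the test matrix is synthetic.

    import numpy as np

    def cocg(A, b, tol=1e-10, maxiter=1000):
        """Conjugate Orthogonal Conjugate Gradient for complex *symmetric* A
        (A.T == A, not Hermitian). Identical to CG except that inner products
        are bilinear, i.e. unconjugated. The shifted variant reuses the Krylov
        subspace of a seed system (A + sigma_0 I) for all other shifts, which
        is where the cost saving described in the record comes from."""
        x = np.zeros_like(b)
        r = b.copy()
        p = r.copy()
        rho = r @ r                                  # unconjugated r^T r
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rho / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            rho_new = r @ r
            p = r + (rho_new / rho) * p
            rho = rho_new
        return x

    rng = np.random.default_rng(0)
    n = 200
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    A = (M + M.T) / (2.0 * np.sqrt(n)) + 5.0 * np.eye(n)   # complex symmetric test matrix
    b = rng.normal(size=n).astype(complex)
    x = cocg(A, b)
    print(np.linalg.norm(A @ x - b))                 # residual should be ~tol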

  8. What causes the differences between national estimates of carbon emissions from forest management and large-scale models?

    NARCIS (Netherlands)

    Groen, T.A.; Verkerk, P.J.; Böttcher, H.; Grassi, G.; Cienciala, E.; Black, K.G.; Fortin, M.J.; Koethke, M.; Lethonen, A.; Nabuurs, G.J.; Petrova, L.; Blujdea, V.

    2013-01-01

    Under the United Nations Framework Convention on Climate Change, all Parties have to report on carbon emissions and removals from the forestry sector. Each Party can use its own approach and country-specific data for this. Independently, large-scale models exist (e.g. EFISCEN and G4M as used in this

  9. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is an observed phenomenon whereby galaxies located in the same region have similar properties, such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. However, recent observations have further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though they do not share common halos with each other (“two-halo conformity” or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity: they have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and now happen to reside within a ~5 Mpc sphere. Second, galaxies in the strong tidal fields induced by large-scale structure also seem to give rise to the large-scale conformity, as the strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  10. Large-scale hydrological modelling and decision-making for sustainable water and land management along the Tarim River

    OpenAIRE

    Yu, Yang

    2017-01-01

    The debate over the effectiveness of Integrated Water Resources Management (IWRM) in practice has lasted for years. As the complexity and scope of IWRM increase in practice, it is difficult for hydrological models to directly simulate the interactions among water, ecosystems and humans. This study presents a large-scale hydrological modelling (MIKE HYDRO) approach and a Decision Support System (DSS) for decision-making with stakeholders on the sustainable water and land management along the ...

  11. Local-scale high-resolution atmospheric dispersion model using large-eddy simulation. LOHDIM-LES

    International Nuclear Information System (INIS)

    Nakayama, Hiromasa; Nagai, Haruyasu

    2016-03-01

    We developed the LOcal-scale High-resolution atmospheric DIspersion Model using Large-Eddy Simulation (LOHDIM-LES). This dispersion model is based on LES, which is effective for reproducing the unsteady behaviors of turbulent flows and plume dispersion. The basic equations are the continuity equation, the Navier-Stokes equations, and the scalar conservation equation. Buildings and local terrain variability are resolved by high-resolution grids of a few meters, and their turbulent effects are represented by an immersed boundary method. In simulating atmospheric turbulence, boundary layer flows are generated by a recycling turbulent inflow technique in a driver region set up upstream of the main analysis region. These turbulent inflow data are imposed at the inlet of the main analysis region. By this approach, LOHDIM-LES can provide detailed information on wind velocities and plume concentration in the investigated area. (author)

  12. Multi-scale Modeling of Arctic Clouds

    Science.gov (United States)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence); because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud-system-resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows more explicit simulation of small-scale processes while also allowing interaction between the small- and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  13. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    Science.gov (United States)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally been shown to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute the average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and the shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the
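
    A minimal sketch of the storage-distribution step, as described above, might look as follows; the class count, the scaling parameter and the assumption that capacity decreases linearly with the topographic index are illustrative choices, not the TRG implementation.

    import numpy as np

    def trg_storage(topo_index, c_scale, n_classes=20):
        """Distribute a grid cell's storage capacity over topographic-index (TI)
        classes: the maximum capacity is proportional to the TI range, scaled by
        a single parameter c_scale, and wetter (high-TI) classes are assigned
        smaller local capacity so they saturate first. A sketch under stated
        assumptions, not the TRG code."""
        edges = np.linspace(topo_index.min(), topo_index.max(), n_classes + 1)
        counts, _ = np.histogram(topo_index, bins=edges)
        area_frac = counts / counts.sum()              # areal fraction per TI class
        s_max = c_scale * (topo_index.max() - topo_index.min())
        mids = 0.5 * (edges[:-1] + edges[1:])
        capacity = s_max * (mids.max() - mids) / (mids.max() - mids.min())
        return area_frac, capacity

    rng = np.random.default_rng(7)
    ti = rng.gamma(3.0, 2.0, size=10_000)              # synthetic TI values in one cell
    area_frac, capacity = trg_storage(ti, c_scale=5.0)
    s = 10.0                                           # current average storage
    sat_frac = area_frac[capacity <= s].sum()          # saturated area -> fast runoff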

  14. Large scale hydro-economic modelling for policy support

    Science.gov (United States)

    de Roo, Ad; Burek, Peter; Bouraoui, Faycal; Reynaud, Arnaud; Udias, Angel; Pistocchi, Alberto; Lanzanova, Denis; Trichakis, Ioannis; Beck, Hylke; Bernhard, Jeroen

    2014-05-01

    To support European Union water policy making and policy monitoring, a hydro-economic modelling environment has been developed to assess optimum combinations of water retention measures, water savings measures, and nutrient reduction measures for continental Europe. This modelling environment consists of linking the agricultural CAPRI model, the LUMP land use model, the LISFLOOD water quantity model, the EPIC water quality model, the LISQUAL combined water quantity, quality and hydro-economic model, and a multi-criteria optimisation routine. With this modelling environment, river-basin-scale simulations are carried out to assess the effects of water-retention measures, water-saving measures, and nutrient-reduction measures on several hydro-chemical indicators, such as the Water Exploitation Index (WEI), nitrate and phosphate concentrations in rivers, the 50-year return period river discharge as an indicator for flooding, and economic losses due to water scarcity for the agricultural, manufacturing-industry, energy-production and domestic sectors, as well as the economic loss due to flood damage. This modelling environment is currently being extended with a groundwater model to evaluate the effects of measures on the average groundwater table and available resources. Water allocation rules are also addressed, with environmental flow included as a minimum requirement for the environment. Economic functions are currently being updated as well. Recent developments and examples will be shown and discussed, as well as open challenges.

  15. A Poisson regression approach to model monthly hail occurrence in Northern Switzerland using large-scale environmental variables

    Science.gov (United States)

    Madonna, Erica; Ginsbourger, David; Martius, Olivia

    2018-05-01

    In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study this variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from the ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month), and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two-metre temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear, and the month. This model captures the intra-annual variability well and slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back to 1980. The resulting hail-day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
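
    A minimal sketch of such a regression in Python, with synthetic stand-ins for the radar-derived hail-day counts and the ERA-Interim-derived anomalies (all names and coefficients below are illustrative only), might look as follows:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep"]
    n = 13 * len(months)                                # Apr-Sep, 2002-2014
    df = pd.DataFrame({
        "month": np.tile(months, 13),
        "t2m_anom": rng.normal(0, 1, n),                # 2 m temperature anomaly
        "logcape_anom": rng.normal(0, 1, n),            # anomaly of log(CAPE)
        "shear_anom": rng.normal(0, 1, n),              # wind-shear anomaly
    })
    # Synthetic counts with made-up coefficients, standing in for radar hail days
    rate = np.exp(0.8 + 0.4 * df.t2m_anom + 0.5 * df.logcape_anom - 0.2 * df.shear_anom)
    df["hail_days"] = rng.poisson(rate)

    # Poisson GLM: categorical month for seasonality plus monthly anomalies
    model = smf.glm("hail_days ~ C(month) + t2m_anom + logcape_anom + shear_anom",
                    data=df, family=sm.families.Poisson()).fit()
    print(model.summary())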

  16. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

    The compatibility of cosmological principles and possible large-scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle that is still compatible with reasonable inhomogeneities is a full conformal symmetry in the 3-space defined by the cosmological velocity field; but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  17. Just enough inflation. Power spectrum modifications at large scales

    International Nuclear Information System (INIS)

    Cicoli, Michele; Downes, Sean

    2014-07-01

    We show that models of 'just enough' inflation, where the slow-roll evolution lasted only 50-60 e-foldings, feature modifications of the CMB power spectrum at large angular scales. We perform a systematic and model-independent analysis of any possible non-slow-roll background evolution prior to the final stage of slow-roll inflation. We find a high degree of universality, since most common backgrounds, like fast-roll evolution or matter or radiation dominance, give rise to a power loss at large angular scales and a peak, together with an oscillatory behaviour, at scales around the value of the Hubble parameter at the beginning of slow-roll inflation. Depending on the value of the equation-of-state parameter, different pre-inflationary epochs lead instead to an enhancement of power at low l, and so seem disfavoured by recent observational hints of a lack of CMB power at l ≲ 40. We also comment on the importance of initial conditions and the possibility of having multiple pre-inflationary stages.

  18. Model design for Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Chen, P.C.

    1991-01-01

    The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff soil site in Hualien, Taiwan, which has historically had slightly more destructive earthquakes than Lotung. The LSST is a joint effort among many interested parties. The Electric Power Research Institute (EPRI) and Taipower are the organizers of the program and have the lead in planning and managing it. Other organizations participating in the LSST program are the US Nuclear Regulatory Commission (NRC), the Central Research Institute of Electric Power Industry (CRIEPI), the Tokyo Electric Power Company (TEPCO), the Commissariat A L'Energie Atomique (CEA), Electricite de France (EdF) and Framatome. The LSST was initiated in January 1990 and is envisioned to be five years in duration. Based on the assumption of stiff soil, confirmed by soil boring and geophysical results, the test model was designed to provide data needed for SSI studies covering: free-field input, nonlinear soil response, non-rigid-body SSI, torsional response, kinematic interaction, spatial incoherency and other effects. Taipower had the lead in the design of the test model and received significant input from other LSST members. Questions raised by LSST members concerned embedment effects, model stiffness, base shear, and openings for equipment. This paper describes progress in site preparation, the design and construction of the model, and the development of an instrumentation plan.

  19. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that the soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities which lead to the formation of large eddies is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to the representation of turbulent motions, but a number of challenges remain with their practical application. Massively parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.

  20. Large scale structures in the kinetic gravity braiding model that can be unbraided

    International Nuclear Information System (INIS)

    Kimura, Rampei; Yamamoto, Kazuhiro

    2011-01-01

    We study the cosmological consequences of a kinetic gravity braiding model, which is proposed as an alternative to the dark energy model. The kinetic braiding model we study is characterized by a parameter n, which corresponds to the original galileon cosmological model for n = 1. We find that the background expansion of the universe in the kinetic braiding model is the same as in the Dvali-Turner model, which reduces to that of the standard cold dark matter model with a cosmological constant (ΛCDM model) for n equal to infinity. We also find that the evolution of the linear cosmological perturbation in the kinetic braiding model reduces to that of the ΛCDM model for n = ∞. We therefore focus our study on the growth history of the linear density perturbation, as well as on the spherical collapse in the nonlinear regime of the density perturbations, which might be important for distinguishing between the kinetic braiding model and the ΛCDM model when n is finite. The theoretical prediction for the large scale structure is confronted with the multipole power spectrum of the luminous red galaxy sample of the Sloan Digital Sky Survey. We also discuss future prospects for constraining the kinetic braiding model using a future redshift survey like the WFMOS/SuMIRe PFS survey, as well as the cluster redshift distribution in the South Pole Telescope survey.

  1. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale poses a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  2. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  3. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low-pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  4. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    Science.gov (United States)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global to regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of the large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research uses synoptic climatology as a tool to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of the meteorological patterns associated with extreme precipitation days, and with associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of these large-scale meteorological patterns, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluating the ability of climate models to capture the key patterns associated with extreme precipitation over Portland, and in better interpreting projections of future climate at impact-relevant scales.

  5. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package ''ATLAS'' has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines: basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large-scale plasma-fluid simulations. (auth.)

  6. Generation of large-scale PV scenarios using aggregated power curves

    DEFF Research Database (Denmark)

    Nuño Martinez, Edgar; Cutululis, Nicolaos Antonio

    2017-01-01

    The contribution of solar photovoltaic (PV) power to generation is becoming more relevant in modern power systems. Therefore, there is a need to model the variability of large-scale PV generation accurately. This paper presents a novel methodology to generate regional PV scenarios based...... on aggregated power curves rather than traditional physical PV conversion models. Our approach is based on hourly mesoscale reanalysis irradiation data and power measurements and does not require additional variables such as ambient temperature or wind speed. It was used to simulate the PV generation...... of the German system between 2012 and 2015, showing high levels of correlation with actual measurements (93.02–97.60%) and small deviations from the expected capacity factors (0.02–1.80%). Therefore, we are confident in the ability of the proposed model to accurately generate realistic large-scale PV...
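
    The idea of an aggregated power curve can be sketched as a simple empirical mapping from irradiation to normalized regional output; the binning, the synthetic training data and the interpolation step below are illustrative assumptions, not the paper's estimation procedure.

    import numpy as np

    def fit_aggregated_power_curve(irradiance, power, n_bins=30):
        """Empirical regional 'power curve': mean normalized PV output per
        irradiance bin, estimated from measurements, so that reanalysis
        irradiation maps directly to aggregate power without plant-level
        physical conversion models."""
        edges = np.linspace(0.0, irradiance.max(), n_bins + 1)
        idx = np.clip(np.digitize(irradiance, edges) - 1, 0, n_bins - 1)
        curve = np.array([power[idx == k].mean() for k in range(n_bins)])
        mids = 0.5 * (edges[:-1] + edges[1:])
        return mids, curve

    def simulate_pv(irradiance_new, mids, curve):
        # Hourly scenario for a new region/period from reanalysis irradiation
        return np.interp(irradiance_new, mids, curve)

    # Synthetic stand-ins for measured irradiance [W/m^2] and normalized power
    rng = np.random.default_rng(2)
    ghi = rng.uniform(0.0, 1000.0, 5000)
    pwr = np.clip(ghi / 1000.0 + rng.normal(0.0, 0.05, 5000), 0.0, 1.0)

    mids, curve = fit_aggregated_power_curve(ghi, pwr)
    scenario = simulate_pv(rng.uniform(0.0, 1000.0, 24), mids, curve)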

  7. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  8. Optimizing the design of large-scale ground-coupled heat pump systems using groundwater and heat transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, H.; Itoi, R.; Fujii, J. [Kyushu University, Fukuoka (Japan). Faculty of Engineering, Department of Earth Resources Engineering; Uchida, Y. [Geological Survey of Japan, Tsukuba (Japan)

    2005-06-01

    In order to predict the long-term performance of large-scale ground-coupled heat pump (GCHP) systems, it is necessary to take into consideration well-to-well interference, especially in the presence of groundwater flow. A mass and heat transport model was developed to simulate the behavior of this type of system in the Akita Plain, northern Japan. The model was used to investigate different operational schemes and to maximize the heat extraction rate from the GCHP system. (author)

  9. No large scale curvature perturbations during the waterfall phase transition of hybrid inflation

    International Nuclear Information System (INIS)

    Abolhasani, Ali Akbar; Firouzjahi, Hassan

    2011-01-01

    In this paper, the possibility of generating large-scale curvature perturbations induced by entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large-scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical backreactions that terminate inflation. If one considers only the classical evolution of the system, the highly blue-tilted entropy perturbations induce highly blue-tilted large-scale curvature perturbations during the waterfall phase transition which dominate over the original adiabatic curvature perturbations. However, we show that the quantum backreactions of the waterfall field inhomogeneities produced during the phase transition dominate completely over the classical backreactions. The cumulative quantum backreactions of very small-scale tachyonic modes terminate inflation very efficiently and shut off the evolution of the curvature perturbation during the waterfall phase transition. This indicates that the standard hybrid inflation model is safe against large-scale curvature perturbations during the waterfall phase transition.

  10. Lichen elemental content bioindicators for air quality in upper Midwest, USA: A model for large-scale monitoring

    Science.gov (United States)

    Susan Will-Wolf; Sarah Jovan; Michael C. Amacher

    2017-01-01

    Our development of lichen elemental bioindicators for a United States of America (USA) national monitoring program is a useful model for other large-scale programs. Concentrations of 20 elements were measured, validated, and analyzed for 203 samples of five common lichen species. Collections were made by trained non-specialists near 75 permanent plots and an expert...

  11. Development of lichen response indexes using a regional gradient modeling approach for large-scale monitoring of forests

    Science.gov (United States)

    Susan Will-Wolf; Peter Neitlich

    2010-01-01

    Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...

  12. Aerodynamic loads calculation and analysis for large scale wind turbine based on combining BEM modified theory with dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, J.C. [College of Mechanical and Electrical Engineering, Central South University, Changsha (China); School of Electromechanical Engineering, Hunan University of Science and Technology, Xiangtan (China); Hu, Y.P.; Liu, D.S. [School of Electromechanical Engineering, Hunan University of Science and Technology, Xiangtan (China); Long, X. [Hara XEMC Windpower Co., Ltd., Xiangtan (China)

    2011-03-15

    The aerodynamic loads of MW-scale horizontal-axis wind turbines are calculated and analyzed in the established coordinate systems used to describe the wind turbine. In this paper, the blade element momentum (BEM) theory is employed and corrections, such as the Prandtl and Buhl models, are applied. Based on the B-L semi-empirical dynamic stall (DS) model, a new modified DS model for the NACA63-4xx airfoil is adopted. Then, by combining the modified BEM theory with the DS model, a calculation method for the aerodynamic loads of large-scale wind turbines is proposed, in which influence factors such as wind shear, the tower, and tower and blade vibration are considered. The research results show that the presented dynamic stall model is good enough for engineering purposes; the aerodynamic loads are influenced to different degrees by many factors, such as tower shadow, wind shear, dynamic stall, and tower and blade vibration; and the single blade endures periodically changing loads, but the variations of the rotor shaft power caused by the total aerodynamic torque in the edgewise direction are very small. The presented approach to the calculation and analysis of aerodynamic loads is general, and helpful for thorough research on load reduction for large-scale wind turbines. (author)
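
    The BEM core of such a load calculation can be sketched compactly. The fragment below iterates the axial and tangential induction factors for a single blade element with a Prandtl tip-loss factor; the Buhl high-induction correction, the modified dynamic-stall model and the wind-shear and tower terms of the paper are omitted, and the airfoil polars are toy stand-ins.

    import numpy as np

    def bem_element(r, R, B, chord, twist, V0, omega, cl_fn, cd_fn, tol=1e-6):
        """Iterate the axial (a) and tangential (ap) induction factors for one
        blade element, with the Prandtl tip-loss factor F."""
        a, ap = 0.0, 0.0
        sigma = B * chord / (2.0 * np.pi * r)            # local solidity
        for _ in range(200):
            phi = np.arctan2((1.0 - a) * V0, (1.0 + ap) * omega * r)  # inflow angle
            F = (2.0 / np.pi) * np.arccos(
                np.exp(-B * (R - r) / (2.0 * r * np.sin(abs(phi)))))  # Prandtl tip loss
            alpha = phi - twist                          # local angle of attack
            cn = cl_fn(alpha) * np.cos(phi) + cd_fn(alpha) * np.sin(phi)
            ct = cl_fn(alpha) * np.sin(phi) - cd_fn(alpha) * np.cos(phi)
            a_new = 1.0 / (4.0 * F * np.sin(phi) ** 2 / (sigma * cn) + 1.0)
            ap_new = 1.0 / (4.0 * F * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
            if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
                break
            a, ap = a_new, ap_new
        return a, ap, phi

    # Toy polars (thin-airfoil lift, constant drag) in place of NACA63-4xx data:
    cl = lambda alpha: 2.0 * np.pi * alpha
    cd = lambda alpha: 0.01
    a, ap, phi = bem_element(r=30.0, R=45.0, B=3, chord=2.0, twist=np.radians(4.0),
                             V0=10.0, omega=1.8, cl_fn=cl, cd_fn=cd)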

  13. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  14. Applications of Data Assimilation to Analysis of the Ocean on Large Scales

    Science.gov (United States)

    Miller, Robert N.; Busalacchi, Antonio J.; Hackert, Eric C.

    1997-01-01

    It is commonplace to begin talks on this topic by noting that oceanographic data are too scarce and sparse to provide complete initial and boundary conditions for large-scale ocean models. Even considering the availability of remotely sensed data such as radar altimetry from the TOPEX and ERS-1 satellites, a glance at a map of available subsurface data should convince most observers that this is still the case. Data are still too sparse for a comprehensive treatment of interannual to interdecadal climate change through the use of models, since the new data sets have not been around for very long. In view of the dearth of data, we must note that the overall picture is changing rapidly. Recently, there have been a number of large-scale ocean analysis and prediction efforts, some of which now run on an operational or at least quasi-operational basis, most notably the model-based analyses of the tropical oceans. These programs are modeled on numerical weather prediction. Aside from the success of the global tide models, assimilation of data in the tropics, in support of prediction and analysis of seasonal to interannual climate change, is probably the area of large-scale ocean modeling and data assimilation in which the most progress has been made. Climate change is a problem particularly suited to advanced data assimilation methods: linear models are useful, and the linear theory can be exploited; and for the most part, the data are sufficiently sparse that implementation of advanced methods is worthwhile. As an example of a large-scale data assimilation experiment with a recent extensive data set, we present results of a tropical ocean experiment in which the Kalman filter was used to assimilate three years of altimetric data from Geosat into a coarsely resolved linearized long-wave shallow-water model. Since nonlinear processes dominate the local dynamic signal outside the tropics, subsurface dynamical quantities cannot be reliably inferred from surface height
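
    For readers unfamiliar with the scheme, one forecast/analysis cycle of the Kalman filter is sketched below on a toy linear system; the dynamics, observation operator and covariances are illustrative stand-ins, not the shallow-water model or Geosat geometry of the experiment.

    import numpy as np

    def kalman_step(x, P, F, Q, H, R, y):
        """One forecast/analysis cycle of the Kalman filter."""
        x_f = F @ x                       # forecast state
        P_f = F @ P @ F.T + Q             # forecast error covariance
        S = H @ P_f @ H.T + R             # innovation covariance
        K = P_f @ H.T @ np.linalg.inv(S)  # Kalman gain
        x_a = x_f + K @ (y - H @ x_f)     # analysis: blend model and observations
        P_a = (np.eye(len(x)) - K @ H) @ P_f
        return x_a, P_a

    rng = np.random.default_rng(3)
    n, m = 4, 2                           # 4 state variables, 2 "altimeter" obs
    F = 0.95 * np.eye(n)                  # damped-persistence toy dynamics
    Q = 0.01 * np.eye(n)
    H = rng.normal(size=(m, n))           # observation operator (state -> SSH)
    R = 0.1 * np.eye(m)
    x, P = np.zeros(n), np.eye(n)
    for _ in range(10):
        y = rng.normal(size=m)            # stand-in for along-track SSH anomalies
        x, P = kalman_step(x, P, F, Q, H, R, y)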

  15. Nature of global large-scale sea level variability in relation to atmospheric forcing: A modeling study

    Science.gov (United States)

    Fukumori, Ichiro; Raghunath, Ramanujam; Fu, Lee-Lueng

    1998-03-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to January 1994. The physical nature of the temporal variability of sea level, from periods of days to a year, is examined on the basis of spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements. The study elucidates and diagnoses the inhomogeneous physics of sea level change in the space and frequency domains. At midlatitudes, large-scale sea level variability is primarily due to steric changes associated with the seasonal heating and cooling cycle of the surface layer. In comparison, changes in the tropics and at high latitudes are mainly wind driven. Wind-driven variability itself exhibits a strong latitudinal dependence: wind-driven changes are largely baroclinic in the tropics but barotropic at higher latitudes. Baroclinic changes are dominated by the annual harmonic of the first baroclinic mode and are largest off the equator; variabilities associated with equatorial waves are smaller in comparison. Wind-driven barotropic changes exhibit a notable enhancement over several abyssal plains in the Southern Ocean, which is likely due to resonant planetary wave modes in basins semienclosed by discontinuities in potential vorticity. Otherwise, barotropic sea level changes are typically dominated by high frequencies, with as much as half the total variance at periods shorter than 20 days, reflecting the frequency spectra of wind stress curl. Implications of the findings for analyzing observations and for data assimilation are discussed.

  16. Mapping spatial patterns of denitrifiers at large scales (Invited)

    Science.gov (United States)

    Philippot, L.; Ramette, A.; Saby, N.; Bru, D.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.

    2010-12-01

    Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31 500 km2 region of France, using a 16 x 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 739 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scales can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.
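
    The geostatistical mapping step can be sketched with a Gaussian-process (kriging-type) interpolator, as below; the coordinates, the signal and the kernel settings are synthetic and illustrative, whereas the study would fit its covariance model to the measured variogram.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(5)
    # Synthetic stand-in for the study design: 107 sites over a ~180 x 180 km region
    coords = rng.uniform(0.0, 180.0, size=(107, 2))        # site coordinates [km]
    log_abund = (np.sin(coords[:, 0] / 50.0)               # spatially structured signal
                 + 0.5 * np.cos(coords[:, 1] / 40.0)
                 + rng.normal(0.0, 0.1, 107))              # measurement noise

    # RBF range and nugget are illustrative; a fitted variogram would set them
    gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=50.0) + WhiteKernel(0.01),
                                  normalize_y=True).fit(coords, log_abund)

    gx, gy = np.meshgrid(np.linspace(0, 180, 60), np.linspace(0, 180, 60))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    pred, sd = gp.predict(grid, return_std=True)           # predictive map + uncertainty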

  17. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    Science.gov (United States)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), the poor performance of many ESM land models against large-scale and experimental-manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework that integrates pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and applies the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C-derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site to the ESM scale requires a spatial scaling approach that either explicitly resolves the relevant processes or, more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced-order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  18. Cosmic ray acceleration by large scale galactic shocks

    International Nuclear Information System (INIS)

    Cesarsky, C.J.; Lagage, P.O.

    1987-01-01

    The mechanism of diffusive shock acceleration may account for the existence of galactic cosmic rays; detailed applications to stellar wind shocks and especially to supernova shocks have been developed. Existing models can usually deal with the energetics or the spectral slope, but the observed energy range of cosmic rays is not explained. Therefore it seems worthwhile to examine the effect that large-scale, long-lived galactic shocks may have on galactic cosmic rays, in the frame of the diffusive shock acceleration mechanism. Large-scale fast shocks can only be expected to exist in the galactic halo. We consider three situations where they may arise: expansion of a supernova shock in the halo, galactic wind, and galactic infall; and discuss the possible existence of these shocks and their role in accelerating cosmic rays.

  19. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  20. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis; it is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that categories at lower layers can be described with multi-scale information. The learning of the dictionaries and the associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large-scale image classification.
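
    A rough sketch of the hierarchical idea, using off-the-shelf dictionary learning rather than the paper's joint tree-loss optimization: each node learns its own dictionary, a child stacks its parent's atoms on top of its own, and a per-node classifier is trained on the resulting sparse codes. The hierarchy, data and all settings below are hypothetical.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    # Hypothetical two-level hierarchy: a root node over all classes, plus one
    # child node covering a subset of similar classes; descriptors are synthetic.
    X_root = rng.normal(size=(500, 64))                     # samples from all classes
    X_child = X_root[:250]                                  # samples routed to one child
    y_child = rng.integers(0, 2, 250)                       # toy leaf labels at that node

    root_dl = MiniBatchDictionaryLearning(n_components=32, random_state=0).fit(X_root)
    child_dl = MiniBatchDictionaryLearning(n_components=16, random_state=0).fit(X_child)

    # The child inherits its parent's atoms, so its codes carry multi-scale information
    child_dict = np.vstack([root_dl.components_, child_dl.components_])

    codes = sparse_encode(X_child, child_dict, algorithm="lasso_lars", alpha=0.1)
    clf = LogisticRegression(max_iter=1000).fit(codes, y_child)   # per-node classifier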

  1. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    Directory of Open Access Journals (Sweden)

    O. H. Otterå

    2009-11-01

    Full Text Available The Bergen Climate Model (BCM is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high-latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model, a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer, when the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed-layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover there. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  2. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre-scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The current successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  3. Potential climatic impacts and reliability of large-scale offshore wind farms

    International Nuclear Information System (INIS)

    Wang Chien; Prinn, Ronald G

    2011-01-01

    The vast availability of wind power has fueled substantial interest in this renewable energy source as a potential near-zero greenhouse gas emission technology for meeting future world energy needs while addressing the climate change issue. However, in order to provide even a fraction of the estimated future energy needs, a large-scale deployment of wind turbines (several million) is required. The consequent environmental impacts and the inherent reliability of such a large-scale usage of intermittent wind power would have to be carefully assessed, in addition to the need to lower the currently high unit wind power costs. Our previous study (Wang and Prinn 2010 Atmos. Chem. Phys. 10 2053) using a three-dimensional climate model suggested that a large deployment of wind turbines over land to meet about 10% of predicted world energy needs in 2100 could lead to a significant temperature increase in the lower atmosphere over the installed regions. A global-scale perturbation to the general circulation patterns as well as to the cloud and precipitation distribution was also predicted. In the follow-up study reported here, we conducted a set of six additional model simulations using an improved climate model to further address the potential environmental and intermittency issues of large-scale deployment of offshore wind turbines for differing installation areas and spatial densities. In contrast to the previous land installation results, the offshore wind turbine installations are found to cause a surface cooling over the installed offshore regions. This cooling is due principally to the enhanced latent heat flux from the sea surface to the lower atmosphere, driven by an increase in turbulent mixing caused by the wind turbines which was not entirely offset by the concurrent reduction of mean wind kinetic energy. We found that the perturbation of the large-scale deployment of offshore wind turbines to the global climate is relatively small compared to the case of land

  4. Forest landscape models, a tool for understanding the effect of the large-scale and long-term landscape processes

    Science.gov (United States)

    Hong S. He; Robert E. Keane; Louis R. Iverson

    2008-01-01

    Forest landscape models have become important tools for understanding large-scale and long-term landscape (spatial) processes such as climate change, fire, windthrow, seed dispersal, insect outbreak, disease propagation, forest harvest, and fuel treatment, because controlled field experiments designed to study the effects of these processes are often not possible (...

  5. Development and application of a computer model for large-scale flame acceleration experiments

    International Nuclear Information System (INIS)

    Marx, K.D.

    1987-07-01

    A new computational model for large-scale premixed flames is developed and applied to the simulation of flame acceleration experiments. The primary objective is to circumvent the necessity for resolving turbulent flame fronts; this is imperative because of the relatively coarse computational grids which must be used in engineering calculations. The essence of the model is to artificially thicken the flame by increasing the appropriate diffusivities and decreasing the combustion rate, but to do this in such a way that the burn velocity varies with pressure, temperature, and turbulence intensity according to prespecified phenomenological characteristics. The model is particularly aimed at implementation in computer codes which simulate compressible flows. To this end, it is applied to the two-dimensional simulation of hydrogen-air flame acceleration experiments in which the flame speeds and gas flow velocities attain or exceed the speed of sound in the gas. It is shown that many of the features of the flame trajectories and pressure histories in the experiments are simulated quite well by the model. Using the comparison of experimental and computational results as a guide, some insight is developed into the processes which occur in such experiments. 34 refs., 25 figs., 4 tabs
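
    The abstract's thickening trick follows from standard laminar-flame scaling; the relations below are textbook background rather than material taken from the report. With diffusivity D and reaction rate ω̇, the burn velocity and flame thickness scale as

        s_L \propto \sqrt{D \, \dot{\omega}}, \qquad \delta_f \propto \sqrt{D / \dot{\omega}},

    so rescaling D → F·D and ω̇ → ω̇/F leaves s_L unchanged while thickening the flame by the factor F, which lets a coarse engineering grid resolve the artificially broadened front.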

  6. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    Science.gov (United States)

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production implies having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and their lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed.

  7. The use of remotely sensed soil moisture data in large-scale models of the hydrological cycle

    Science.gov (United States)

    Salomonson, V. V.; Gurney, R. J.; Schmugge, T. J.

    1985-01-01

    Manabe (1982) has reviewed numerical simulations of the atmosphere which provided a framework within which an examination of the dynamics of the hydrological cycle could be conducted. It was found that the climate is sensitive to soil moisture variability in space and time. The challenge now is to improve observations of soil moisture so as to provide updated boundary-condition inputs to large-scale models including the hydrological cycle. Attention is given to the significance of understanding soil moisture variations, soil moisture estimation using remote sensing, and energy and moisture balance modeling.

  8. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large-scale dimensional metrology. Enables practitioners to study distributed large-scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  9. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors...). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally the CSHP technology is put into perspective with respect to alternatives, and a short discussion of the barriers and breakthroughs of the technology is given.

  10. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response plans, and the emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model is proposed, and the methodology is examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian walking is based on the Cellular Automata and event-driven models; the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrians' movement routes, the model takes into account the purchase intentions of customers and the density of pedestrians. The combined model reproduces the behavioural characteristics of customers and clerks in both normal situations and emergency evacuations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
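
    A minimal floor-field CA step is sketched below (Python/NumPy) to make the update rule concrete. The grid size, the 5% occupancy and the use of only a static distance-to-exit field are simplifying assumptions; the paper's model additionally uses a dynamic floor field, purchase intentions and event-driven scheduling.

        import numpy as np

        H, W = 20, 30
        exit_cell = (10, 0)
        yy, xx = np.mgrid[0:H, 0:W]
        field = np.abs(yy - exit_cell[0]) + np.abs(xx - exit_cell[1])  # distance to exit

        rng = np.random.default_rng(1)
        occupied = rng.random((H, W)) < 0.05          # ~5% of cells hold a pedestrian

        def step(occupied):
            new = occupied.copy()
            for r, c in zip(*np.nonzero(occupied)):
                # candidate moves: stay, or step onto an empty 4-neighbour cell
                cands = [(r, c)] + [(r + dr, c + dc)
                                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                    if 0 <= r + dr < H and 0 <= c + dc < W
                                    and not new[r + dr, c + dc]]
                best = min(cands, key=lambda rc: field[rc])  # walk down the field
                new[r, c] = False
                new[best] = True
            return new

        for _ in range(10):                           # pedestrians drift toward the exit
            occupied = step(occupied)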

  11. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    Science.gov (United States)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these structures operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and their complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  12. Imaging the Chicxulub central crater zone from large scale seismic acoustic wave propagation and gravity modeling

    Science.gov (United States)

    Fucugauchi, J. U.; Ortiz-Aleman, C.; Martin, R.

    2017-12-01

    Large complex craters are characterized by central uplifts that represent large-scale differential movement of deep basement from the transient cavity. Here we investigate the central sector of the large multiring Chicxulub crater, which has been surveyed by an array of marine, aerial and land-borne geophysical methods. Despite high contrasts in physical properties, contrasting results for the central uplift have been obtained, with seismic reflection surveys showing a lack of resolution in the central zone. We develop an integrated seismic and gravity model for the main structural elements, imaging the central basement uplift and the melt and breccia units. The 3-D velocity model built from interpolation of seismic data is validated using perfectly-matched-layer seismic acoustic wave propagation modeling, optimized at grazing incidence using a shift in the frequency domain. Modeling shows a significant lack of illumination in the central sector, masking the presence of the central uplift. Seismic energy remains trapped in an upper low-velocity zone corresponding to the sedimentary infill, melt/breccias and surrounding faulted blocks. After conversion of the seismic velocities into a volume of density values, we use massively parallel forward gravity modeling to constrain the size and shape of the central uplift, which lies at 4.5 km depth, providing a high-resolution image of the crater structure. The Bouguer anomaly and the gravity response of the modeled units show asymmetries corresponding to the crater structure and the distribution of post-impact carbonates, breccias, melt and target sediments.

  13. Large-scale effects of migration and conflict in pre-agricultural groups: Insights from a dynamic model.

    Directory of Open Access Journals (Sweden)

    Francesco Gargano

    Full Text Available The debate on the causes of conflict in human societies has deep roots. In particular, the extent of conflict in hunter-gatherer groups remains unclear. Some authors suggest that large-scale violence only arose with the spreading of agriculture and the building of complex societies. To shed light on this issue, we developed a model based on operatorial techniques simulating population-resource dynamics within a two-dimensional lattice, with humans and natural resources interacting in each cell of the lattice. The model outcomes under different conditions were compared with recently available demographic data for prehistoric South America. Only under conditions that include migration among cells and conflict was the model able to consistently reproduce the empirical data at a continental scale. We argue that the interplay between resource competition, migration, and conflict drove the population dynamics of South America after the colonization phase and before the introduction of agriculture. The relation between population and resources indeed emerged as a key factor leading to migration and conflict once the carrying capacity of the environment has been reached.
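
    To illustrate the coupled dynamics in conventional terms, here is a toy lattice update (Python/NumPy) with logistic resource renewal, resource-dependent population growth, a quadratic competition/conflict loss and diffusive migration. All parameters are invented, and the paper itself uses operatorial techniques rather than this ODE-style scheme.

        import numpy as np

        N = 32
        rng = np.random.default_rng(2)
        pop = 0.1 * rng.random((N, N))     # population density per cell
        res = np.ones((N, N))              # normalised resources per cell

        def laplacian(f):                  # periodic 4-neighbour Laplacian
            return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                    np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

        r_grow, eat, feed, conflict, mig = 0.3, 0.2, 0.15, 0.1, 0.05
        for _ in range(1000):
            res += r_grow * res * (1 - res) - eat * pop * res
            pop += feed * pop * res - conflict * pop**2 + mig * laplacian(pop)
            np.clip(res, 0.0, None, out=res)   # keep both fields non-negative
            np.clip(pop, 0.0, None, out=pop)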

  14. A large scale field experiment in the Amazon Basin (Lambada/Bateristca)

    Energy Technology Data Exchange (ETDEWEB)

    Dolman, A.J.; Kabat, P.; Gash, J.H.C.; Noilhan, J.; Jochum, A.M.; Nobre, C. [Winand Staring Centre, Wageningen (Netherlands)

    1994-12-31

    A description is given of a large scale field experiment planned in the Amazon Basin, aiming to assess the large scale balances of energy, water and CO{sub 2}. The background for this experiment, the embedding in global change programmes of IGBP/BAHC and WCRP/GEWEX is described. A proposal by four European groups aimed at designing the experiment with the help of mesoscale models is described and a possible European input to this experiment is suggested. 24 refs., 1 app.

  15. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has additionally been investigated.

  16. Challenges in Managing Trustworthy Large-scale Digital Science

    Science.gov (United States)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories: model outputs (including coupled models and ensembles), data products that have been processed to a level of usability, and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by broad communities, far in excess of the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on the infrastructure to support how the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" so that they can be productive in producing new scientific outcomes. The software for these systems depends on computational infrastructure, software interconnected systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, through system software stacks and libraries, to the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of approach and robustness of methods over full reproducibility. Furthermore, with large-volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and the reliability with which both previous outcomes remain relevant and can be updated for the new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  17. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    Science.gov (United States)

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.
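
    As a sketch of the modelling recipe (supervised stochastic gradient boosting regression on variant features), the snippet below uses scikit-learn's implementation on synthetic data; the feature matrix and target are invented stand-ins, not Envision's actual features or training pipeline.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.standard_normal((21026, 10))   # stand-in variant features (e.g. conservation scores)
        y = 0.5 * X[:, 0] + 0.1 * rng.standard_normal(21026)  # synthetic effect scores

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = GradientBoostingRegressor(subsample=0.5,   # subsampling is the "stochastic" part
                                          n_estimators=200, max_depth=3)
        model.fit(X_tr, y_tr)
        print("held-out R^2:", model.score(X_te, y_te))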

  18. Distribution of ground rigidity and ground model for seismic response analysis in Hualian project of large scale seismic test

    International Nuclear Information System (INIS)

    Kokusho, T.; Nishi, K.; Okamoto, T.; Tanaka, Y.; Ueshima, T.; Kudo, K.; Kataoka, T.; Ikemi, M.; Kawai, T.; Sawada, Y.; Suzuki, K.; Yajima, K.; Higashi, S.

    1997-01-01

    An international joint research program called HLSST is proceeding. HLSST is a large-scale seismic test (LSST) to investigate soil-structure interaction (SSI) during large earthquakes in the field in Hualien, a highly seismic region of Taiwan. A 1/4-scale model building was constructed on the gravelly soil at this site, and backfill material of crushed stone was placed around the model building after excavation for the construction. The model building and the foundation ground were extensively instrumented to monitor structure and ground response. To accurately evaluate SSI during earthquakes, geotechnical investigations and forced vibration tests were performed during the construction process, namely before/after base excavation, after structure construction and after backfilling. The distributions of the mechanical properties of the gravelly soil and the backfill were measured after completion of the construction by penetration tests, PS-logging, etc. This paper describes the distribution and the change of the shear wave velocity (V_s) measured by the field tests. Discussion is made of the effect of overburden pressure during the construction process on V_s in the neighbouring soil and, further, of the numerical soil model for SSI analysis. (orig.)

  19. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I. (University of California, San Diego); Winey, J. Michael (Washington State University); Gupta, Yogendra Mohan (Washington State University); Lane, J. Matthew D.; Ditmire, Todd (University of Texas at Austin); Quevedo, Hernan J. (University of Texas at Austin)

    2011-10-01

    Molecular dynamics (MD) simulation is an invaluable tool for studying problems sensitive to atom-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp-compression experiments using fast ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  20. Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    Science.gov (United States)

    Baraka, Suleiman

    2016-06-01

    In this paper, we propose a 3D kinetic (particle-in-cell, PIC) model for the description of the large-scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 R_E along the Sun-Earth line, and ≈29 R_E on the dusk side. These findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured as ≈2 c/ω_pi for Θ_Bn = 90° and M_MS = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ω_pi. In the foreshock region, the thermal velocity is found to be 213 km s^-1 at 15 R_E, and 63 km s^-1 at 12 R_E (magnetosheath region). Despite the large cell size of the current version of the PIC code, it retains the macrostructure of planetary magnetospheres in very short run times, so it can be used for pedagogical test purposes. It is also likely complementary with MHD for deepening our understanding of the large-scale magnetosphere.

  1. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu

    2016-11-13

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a
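
    One concrete example of such a metric is rollback efficiency: the fraction of processed events that are ultimately committed rather than undone. The helper below is a hypothetical illustration of the idea; ROSS's instrumentation exposes analogous counters per processing element, but this is not its API.

        def rollback_efficiency(events_processed: int, events_rolled_back: int) -> float:
            """Fraction of processed events that were committed (not rolled back)."""
            if events_processed == 0:
                return 0.0
            return (events_processed - events_rolled_back) / events_processed

        # e.g. 1.2M processed events of which 150k were rolled back -> 0.875
        print(rollback_efficiency(1_200_000, 150_000))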

  2. Linking genes to ecosystem trace gas fluxes in a large-scale model system

    Science.gov (United States)

    Meredith, L. K.; Cueva, A.; Volkmann, T. H. M.; Sengupta, A.; Troch, P. A.

    2017-12-01

    Soil microorganisms mediate biogeochemical cycles through biosphere-atmosphere gas exchange with significant impact on atmospheric trace gas composition. Improving process-based understanding of these microbial populations and linking their genomic potential to the ecosystem scale is a challenge, particularly in soil systems, which are heterogeneous in biodiversity, chemistry, and structure. In oligotrophic systems, such as the Landscape Evolution Observatory (LEO) at Biosphere 2, atmospheric trace gas scavenging may supply critical metabolic needs to microbial communities, thereby promoting tight linkages between microbial genomics and trace gas utilization. This large-scale model system of three initially homogeneous and highly instrumented hillslopes facilitates high-temporal-resolution characterization of subsurface trace gas fluxes at hundreds of sampling points, making LEO an ideal location to study microbe-mediated trace gas fluxes from the gene to the ecosystem scale. Specifically, we focus on the metabolism of the ubiquitous atmospheric reduced trace gases hydrogen (H2), carbon monoxide (CO), and methane (CH4), which may have wide-reaching impacts on microbial community establishment, survival, and function. Additionally, microbial activity on LEO may facilitate weathering of the basalt matrix, which can be studied with trace gas measurements of carbonyl sulfide (COS/OCS) and carbon dioxide (O-isotopes in CO2), and presents an additional opportunity for gene-to-ecosystem study. This work will present initial measurements of this suite of trace gases to characterize soil microbial metabolic activity, as well as links between spatial and temporal variability of microbe-mediated trace gas fluxes in LEO and their relation to genomic-based characterization of microbial community structure (phylogenetic amplicons) and genetic potential (metagenomics). Results from the LEO model system will help build understanding of the importance of atmospheric inputs to

  3. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    Directory of Open Access Journals (Sweden)

    L. Gong

    2011-08-01

    Full Text Available World water resources have primarily been analysed by global-scale hydrological models in recent decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm
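
    The storage-distribution idea can be made concrete with a short sketch (Python/NumPy): bin a grid cell's high-resolution topographic-index values into classes and give each class a storage capacity scaled by a single parameter. The binning rule, the gamma-distributed stand-in indices and the parameter value are all illustrative assumptions, not the TRG paper's exact formulation.

        import numpy as np

        rng = np.random.default_rng(4)
        topo_index = rng.gamma(2.0, 2.0, size=100_000)  # stand-in for ln(a/tan(beta)) values

        n_classes = 20
        edges = np.quantile(topo_index, np.linspace(0, 1, n_classes + 1))
        centers = 0.5 * (edges[:-1] + edges[1:])        # equal-area index classes

        s_max = 300.0   # mm; a single scaling parameter
        # wetter (higher-index) classes saturate first, so assign them less storage
        storage = s_max * (centers.max() - centers) / (centers.max() - centers.min())

        # fast runoff from a rainfall pulse: classes whose capacity is exceeded spill
        rain, stored = 40.0, 0.6 * storage              # hypothetical antecedent state
        fast_runoff = np.maximum(stored + rain - storage, 0.0).mean()
        print(f"cell-average fast runoff: {fast_runoff:.1f} mm")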

  4. Recent Advances in Understanding Large Scale Vapour Explosions

    International Nuclear Information System (INIS)

    Board, S.J.; Hall, R.W.

    1976-01-01

    In foundries, violent explosions occur occasionally when molten metal comes into contact with water. If similar explosions can occur with other materials, hazardous situations may arise, for example in LNG marine transportation accidents, or in liquid-cooled reactor incidents when molten UO2 contacts water or sodium coolant. Over the last 10 years a large body of experimental data has been obtained on the behaviour of small quantities of hot material in contact with a vaporisable coolant. Such experiments generally give low energy yields, despite producing fine fragmentation of the molten material. These events have been interpreted in terms of a wide range of phenomena such as violent boiling, liquid entrainment, bubble collapse, superheat, surface cracking and many others. Many of these studies have been aimed at understanding the small-scale behaviour of the particular materials of interest. However, understanding the nature of the energetic events which were the original cause for concern may also be necessary to give confidence that violent events cannot occur for these materials in large-scale situations. More recently, there has been a trend towards larger experiments and some of these have produced explosions of moderately high efficiency. Although the occurrence of such large-scale explosions can depend rather critically on initial conditions in a way which is not fully understood, there are signs that the interpretation of these events may be more straightforward than that of the single-drop experiments. In the last two years several theoretical models for large-scale explosions have appeared which attempt a self-contained explanation of at least some stages of such high-yield events: these have as their common feature a description of how a propagating breakdown of an initially quasi-stable distribution of materials is induced by the pressure and flow field caused by the energy release in adjacent regions. These models have led to the idea that for a full

  5. Beyond single syllables: large-scale modeling of reading aloud with the Connectionist Dual Process (CDP++) model.

    Science.gov (United States)

    Perry, Conrad; Ziegler, Johannes C; Zorzi, Marco

    2010-09-01

    Most words in English have more than one syllable, yet the most influential computational models of reading aloud are restricted to processing monosyllabic words. Here, we present CDP++, a new version of the Connectionist Dual Process model (Perry, Ziegler, & Zorzi, 2007). CDP++ is able to simulate the reading aloud of mono- and disyllabic words and nonwords, and learns to assign stress in exactly the same way as it learns to associate graphemes with phonemes. CDP++ is able to simulate the monosyllabic benchmark effects its predecessor could, and therefore shows full backwards compatibility. CDP++ also accounts for a number of novel effects specific to disyllabic words, including the effects of stress regularity and syllable number. In terms of database performance, CDP++ accounts for over 49% of the reaction time variance on items selected from the English Lexicon Project, a very large database of several thousand words. With its lexicon of over 32,000 words, CDP++ is therefore a notable example of the successful scaling-up of a connectionist model to a size that more realistically approximates the human lexical system. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    Science.gov (United States)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in these parameters is restored if we can employ a prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained to an accuracy better than the CDM prediction if effects up to larger wave numbers in the nonlinear regime can be included.
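
    For context, the unperturbed baseline that such a tidal response modifies is the standard linear-theory (Kaiser) redshift-space power spectrum; the formula below is textbook background rather than a result of the paper:

        P_s(k, \mu) = (b + f \mu^2)^2 P_m(k), \qquad \mu = \hat{k} \cdot \hat{n},

    where b is the linear galaxy bias, f the linear growth rate and P_m(k) the matter power spectrum. The large-scale tide adds anisotropic corrections to this form that depend on the alignment between the tidal field, the wave vector k and the line of sight n.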

  7. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models in the literature are the eddy-viscosity-type models. In these models the subgrid-scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e. they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and the SGS stress tensor are aligned; and they assume that a local balance exists between the SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy-viscosity-type models. SSM models, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave-number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e. they are not able to ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that this drawback is due to the fact that such models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale-similarity LES model that is able to guarantee an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and of the SGS kinetic energy (computed by solving its balance equation). The
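
    For reference, the Bardina-type scale-similarity ansatz mentioned above reads, in standard notation with overbars denoting the grid filter,

        \tau_{ij} \simeq c_L \, L^m_{ij}, \qquad L^m_{ij} = \overline{\bar{u}_i \bar{u}_j} - \bar{\bar{u}}_i \, \bar{\bar{u}}_j .

    A coefficient consistent with the abstract's description of the new model, with k_sgs the SGS kinetic energy, would be c_L = 2 k_{sgs} / L^m_{kk}, so that the trace condition \tau_{kk} = 2 k_{sgs} holds; this is a hedged reading of the description above, and the paper's exact expression may differ.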

  8. Large-Scale Traveling Weather Systems in Mars’ Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-10-01

    Between late fall and early spring, Mars’ middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring develops in the western hemisphere via orographic influences from the Tharsis highlands and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations, and these are presented.

  10. Icing Simulation Research Supporting the Ice-Accretion Testing of Large-Scale Swept-Wing Models

    Science.gov (United States)

    Yadlin, Yoram; Monnig, Jaime T.; Malone, Adam M.; Paul, Bernard P.

    2018-01-01

    The work summarized in this report is a continuation of NASA's Large-Scale, Swept-Wing Test Articles Fabrication; Research and Test Support for NASA IRT contract (NNC10BA05-NNC14TA36T) performed by Boeing under the NASA Research and Technology for Aerospace Propulsion Systems (RTAPS) contract. In the study conducted under RTAPS, a series of icing tests has been conducted in the Icing Research Tunnel (IRT) to characterize ice formations on large-scale swept wings representative of modern commercial transport airplanes. The outcome of that campaign was a large database of ice-accretion geometries that can be used for subsequent aerodynamic evaluation in other experimental facilities and for validation of ice-accretion prediction codes.

  11. Large-scale hydrological modeling for calculating water stress indices: implications of improved spatiotemporal resolution, surface-groundwater differentiation, and uncertainty characterization.

    Science.gov (United States)

    Scherer, Laura; Venkatesh, Aranya; Karuppiah, Ramkumar; Pfister, Stephan

    2015-04-21

    Physical water scarcities can be described by water stress indices. These are often determined at an annual scale and a watershed level; however, such scales mask seasonal fluctuations and spatial heterogeneity within a watershed. In order to account for this level of detail, first and foremost, water availability estimates must be improved and refined. State-of-the-art global hydrological models such as WaterGAP and UNH/GRDC have previously been unable to reliably reflect water availability at the subbasin scale. In this study, the Soil and Water Assessment Tool (SWAT) was tested as an alternative to global models, using the case study of the Mississippi watershed. While SWAT clearly outperformed the global models at the scale of a large watershed, it was judged to be unsuitable for global-scale simulations due to the high calibration effort required. The results obtained in this study show that global assessments miss out on key aspects related to upstream/downstream relations and monthly fluctuations, which are important both for the characterization of water scarcity in the Mississippi watershed and for water footprints. Especially in arid regions, where scarcity is high, these models provide unsatisfactory results.
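
    A minimal monthly stress computation of the kind argued for here is sketched below (Python/NumPy); all volumes are invented. It illustrates how an annual withdrawal-to-availability ratio can mask a severe dry-season peak.

        import numpy as np

        # monthly water availability and withdrawal for two hypothetical subbasins (Mm^3)
        availability = np.array([[220., 180., 90., 40., 35., 60.],
                                 [500., 450., 300., 200., 150., 180.]])
        withdrawal = np.array([[30., 30., 35., 30., 28., 25.],
                               [60., 70., 80., 90., 85., 75.]])

        monthly_wsi = withdrawal / availability
        annual_wsi = withdrawal.sum(axis=1) / availability.sum(axis=1)
        print("annual:", annual_wsi.round(2))                    # ~[0.28, 0.26]: looks moderate
        print("worst month:", monthly_wsi.max(axis=1).round(2))  # ~[0.80, 0.57]: severe stress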

  12. Studies on combined model based on functional objectives of large scale complex engineering

    Science.gov (United States)

    Yuting, Wang; Jingchun, Feng; Jiabao, Sun

    2018-03-01

    Large-scale complex engineering involves various functions, and each function is realized through the completion of one or more projects, so the combinations of projects that affect each function need to be identified. Based on the types of project portfolio, the relationship between projects and their functional objectives was analyzed. On that premise, portfolio techniques based on functional objectives were introduced, and we then studied and proposed principles for such portfolio techniques. In addition, the processes for combining projects were constructed. With the help of portfolio techniques based on the functional objectives of projects, our research findings lay a good foundation for the portfolio management of large-scale complex engineering.

  13. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, and numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed at large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields, including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends in large-scale 3D printing, particularly pertaining to (1) technological solutions for additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  14. Global scale groundwater flow model

    Science.gov (United States)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global-scale hydrological models does not include a groundwater flow component, although it is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moorsdorff, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and to estimate water table depths with acceptable accuracy in many parts of the world.
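
    The core balance such a model solves can be illustrated with a tiny steady-state sketch (Python/NumPy): a 2D finite-difference Poisson solve for head under uniform recharge with fixed-head boundaries. Grid size, transmissivity and recharge are invented, and MODFLOW of course handles heterogeneous aquifers, boundary conditions and solvers far beyond this.

        import numpy as np

        n = 50                      # cells per side
        dx = 1000.0                 # cell size (m)
        T = 1000.0                  # transmissivity (m^2/d)
        R = 2e-4                    # recharge (m/d)
        h = np.zeros((n, n))        # head (m); boundary rows/cols stay at 0 (e.g. rivers)

        for _ in range(20_000):     # Jacobi iteration of T*lap(h) + R = 0
            h[1:-1, 1:-1] = 0.25 * (h[2:, 1:-1] + h[:-2, 1:-1] +
                                    h[1:-1, 2:] + h[1:-1, :-2] +
                                    R * dx**2 / T)
        print(f"water-table mound at centre: {h[n // 2, n // 2]:.1f} m")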

  15. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...

  16. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large-scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large-scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  17. Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow

    NARCIS (Netherlands)

    Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.

    2004-01-01

    The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike

  18. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, aimed at ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents through joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus for examining the overall system effect, and one using a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  19. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    Science.gov (United States)

    Wolf-Grosse, Tobias; Esau, Igor; Reuder, Joachim

    2017-06-01

    Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with rising wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s-1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a relatively small local water

  20. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    Directory of Open Access Journals (Sweden)

    T. Wolf-Grosse

    2017-06-01

    Full Text Available Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with rising wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s−1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a

  1. The Large-scale Effect of Environment on Galactic Conformity

    Science.gov (United States)

    Sun, Shuangpeng; Guo, Qi; Wang, Lan; Wang, Jie; Gao, Liang; Lacey, Cedric G.; Pan, Jun

    2018-04-01

    We use a volume-limited galaxy sample from the SDSS Data Release 7 to explore the dependence of galactic conformity on the large-scale environment, measured on ~4 Mpc scales. We find that the star formation activity of neighbour galaxies depends more strongly on the environment than on the activity of their primary galaxies. In under-dense regions most neighbour galaxies tend to be active, while in over-dense regions neighbour galaxies are mostly passive, regardless of the activity of their primary galaxies. At a given stellar mass, passive primary galaxies reside in higher density regions than active primary galaxies, leading to the apparently strong conformity signal. The dependence of the activity of neighbour galaxies on environment can be explained by the corresponding dependence of the fraction of satellite galaxies. Similar results are found for galaxies in a semi-analytical model, suggesting that no new physics is required to explain the observed large-scale conformity.

  2. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  3. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  4. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10⁵ and 10⁸ and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied.

  5. Development of a self-consistent lightning NOx simulation in large-scale 3-D models

    Science.gov (United States)

    Luo, Chao; Wang, Yuhang; Koshak, William J.

    2017-03-01

    We seek to develop a self-consistent representation of lightning NOx (LNOx) simulation in a large-scale 3-D model. Lightning flash rates are parameterized functions of meteorological variables related to convection. We examine a suite of such variables and find that convective available potential energy and cloud top height give the best estimates compared to July 2010 observations from ground-based lightning observation networks. Previous models often use lightning NOx vertical profiles derived from cloud-resolving model simulations. An implicit assumption of such an approach is that the postconvection lightning NOx vertical distribution is the same for all deep convection, regardless of geographic location, time of year, or meteorological environment. Detailed observations of the lightning channel segment altitude distribution derived from the NASA Lightning Nitrogen Oxides Model can be used to obtain the LNOx emission profile. Coupling such a profile with model convective transport leads to a more self-consistent lightning distribution compared to using prescribed postconvection profiles. We find that convective redistribution appears to be a more important factor than preconvection LNOx profile selection, providing another reason for linking the strength of convective transport to LNOx distribution.

  6. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme the results of the large scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path, through the fractured rock, causes severe problems in interpretation. Derived values of hydraulic conductivity were found to be in a narrow range of two to three orders of magnitude. Test design did not allow fracture zones to be tested individually. This could be improved by testing the high hydraulic conductivity regions specifically. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could either be because there is no hydraulic connection, or there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  7. Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping

    Science.gov (United States)

    Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.

    2017-12-01

    Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be at regional to national levels and cover long time periods. As a result of large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat (~16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive combined with other satellite, climate, and weather data is creating previously unimagined opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field scales.
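
    As a rough illustration of the kind of Earth Engine API workflow the record describes, the Python sketch below filters a Landsat collection and maps a per-image band conversion. The collection ID, dates, point of interest, and band math are assumptions for illustration; the actual SSEBop computation is far more involved.

        import ee

        ee.Initialize()

        # Hypothetical area of interest; the study itself runs nationwide.
        aoi = ee.Geometry.Point(-98.5, 39.8)

        landsat = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
                   .filterDate('2016-01-01', '2017-01-01')
                   .filterBounds(aoi))

        def to_lst_celsius(img):
            # Collection-2 Level-2 surface temperature: apply scale/offset, K -> deg C.
            lst = img.select('ST_B10').multiply(0.00341802).add(149.0).subtract(273.15)
            return ee.Image(lst.copyProperties(img, ['system:time_start']))

        annual_mean_lst = ee.ImageCollection(landsat.map(to_lst_celsius)).mean()
        print(annual_mean_lst.bandNames().getInfo())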

  8. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a reliable distribution of input influences and a more reliable interpretation of the mathematical model results.
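
    To make the estimator concrete, here is a self-contained sketch of first-order Sobol index estimation on a Sobol sequence. The Ishigami-style toy integrand and all numbers are stand-ins for illustration, not the Unified Danish Eulerian Model:

        import numpy as np
        from scipy.stats import qmc

        def model(x):
            # Toy integrand standing in for the air-pollution model output.
            return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                    + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

        d = 3
        sampler = qmc.Sobol(d=2 * d, scramble=True, seed=0)
        ab = 2.0 * np.pi * sampler.random_base2(m=14) - np.pi  # map to [-pi, pi]
        A, B = ab[:, :d], ab[:, d:]
        fA, fB = model(A), model(B)
        V = np.var(np.concatenate([fA, fB]))           # total output variance

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                        # vary one input at a time
            S_i = np.mean(fB * (model(ABi) - fA)) / V  # Saltelli-type estimator
            print(f"first-order S_{i + 1} ~= {S_i:.3f}")

    For this toy integrand the estimates approach the known values S1 ~ 0.31, S2 ~ 0.44, S3 = 0, which illustrates why resolving indices that are small in value is the numerically demanding part.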

  9. Open source large-scale high-resolution environmental modelling with GEMS

    NARCIS (Netherlands)

    Baarsma, R.J.; Alberti, K.; Marra, W.A.; Karssenberg, D.J.

    2016-01-01

    Many environmental, topographic and climate data sets are freely available at a global scale, creating the opportunities to run environmental models for every location on Earth. Collection of the data necessary to do this and the consequent conversion into a useful format is very demanding however,

  10. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    Science.gov (United States)

    Güntner, Andreas

    2002-07-01

    the framework of an integrated model which contains modules that do not work on the basis of natural spatial units. The target units mentioned above are disaggregated in Wasa into smaller modelling units within a new multi-scale, hierarchical approach. The landscape units defined in this scheme capture in particular the effect of structured variability of terrain, soil and vegetation characteristics along toposequences on soil moisture and runoff generation. Lateral hydrological processes at the hillslope scale, such as the reinfiltration of surface runoff, which is of particular importance in semi-arid environments, can thus also be represented within the large-scale model in a simplified form. Depending on the resolution of available data, small-scale variability is not represented explicitly with geographic reference in Wasa, but by the distribution of sub-scale units and by statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa which respect specific features of semi-arid hydrology are: (1) A two-layer model for evapotranspiration comprises energy transfer at the soil surface (including soil evaporation), which is of importance in view of the mainly sparse vegetation cover. Additionally, vegetation parameters are differentiated in space and time depending on the occurrence of the rainy season. (2) The infiltration module represents in particular infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach respecting different reservoir size classes and their interaction via the river network is applied. (4) A model for the quantification of water withdrawal by water use in different sectors is coupled to Wasa. (5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied

  11. Wind Tunnel Testing of a 1/20th-Scale Large Civil Tilt-Rotor Model in Airplane and Helicopter Modes

    Science.gov (United States)

    Theodore, Colin R.; Willink, Gina C.; Russell, Carl R.; Amy, Alexander R.; Pete, Ashley E.

    2014-01-01

    In April 2012 and October 2013, NASA and the U.S. Army jointly conducted a wind tunnel test program examining two notional large tilt rotor designs: NASA's Large Civil Tilt Rotor and the Army's High Efficiency Tilt Rotor. The approximately 6%-scale airframe models (unpowered) were tested without rotors in the U.S. Army 7- by 10-foot wind tunnel at NASA Ames Research Center. Measurements of all six forces and moments acting on the airframe were taken using the wind tunnel scale system. In addition to force and moment measurements, flow visualization using tufts, infrared thermography and oil flow were used to identify flow trajectories, boundary layer transition and areas of flow separation. The purpose of this test was to collect data for the validation of computational fluid dynamics tools, for the development of flight dynamics simulation models, and to validate performance predictions made during conceptual design. This paper focuses on the results for the Large Civil Tilt Rotor model in an airplane mode configuration up to 200 knots of wind tunnel speed. Results are presented with the full airframe model with various wing tip and nacelle configurations, and for a wing-only case also with various wing tip and nacelle configurations. Key results show that the addition of a wing extension outboard of the nacelles produces a significant increase in the lift-to-drag ratio, and interestingly decreases the drag compared to the case where the wing extension is not present. The drag decrease is likely due to complex aerodynamic interactions between the nacelle and wing extension that results in a significant drag benefit.

  12. Development of a Shipboard Remote Control and Telemetry Experimental System for Large-Scale Model's Motions and Loads Measurement in Realistic Sea Waves.

    Science.gov (United States)

    Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe

    2017-10-29

    Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in a laboratory tank environment, where the wave conditions differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motions and loads measurement is reviewed and reported first. Then, a novel large-scale model measurement technique is developed based on the laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, an advanced remote control and telemetry experimental system was developed in-house to allow for the implementation of large-scale model seakeeping measurement at sea. The experimental system includes a series of sensors, e.g., the Global Position System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested by field experiments in coastal seas, which indicate that the proposed large-scale model testing scheme is feasible and effective. Meaningful data including ocean environment parameters, ship navigation state, motions and loads were obtained through the sea trial campaign.

  13. Large-scale compositional heterogeneity in the Earth's mantle

    Science.gov (United States)

    Ballmer, M.

    2017-12-01

    Seismic imaging of subducted Farallon and Tethys lithosphere in the lower mantle has been taken as evidence for whole-mantle convection, and efficient mantle mixing. However, cosmochemical constraints point to a lower-mantle composition that has a lower Mg/Si compared to upper-mantle pyrolite. Moreover, geochemical signatures of magmatic rocks indicate the long-term persistence of primordial reservoirs somewhere in the mantle. In this presentation, I establish geodynamic mechanisms for sustaining large-scale (primordial) heterogeneity in the Earth's mantle using numerical models. Mantle flow is controlled by rock density and viscosity. Variations in intrinsic rock density, such as due to heterogeneity in basalt or iron content, can induce layering or partial layering in the mantle. Layering can be sustained in the presence of persistent whole mantle convection due to active "unmixing" of heterogeneity in low-viscosity domains, e.g. in the transition zone or near the core-mantle boundary [1]. On the other hand, lateral variations in intrinsic rock viscosity, such as due to heterogeneity in Mg/Si, can strongly affect the mixing timescales of the mantle. In the extreme case, intrinsically strong rocks may remain unmixed through the age of the Earth, and persist as large-scale domains in the mid-mantle due to focusing of deformation along weak conveyor belts [2]. That large-scale lateral heterogeneity and/or layering can persist in the presence of whole-mantle convection can explain the stagnation of some slabs, as well as the deflection of some plumes, in the mid-mantle. These findings indeed motivate new seismic studies for rigorous testing of model predictions. [1] Ballmer, M. D., N. C. Schmerr, T. Nakagawa, and J. Ritsema (2015), Science Advances, doi:10.1126/sciadv.1500815. [2] Ballmer, M. D., C. Houser, J. W. Hernlund, R. Wentzcovitch, and K. Hirose (2017), Nature Geoscience, doi:10.1038/ngeo2898.

  14. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    Mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw material powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long-length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) Manufacture of the large-scale mother tube with dimensions of 32 mm OD, 21 mm ID, and 2 m length was successfully carried out using a large-scale hollow capsule. This mother tube has high dimensional accuracy. (2) The chemical composition and microstructure of the manufactured mother tube are similar to those of the existing mother tube manufactured with a small-scale can, and no remarkable difference between the bottom and top ends of the manufactured tube was observed. (3) The long-length cladding was successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, the process of manufacturing mother tubes using large-scale hollow capsules is promising. (author)

  15. A comparison of large scale changes in surface humidity over land in observations and CMIP3 general circulation models

    International Nuclear Information System (INIS)

    Willett, Katharine M; Thorne, Peter W; Jones, Philip D; Gillett, Nathan P

    2010-01-01

    Observed changes in the HadCRUH global land surface specific humidity and CRUTEM3 surface temperature from 1973 to 1999 are compared to CMIP3 archive climate model simulations with 20th Century forcings. Observed humidity increases are proportionately largest in the Northern Hemisphere, especially in winter. At the largest spatio-temporal scales moistening is close to the Clausius-Clapeyron scaling of the saturated specific humidity (∼7% K⁻¹). At smaller scales in water-limited regions, changes in specific humidity are strongly inversely correlated with total changes in temperature. Conversely, in some regions increases are faster than implied by the Clausius-Clapeyron relation. The range of climate model specific humidity seasonal climatology and variance encompasses the observations. The models also reproduce the magnitude of observed interannual variance over all large regions. Observed and modelled trends and temperature-humidity relationships are comparable except for the extratropical Southern Hemisphere where observations exhibit no trend but models exhibit moistening. This may arise from: long-term biases remaining in the observations; the relative paucity of observational coverage; or common model errors. The overall degree of consistency of anthropogenically forced models with the observations is further evidence for anthropogenic influence on the climate of the late 20th century.
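
    The quoted ∼7% K⁻¹ figure follows directly from the Clausius-Clapeyron relation; a worked line with typical assumed values (L_v ≈ 2.5×10⁶ J kg⁻¹, R_v ≈ 461.5 J kg⁻¹ K⁻¹, T ≈ 288 K):

        \frac{1}{q_s}\frac{\mathrm{d}q_s}{\mathrm{d}T}
          \approx \frac{L_v}{R_v T^2}
          = \frac{2.5\times 10^{6}}{461.5 \times 288^{2}}
          \approx 0.065\ \mathrm{K^{-1}} \approx 7\%\ \mathrm{K^{-1}}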

  16. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of large-scale magnetic field. For this purpose, we perform nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to the energy transfers from the velocity field at the forcing scales.

  17. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  18. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.

  19. Simple concentration-dependent pair interaction model for large-scale simulations of Fe-Cr alloys

    International Nuclear Information System (INIS)

    Levesque, Maximilien; Martinez, Enrique; Fu, Chu-Chun; Nastar, Maylise; Soisson, Frederic

    2011-01-01

    This work is motivated by the need for large-scale simulations to extract physical information on the iron-chromium system, a binary model alloy for ferritic steels used or proposed in many nuclear applications. From first-principles calculations and the experimental critical temperature we build a new energetic rigid-lattice model based on pair interactions with concentration and temperature dependence. Density functional theory calculations in both norm-conserving and projector augmented-wave approaches have been performed. A thorough comparison of these two different ab initio techniques leads to a robust parametrization of the Fe-Cr Hamiltonian. Mean-field approximations and Monte Carlo calculations are then used to account for temperature effects. The predictions of the model are in agreement with the most recent phase diagram at all temperatures and compositions. The solubility of Cr in Fe below 700 K remains in the range of about 6 to 12%. It reproduces the transition between the ordering and demixing tendency and the spinodal decomposition limits are also in agreement with the values given in the literature.
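
    To make the rigid-lattice Monte Carlo concrete, below is a minimal sketch reduced to a 2-D binary alloy with a single constant pair interaction and composition-conserving (Kawasaki) swaps. The paper's concentration- and temperature-dependent interactions and 3-D lattice are omitted, and every number here is illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        L, x_cr, V, kT = 64, 0.10, 0.02, 0.05            # lattice size, Cr fraction, eV, eV
        spins = (rng.random((L, L)) < x_cr).astype(int)  # 1 = Cr, 0 = Fe

        def site_energy(s, i, j):
            # Nearest-neighbour Cr-Cr pair energy touching site (i, j); V > 0
            # penalizes Cr clustering (neighbour-swap bookkeeping simplified).
            nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
            return V * s[i, j] * nn

        for _ in range(200_000):                         # Metropolis with Kawasaki swaps
            i1, j1, i2, j2 = rng.integers(0, L, 4)
            if spins[i1, j1] == spins[i2, j2]:
                continue
            e0 = site_energy(spins, i1, j1) + site_energy(spins, i2, j2)
            spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]
            e1 = site_energy(spins, i1, j1) + site_energy(spins, i2, j2)
            if e1 > e0 and rng.random() >= np.exp(-(e1 - e0) / kT):
                spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]  # reject

        print("Cr-Cr vertical neighbour pairs:",
              int(np.sum(spins * np.roll(spins, 1, axis=0))))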

  20. Two-dimensional divertor modeling and scaling laws

    International Nuclear Information System (INIS)

    Catto, P.J.; Connor, J.W.; Knoll, D.A.

    1996-01-01

    Two-dimensional numerical models of divertors contain large numbers of dimensionless parameters that must be varied to investigate all operating regimes of interest. To simplify the task and gain insight into divertor operation, we employ similarity techniques to investigate whether model systems of equations plus boundary conditions in the steady state admit scaling transformations that lead to useful divertor similarity scaling laws. A short mean free path neutral-plasma model of the divertor region below the x-point is adopted in which all perpendicular transport is due to the neutrals. We illustrate how the results can be used to benchmark large computer simulations by employing a modified version of UEDGE which contains a neutral fluid model. (orig.)

  1. Analysis for preliminary evaluation of discrete fracture flow and large-scale permeability in sedimentary rocks

    International Nuclear Information System (INIS)

    Kanehiro, B.Y.; Lai, C.H.; Stow, S.H.

    1987-05-01

    Conceptual models for sedimentary rock settings that could be used in future evaluation and suitability studies are being examined through the DOE Repository Technology Program. One area of concern for the hydrologic aspects of these models is discrete fracture flow analysis as related to the estimation of the size of the representative elementary volume, evaluation of the appropriateness of continuum assumptions and estimation of the large-scale permeabilities of sedimentary rocks. A basis for preliminary analysis of flow in fracture systems of the types that might be expected to occur in low permeability sedimentary rocks is presented. The approach used involves numerical modeling of discrete fracture flow for the configuration of a large-scale hydrologic field test directed at estimation of the size of the representative elementary volume and large-scale permeability. Analysis of fracture data on the basis of this configuration is expected to provide a preliminary indication of the scale at which continuum assumptions can be made

  2. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    Full Text Available In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and physics in general. There are several well defined approaches to solve this problem. One of them is an assumption concerning the existence of dark energy in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and the speed of light (among others) provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, here two groups of cosmological models are discussed. In the first group the problem of the accelerated expansion of the large-scale universe is discussed involving a new idea, named varying ghost dark energy. The second group contains cosmological models addressing the same problem involving either new parameterizations of the equation-of-state parameter of dark energy (like the varying polytropic gas), or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is due to general relativity) is demonstrated as well. Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  3. Parallelizing Gene Expression Programming Algorithm in Enabling Large-Scale Classification

    Directory of Open Access Journals (Sweden)

    Lixiong Xu

    2017-01-01

    Full Text Available As one of the most effective function mining algorithms, the Gene Expression Programming (GEP) algorithm has been widely used in classification, pattern recognition, prediction, and other research fields. Through self-evolution, GEP is able to mine an optimal function for dealing with further complicated tasks. However, in big data research, GEP encounters a low-efficiency issue due to its long mining processes. To improve the efficiency of GEP in big data research, especially for processing large-scale classification tasks, this paper presents a parallelized GEP algorithm using the MapReduce computing model. The experimental results show that the presented algorithm is scalable and efficient for processing large-scale classification tasks.
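
    The parallelization the record exploits is essentially a map step (score each evolved candidate against the data in parallel) followed by a reduce step (select the fittest). Below is a hedged Python sketch with multiprocessing standing in for a Hadoop-style MapReduce runtime; the coefficient-vector candidates and quadratic target are illustrative simplifications of real GEP expression trees:

        import math
        import random
        from multiprocessing import Pool

        def fitness(coeffs, data):
            # Map step: one worker scores one candidate expression (here a
            # polynomial with the given coefficients) against the data set.
            err = sum((sum(c * x ** i for i, c in enumerate(coeffs)) - y) ** 2
                      for x, y in data)
            return math.sqrt(err / len(data))

        if __name__ == "__main__":
            random.seed(0)
            data = [(x, 3 * x * x + 2 * x + 1) for x in range(100)]
            population = [[random.uniform(-4, 4) for _ in range(3)] for _ in range(32)]
            with Pool() as pool:
                scores = pool.starmap(fitness, [(ind, data) for ind in population])
            # Reduce step: gather the per-candidate scores and keep the best.
            best_rmse, best = min(zip(scores, population), key=lambda t: t[0])
            print(f"best RMSE {best_rmse:.3f} with coefficients {best}")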

  4. The Transition to Large-scale Cosmic Homogeneity in the WiggleZ Dark Energy Survey

    Science.gov (United States)

    Scrimgeour, Morag; Davis, T.; Blake, C.; James, B.; Poole, G. B.; Staveley-Smith, L.; Dark Energy Survey, WiggleZ

    2013-01-01

    The most fundamental assumption of the standard cosmological model (ΛCDM) is that the universe is homogeneous on large scales. This is clearly not true on small scales, where clusters and voids exist, and some studies seem to suggest that galaxies follow a fractal distribution up to very large scales (200 h⁻¹ Mpc or more), whereas the ΛCDM model predicts a transition to homogeneity at scales of ~100 h⁻¹ Mpc. Any cosmological measurements made below the scale of homogeneity (such as the power spectrum) could be misleading, so it is crucial to measure the scale of homogeneity in the Universe. We have used the WiggleZ Dark Energy Survey to make the largest volume measurement to date of the transition to homogeneity in the galaxy distribution. WiggleZ is a UV-selected spectroscopic survey of ~200,000 luminous blue galaxies up to z=1, made with the Anglo-Australian Telescope. We have corrected for survey incompleteness using random catalogues that account for the various survey selection criteria, and tested the robustness of our results using a suite of fractal mock catalogues. The large volume and depth of WiggleZ allows us to probe the transition of the galaxy distribution to homogeneity on large scales and over several epochs, and see if this is consistent with a ΛCDM prediction.
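
    The underlying measurement is a counts-in-spheres scaling test: the mean number of galaxies N(<r) within radius r of a galaxy scales as r^D2, and D2 approaches 3 once the distribution is homogeneous. A toy sketch on a homogeneous Poisson mock follows; everything here is illustrative, and the survey analysis additionally corrects for selection and incompleteness:

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(1)
        box, n = 200.0, 200_000                        # toy volume (Mpc/h), points
        pts = rng.uniform(0.0, box, size=(n, 3))       # homogeneous Poisson mock
        tree = cKDTree(pts)

        radii = np.logspace(np.log10(5.0), np.log10(40.0), 8)
        inner = pts[np.all((pts > radii[-1]) & (pts < box - radii[-1]), axis=1)]
        centers = inner[rng.choice(len(inner), 500, replace=False)]  # avoid edges

        mean_counts = np.array([
            tree.query_ball_point(centers, r, return_length=True).mean()
            for r in radii])
        D2 = np.gradient(np.log(mean_counts), np.log(radii))  # d ln N / d ln r
        for r, d2 in zip(radii, D2):
            print(f"r = {r:5.1f} Mpc/h   D2 = {d2:4.2f}")     # ~3 when homogeneous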

  5. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto; Watson, James R.; Jönsson, Bror; Gasol, Josep M.; Salazar, Guillem; Acinas, Silvia G.; Estrada, Marta; Massana, Ramón; Logares, Ramiro; Giner, Caterina R.; Pernice, Massimo C.; Olivar, M. Pilar; Citores, Leire; Corell, Jon; Rodríguez-Ezpeleta, Naiara; Acuña, José Luis; Molina-Ramírez, Axayacatl; González-Gordillo, J. Ignacio; Cózar, Andrés; Martí, Elisa; Cuesta, José A.; Agusti, Susana; Fraile-Nuez, Eugenio; Duarte, Carlos M.; Irigoien, Xabier; Chust, Guillem

    2018-01-01

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  6. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto

    2018-01-04

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  7. Large-scale bioenergy production: how to resolve sustainability trade-offs?

    Science.gov (United States)

    Humpenöder, Florian; Popp, Alexander; Bodirsky, Benjamin Leon; Weindl, Isabelle; Biewald, Anne; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Klein, David; Kreidenweis, Ulrich; Müller, Christoph; Rolinski, Susanne; Stevanovic, Miodrag

    2018-02-01

    Large-scale 2nd generation bioenergy deployment is a key element of 1.5 °C and 2 °C transformation pathways. However, large-scale bioenergy production might have negative sustainability implications and thus may conflict with the Sustainable Development Goal (SDG) agenda. Here, we carry out a multi-criteria sustainability assessment of large-scale bioenergy crop production throughout the 21st century (300 EJ in 2100) using a global land-use model. Our analysis indicates that large-scale bioenergy production without complementary measures results in negative effects on the following sustainability indicators: deforestation, CO2 emissions from land-use change, nitrogen losses, unsustainable water withdrawals and food prices. One of our main findings is that single-sector environmental protection measures next to large-scale bioenergy production are prone to involve trade-offs among these sustainability indicators—at least in the absence of more efficient land or water resource use. For instance, if bioenergy production is accompanied by forest protection, deforestation and associated emissions (SDGs 13 and 15) decline substantially whereas food prices (SDG 2) increase. However, our study also shows that this trade-off strongly depends on the development of future food demand. In contrast to environmental protection measures, we find that agricultural intensification lowers some side-effects of bioenergy production substantially (SDGs 13 and 15) without generating new trade-offs—at least among the sustainability indicators considered here. Moreover, our results indicate that a combination of forest and water protection schemes, improved fertilization efficiency, and agricultural intensification would reduce the side-effects of bioenergy production most comprehensively. However, although our study includes more sustainability indicators than previous studies on bioenergy side-effects, our study represents only a small subset of all indicators relevant for the

  8. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network comprising multi-domain PKI infrastructures.

  9. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: ► We implement photovoltaics on a large scale. ► We use three-dimensional modelling for accurate photovoltaic simulations. ► We consider the shadowing effect in the photovoltaic simulation. ► We validate the simulated results using detailed hourly measured data. - Abstract: For a large scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building users' electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart/Germany was used to calculate the potential of photovoltaic energy and to evaluate the local on-site consumption of the energy produced. For most buildings of the district only annual electrical consumption data was available and only selected buildings have electronic metering equipment. The available roof area for one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the quarter's total electricity consumption and half of this generated electricity is directly used within the buildings.

  10. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  11. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  12. Large-scale demonstration of waste solidification in saltstone

    International Nuclear Information System (INIS)

    McIntyre, P.F.; Oblath, S.B.; Wilhite, E.L.

    1988-05-01

    The saltstone lysimeters are a large-scale demonstration of a disposal concept for decontaminated salt solution resulting from in-tank processing of defense waste. The lysimeter experiment has provided data on the leaching behavior of large saltstone monoliths under realistic field conditions. The results will also be used to compare the effect of capping the wasteform on contaminant release. Biweekly monitoring of sump leachate from three lysimeters has continued on a routine basis for approximately 3 years. An uncapped lysimeter has shown the highest levels of nitrate and ⁹⁹Tc release. Gravel- and clay-capped lysimeters have shown levels equivalent to or slightly higher than background rainwater levels. Mathematical model predictions have been compared to lysimeter results. The models will be applied to predict the impact of saltstone disposal on groundwater quality. 9 refs., 5 figs., 3 tabs

  13. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...

  14. Large scale modulation of high frequency acoustic waves in periodic porous media.

    Science.gov (United States)

    Boutin, Claude; Rallu, Antoine; Hans, Stephane

    2012-12-01

    This paper deals with the description of the modulation at large scale of high frequency acoustic waves in gas-saturated periodic porous media. High frequencies mean local dynamics at the pore scale and therefore absence of scale separation in the usual sense of homogenization. However, although the pressure is spatially varying in the pores (according to periodic eigenmodes), the mode amplitude can present a large scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In a second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the amplitude modulation at large scale of high frequency waves. The significant differences between modulations of simple and multiple modes are evidenced and discussed. The features of the modulation (anisotropy, width of frequency band) are also analyzed.
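
    For orientation, the local resonance around which such a large-scale modulation is expanded is, for a single resonator, the classical Helmholtz eigenfrequency (neck cross-section A, effective neck length L_eff, cavity volume V, sound speed c); this standard formula is added here for context only:

        f_0 = \frac{c}{2\pi}\sqrt{\frac{A}{V\,L_{\mathrm{eff}}}}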

  15. Modelling the resilience of rail passenger transport networks affected by large-scale disruptive events : the case of HSR (high speed rail)

    NARCIS (Netherlands)

    Janic, M.

    2018-01-01

    This paper deals with modelling the dynamic resilience of rail passenger transport networks affected by large-scale disruptive events whose impacts deteriorate the networks’ planned infrastructural, operational, economic, and social-economic performances represented by the selected indicators.

  16. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balances

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-08-01

    Satellite-based data, such as vegetation type and fractional vegetation cover, are widely used in hydrologic models to prescribe the vegetation state in a study region. Dynamic global vegetation models (DGVMs) simulate land surface hydrology. Incorporation of satellite-based data into a DGVM may enhance a model's ability to simulate land surface hydrology by reducing the task of model parameterization and providing distributed information on land characteristics. The objectives of this study are to (i) modify a DGVM for simulating land surface water balances; (ii) evaluate the modified model in simulating actual evapotranspiration (ET), soil moisture, and surface runoff at regional or watershed scales; and (iii) gain insight into the ability of both the original and modified model to simulate large spatial scale land surface hydrology. To achieve these objectives, we introduce the "LPJ-hydrology" (LH) model, which incorporates satellite-based data into the Lund-Potsdam-Jena (LPJ) DGVM. To evaluate the model we ran LH using historical (1981-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells for the conterminous US and for the entire world using coarser climate and land cover data. We evaluated the simulated ET, soil moisture, and surface runoff using a set of observed or simulated data at different spatial scales. Our results demonstrate that spatial patterns of LH-simulated annual ET and surface runoff are in accordance with previously published data for the US; LH-modeled monthly stream flow for 12 major rivers in the US was consistent with observed values during the years 1981-2006 (R2 > 0.46). The modeled mean annual discharges for 10 major rivers worldwide also agreed well. Compared with the degree-day method for snowmelt computation, the addition of the solar radiation effect on snowmelt enabled LH to better simulate monthly stream flow in winter and early spring for rivers located at mid-to-high latitudes. In addition, LH-modeled
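
    For a feel for what such a model solves per grid cell, here is a minimal single-cell bucket water balance (storage S, precipitation P, evapotranspiration ET, runoff R). The functional forms and parameters are illustrative placeholders, not LPJ or LH code:

        def step(S, P, PET, S_max=150.0, beta=2.0):
            """One monthly water-balance step; all quantities in mm."""
            ET = PET * min(1.0, S / S_max)             # moisture-limited ET
            R = P * (S / S_max) ** beta                # saturation-excess runoff
            S = max(0.0, min(S_max, S + P - ET - R))   # update and clamp storage
            return S, ET, R

        S = 75.0
        for month, (P, PET) in enumerate([(80, 40), (60, 70), (20, 110), (90, 60)], 1):
            S, ET, R = step(S, P, PET)
            print(f"month {month}: S = {S:6.1f} mm  ET = {ET:5.1f}  R = {R:5.1f}")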

  17. The large scale microwave background anisotropy in decaying particle cosmology

    International Nuclear Information System (INIS)

    Panek, M.

    1987-06-01

    We investigate the large-scale anisotropy of the microwave background radiation in cosmological models with decaying particles. The observed value of the quadrupole moment combined with other constraints gives an upper limit on the redshift of the decay z_d < 3-5. 12 refs., 2 figs.

  18. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computation resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of various simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced the use of GPGPU (General purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally conducted by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the function for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as the preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in two horizontal directions, and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grids and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs. Next, we examined a weak scaling test where the model sizes (number of grids) are increased in proportion to the degree of parallelism (number of GPUs). The results showed almost perfect linearity up to a simulation with 22 billion grids using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
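
    The strong-scaling figures quoted above translate directly into parallel efficiencies; a trivial check using only the numbers from the record:

        speedups = {4: 3.2, 16: 7.3}                   # GPUs -> speed-up over 1 GPU
        for gpus, s in speedups.items():
            print(f"{gpus:3d} GPUs: speed-up {s:.1f}x, efficiency {s / gpus:.0%}")
        # 80% at 4 GPUs and ~46% at 16 GPUs: the usual strong-scaling falloff as
        # per-GPU work shrinks relative to halo-exchange communication.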

  19. Coordinated reset stimulation in a large-scale model of the STN-GPe circuit

    Directory of Open Access Journals (Sweden)

    Martin eEbert

    2014-11-01

    Full Text Available Synchronization of populations of neurons is a hallmark of several brain diseases. Coordinated reset (CR) stimulation is a model-based stimulation technique which specifically counteracts abnormal synchrony by desynchronization. Electrical CR stimulation, e.g. for the treatment of Parkinson's disease (PD), is administered via depth electrodes. In order to get a deeper understanding of this technique, we extended the top-down approach of previous studies and constructed a large-scale computational model of the respective brain areas. Furthermore, we took into account the spatial anatomical properties of the simulated brain structures and incorporated a detailed numerical representation of 2·10⁴ simulated neurons. We simulated the subthalamic nucleus (STN) and the globus pallidus externus (GPe). Connections within the STN were governed by spike-timing dependent plasticity (STDP). In this way, we modeled the physiological and pathological activity of the considered brain structures. In particular, we investigated how plasticity could be exploited and how the model could be shifted from strongly synchronized (pathological) activity to strongly desynchronized (healthy) activity of the neuronal populations via CR stimulation of the STN neurons. Furthermore, we investigated the impact of specific stimulation parameters, especially the electrode position, on the stimulation outcome. Our model provides a step forward towards a biophysically realistic model of the brain areas relevant to the emergence of pathological neuronal activity in PD. Furthermore, our model constitutes a test bench for the optimization of both stimulation parameters and novel electrode geometries for efficient CR stimulation.
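
    The timing pattern behind CR stimulation is simple to state: the M stimulation contacts deliver bursts phase-shifted by T/M within each cycle of period T, with the contact order re-randomized from cycle to cycle. A sketch of that schedule follows; M, T, and the cycle count are illustrative, not the paper's values:

        import random

        M, T, cycles = 4, 0.060, 3         # contacts, CR period (s), cycles shown
        random.seed(0)
        t = 0.0
        for _ in range(cycles):
            order = random.sample(range(M), M)          # shuffled contact sequence
            for k, contact in enumerate(order):
                print(f"t = {t + k * T / M:.3f} s  ->  burst on contact {contact}")
            t += T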

  20. Models of large-scale magnetic fields in stellar interiors. Application to solar and ap stars

    International Nuclear Information System (INIS)

    Duez, Vincent

    2009-01-01

    Stellar astrophysics today needs new models of large-scale magnetic fields, which are observed through spectropolarimetry at the surface of Ap/Bp stars and are thought to explain the uniform rotation of the solar radiation zone deduced from helioseismic inversions. During my PhD, I focused on describing the possible magnetic equilibria in stellar interiors. The configurations found are mixed poloidal-toroidal and minimize the energy for a given helicity, in analogy with the Taylor states encountered in spheromaks. Taking self-gravity into account leads to the 'non force-free' family of equilibria, which will thus influence the stellar structure. I derived all the physical quantities associated with the magnetic field; I then evaluated the perturbations they induce on gravity and on thermodynamic and energetic quantities, for a solar model and for an Ap star. 3D MHD simulations allowed me to show that these equilibria form a first family of stable states, the generalization of such states remaining an open question. It has been shown that a large-scale magnetic field confined in the solar radiation zone can induce an oblateness comparable to that of a high core rotation law. I also studied the secular interaction between the magnetic field, differential rotation and meridional circulation, with the aim of implementing their effects in a next-generation stellar evolution code. The influence of magnetism on convection has also been studied. Finally, hydrodynamic processes responsible for mixing have been compared with diffusion and a change in the efficiency of convection in the case of a CoRoT target star. (author) [fr]

  1. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  2. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  3. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel.

    Science.gov (United States)

    Yuan, Liming; Smith, Alex C

    2015-05-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect.

  4. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel

    Science.gov (United States)

    Yuan, Liming; Smith, Alex C.

    2015-01-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect. PMID:26190905

  5. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
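
    As a concrete instance of the SVD-based family surveyed above, the sketch below reduces a linear toy system by projecting its dynamics onto the leading left singular vectors of a snapshot matrix (proper orthogonal decomposition). The system matrix and dimensions are invented; real biochemical networks are nonlinear and call for the more elaborate variants the review covers.

    ```python
    # SVD-based reduction (proper orthogonal decomposition) of a linear toy
    # system dx/dt = A x: simulate, collect snapshots, keep leading singular
    # vectors, project the dynamics. Dimensions and A are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 50, 5                                         # full and reduced dimensions
    A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable toy system
    x0 = rng.random(n)

    dt, steps = 0.01, 500
    X = np.empty((n, steps))                             # snapshot matrix
    x = x0.copy()
    for k in range(steps):
        x = x + dt * (A @ x)                             # explicit Euler, full model
        X[:, k] = x

    U, s, _ = np.linalg.svd(X, full_matrices=False)
    V = U[:, :r]                                         # POD basis
    Ar = V.T @ A @ V                                     # reduced r-by-r dynamics

    z = V.T @ x0                                         # re-run in reduced coordinates
    for k in range(steps):
        z = z + dt * (Ar @ z)
    print("relative error at t=5:", np.linalg.norm(V @ z - x) / np.linalg.norm(x))
    ```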

  6. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo…

  7. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32: The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987. Approved for public release. …arises, to reduce the spread in the LSGT 50% gap value.) The worst charges, such as those with the highest or lowest densities, the largest re-pressed…

  8. A large-scale soil-structure interaction experiment: Design and construction

    International Nuclear Information System (INIS)

    Tang, H.T.; Tang, Y.K.; Stepp, J.C.; Wall, I.B.; Lin, E.; Cheng, S.C.; Lee, S.K.

    1989-01-01

    This paper describes the design and construction phase of the Large-Scale Soil-Structure Interaction Experiment project jointly sponsored by EPRI and Taipower. The project has two objectives: 1. to obtain an earthquake database which can be used to substantiate soil-structure interaction (SSI) models and analysis methods; and 2. to quantify nuclear power plant reactor containment and internal components seismic margin based on earthquake experience data. These objectives were accomplished by recording and analyzing data from two instrumented, scaled down, reinforced concrete containment structures during seismic events. The two model structures are sited in a high seismic region in Taiwan (SMART-1). A strong-motion seismic array network is located at the site. The containment models (1/4- and 1/12-scale) were constructed and instrumented specially for this experiment. Construction was completed and data recording began in September 1985. By November 1986, 18 strong motion earthquakes ranging from Richter magnitude 4.5 to 7.0 were recorded. (orig./HP)

  9. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄).
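
    The O(1) claim is easy to see in code: once the density matrix is available, extracting orbitals for a region touches only the rows and columns of that region's basis functions, independent of the total system size. Below is an illustrative NumPy sketch with a synthetic density matrix standing in for the solver output; it shows only the truncate-and-diagonalize step, not the full NBO construction of the paper.

    ```python
    # The O(1) locality of the analysis: extracting natural orbitals for a
    # region touches only that region's block of the density matrix,
    # independent of system size. Synthetic density matrix stands in for the
    # O(N) solver's output.
    import numpy as np

    rng = np.random.default_rng(2)
    n_total = 400                                   # basis functions, whole system
    region = np.arange(20)                          # basis functions, local region

    # Symmetric "density matrix" with eigenvalues in [0, 2] (spin-summed).
    Q = np.linalg.qr(rng.standard_normal((n_total, n_total)))[0]
    P = Q @ np.diag(2.0 * rng.random(n_total)) @ Q.T

    P_local = P[np.ix_(region, region)]             # cost depends on region size only
    occ, orbs = np.linalg.eigh(P_local)             # local natural orbitals
    print("top local occupations:", np.round(occ[::-1][:5], 3))
    ```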

  10. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄).

  11. Solution approach for a large scale personnel transport system for a large company in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-07-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP model considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the size of the instances was increased, the performance of the heuristic method was assessed against the results of an exact algorithm, the two remaining very close. When the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by the validation on smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.
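
    The heuristic-versus-exact validation strategy described here can be illustrated in a few lines. The sketch below uses a single-route simplification (a plain travelling-salesman tour rather than a capacitated VRP with transit times), solves a small instance exactly by enumeration, and reports the optimality gap of a nearest-neighbour construction heuristic, the same kind of gap the authors use to extrapolate trust to the full-scale instance. Points and depot are synthetic.

    ```python
    # Heuristic-vs-exact validation on a toy single-route instance (a plain TSP
    # rather than a capacitated VRP): nearest-neighbour construction checked
    # against brute-force enumeration. All points are synthetic.
    import itertools
    import numpy as np

    rng = np.random.default_rng(3)
    n = 8                                           # small enough to enumerate exactly
    pts = rng.random((n, 2))
    depot = np.array([0.5, 0.5])

    def tour_length(order):
        path = [depot] + [pts[i] for i in order] + [depot]
        return sum(np.linalg.norm(path[i + 1] - path[i]) for i in range(len(path) - 1))

    left, cur, route = set(range(n)), depot, []     # nearest-neighbour heuristic
    while left:
        nxt = min(left, key=lambda i: np.linalg.norm(pts[i] - cur))
        route.append(nxt); left.remove(nxt); cur = pts[nxt]

    best = min(itertools.permutations(range(n)), key=tour_length)   # exact
    gap = tour_length(route) / tour_length(best) - 1.0
    print(f"heuristic gap vs exact: {100 * gap:.1f}%")
    ```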

  12. Solution approach for a large scale personnel transport system for a large company in Latin America

    International Nuclear Information System (INIS)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-01-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP model considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the size of the instances was increased, the performance of the heuristic method was assessed against the results of an exact algorithm, the two remaining very close. When the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by the validation on smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  13. Solution approach for a large scale personnel transport system for a large company in Latin America

    Directory of Open Access Journals (Sweden)

    Eduardo-Arturo Garzón-Garnica

    2017-10-01

    Full Text Available Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP model considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the size of the instances was increased, the performance of the heuristic method was assessed against the results of an exact algorithm, the two remaining very close. When the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by the validation on smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  14. Large scale atmospheric tropical circulation changes and consequences during global warming

    International Nuclear Information System (INIS)

    Gastineau, G.

    2008-01-01

    Changes in the tropical large-scale circulation during climate change can have large impacts on human activities. In the first part, the meridional atmospheric tropical circulation was studied in the different coupled models. Under climate change we find, on the one hand, that the Hadley meridional circulation and the subtropical jet shift significantly poleward and, on the other hand, that the intensity of the tropical circulation weakens. The slowdown of the atmospheric circulation results from the changes in dry static stability affecting the tropical troposphere. In the second part, idealized simulations are used to explain the tropical circulation changes. Ensemble simulations using the model LMDZ4 are set up to study the results of the coupled model IPSLCM4. The weakening of the large-scale tropical circulation and the poleward shift of the Hadley cells are explained by both the uniform change and the meridional gradient change of the sea surface temperature. Then, we used the atmospheric model LMDZ4 in an aqua-planet configuration. The Hadley circulation changes are explained in a simple framework by the required poleward energy transport. In the last part, we focus on the water vapour distribution and feedback in the climate models. The Hadley circulation changes were shown to have a significant impact on the water vapour feedback during climate change. (author)

  15. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  16. The Saskatchewan River Basin - a large scale observatory for water security research (Invited)

    Science.gov (United States)

    Wheater, H. S.

    2013-12-01

    The 336,000 km2 Saskatchewan River Basin (SaskRB) in Western Canada illustrates many of the issues of Water Security faced world-wide. It poses globally-important science challenges due to the diversity in its hydro-climate and ecological zones. With one of the world's more extreme climates, it embodies environments of global significance, including the Rocky Mountains (source of the major rivers in Western Canada), the Boreal Forest (representing 30% of Canada's land area) and the Prairies (home to 80% of Canada's agriculture). Management concerns include: provision of water resources to more than three million inhabitants, including indigenous communities; balancing competing needs for water between different uses, such as urban centres, industry, agriculture, hydropower and environmental flows; issues of water allocation between upstream and downstream users in the three prairie provinces; managing the risks of flood and droughts; and assessing water quality impacts of discharges from major cities and intensive agricultural production. Superimposed on these issues is the need to understand and manage uncertain water futures, including effects of economic growth and environmental change, in a highly fragmented water governance environment. Key science questions focus on understanding and predicting the effects of land and water management and environmental change on water quantity and quality. To address the science challenges, observational data are necessary across multiple scales. This requires focussed research at intensively monitored sites and small watersheds to improve process understanding and fine-scale models. To understand large-scale effects on river flows and quality, land-atmosphere feedbacks, and regional climate, integrated monitoring, modelling and analysis is needed at large basin scale. And to support water management, new tools are needed for operational management and scenario-based planning that can be implemented across multiple scales and

  17. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    …the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land 'grabbing' and historical large… …sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in 'formal' large-scale and 'informal' small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land… …commands a higher wage than 'formal' large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role…

  18. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The demand for liquid hydrogen, particularly driven by clean energy applications, will rise in the near future. As industrial large-scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership of these plants by increasing energy efficiency with innovative and simple process designs, optimized for capital expenditure. New concepts must ensure manageable plant complexity and flexible operability. In the phase of process development and selection, the dimensioning of key equipment for large-scale liquefiers, such as turbines, compressors and heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview of the approach, challenges and preliminary results in the development of efficient and economically viable concepts for large-scale hydrogen liquefaction.

  19. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  20. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach into a PD process model that (1) emphasizes PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporates improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extends initial design and development into a sustained and ongoing stepwise implementation that constitutes an overall technology-driven organizational change. The process model is presented through a large-scale PD experiment in the Danish healthcare sector. We reflect on our experiences from this experiment…

  1. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    Science.gov (United States)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.

  2. Model of large pool fires

    Energy Technology Data Exchange (ETDEWEB)

    Fay, J.A. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]. E-mail: jfay@mit.edu

    2006-08-21

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables.

  3. Model of large pool fires

    International Nuclear Information System (INIS)

    Fay, J.A.

    2006-01-01

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables

  4. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies and explore the role of large-scale climate processes in generating extreme floods in such regions by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convective activity. For individual sites we investigate the probability with which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large-scale).
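
    The embed-then-cluster step described here is straightforward to prototype. The sketch below applies scikit-learn's unsupervised kernel PCA (the study uses a supervised variant) to synthetic stand-ins for pre-flood moisture-flux fields and clusters the embedded events; the data, kernel width and cluster count are all illustrative assumptions.

    ```python
    # Embed synthetic stand-ins for pre-flood moisture-flux fields with kernel
    # PCA (the study uses a supervised variant) and cluster the embedded events.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(4)
    n_events, n_gridpoints = 120, 500
    fields = rng.standard_normal((n_events, n_gridpoints))   # stand-in flux fields

    embedding = KernelPCA(n_components=3, kernel="rbf", gamma=1e-3).fit_transform(fields)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
    print("events per circulation cluster:", np.bincount(labels))
    ```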

  5. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  6. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and the computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from the intensive computational requirements of detailed modeling investigations of real-world reservoirs. This paper presents the application of a massively parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance.

  7. Quasi-potential and Two-Scale Large Deviation Theory for Gillespie Dynamics

    KAUST Repository

    Li, Tiejun; Li, Fangting; Li, Xianggang; Lu, Cheng

    2016-01-01

    …theory for Gillespie-type jump dynamics. In the application to a typical genetic switching model, the two-scale large deviation theory is developed to take into account the fast switching of DNA states. The comparison with other proposals is also…
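
    For orientation on the process class involved: a Gillespie simulation advances a jump process one reaction at a time, drawing exponential waiting times from the total propensity and picking the next reaction in proportion to its rate. The sketch below simulates a minimal gene whose fast-switching DNA state modulates protein production, the structure the two-scale theory is built to handle; all rates are illustrative.

    ```python
    # Minimal Gillespie simulation of a gene whose DNA state switches between
    # an inactive (d=0) and active (d=1) configuration, modulating protein
    # synthesis. All rates are illustrative.
    import numpy as np

    rng = np.random.default_rng(5)
    d, p, t, t_end = 0, 10, 0.0, 1000.0     # DNA state, protein count, clock, horizon
    k_on, k_off = 0.05, 0.05                # DNA switching rates
    k_prod = {0: 1.0, 1: 8.0}               # synthesis rate per DNA state
    k_deg = 0.05                            # degradation rate per molecule

    levels = []
    while t < t_end:
        rates = np.array([k_on if d == 0 else k_off,   # flip DNA state
                          k_prod[d],                   # produce one protein
                          k_deg * p])                  # degrade one protein
        total = rates.sum()
        t += rng.exponential(1.0 / total)              # waiting time to next jump
        r = rng.random() * total                       # pick which reaction fires
        if r < rates[0]:
            d = 1 - d
        elif r < rates[0] + rates[1]:
            p += 1
        else:
            p -= 1
        levels.append(p)

    print("mean protein level (2nd half):",
          round(float(np.mean(levels[len(levels) // 2:])), 1))
    ```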

  8. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  9. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  10. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  11. Decentralized State-Observer-Based Traffic Density Estimation of Large-Scale Urban Freeway Network by Dynamic Model

    Directory of Open Access Journals (Sweden)

    Yuqi Guo

    2017-08-01

    Full Text Available In order to estimate traffic densities in a large-scale urban freeway network in an accurate and timely fashion when traffic sensors do not cover the freeway network completely and thus only local measurement data can be utilized, this paper proposes a decentralized state observer approach based on a macroscopic traffic flow model. Firstly, using the well-known cell transmission model (CTM), the urban freeway network is modeled as a distributed system. Secondly, based on the model, a decentralized observer is designed. With the help of the Lyapunov function and S-procedure theory, the observer gains are computed using the linear matrix inequality (LMI) technique. In this way, the traffic densities of the whole road network can be estimated by the designed observer. Finally, the method is applied to the outer ring of Beijing’s Second Ring Road, and experimental results demonstrate the effectiveness and applicability of the proposed approach.
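
    The Lyapunov/LMI recipe invoked here has a compact generic form: with dynamics x' = Ax and measurements y = Cx, find P > 0 and Y = PL satisfying the matrix inequality A'P + PA - C'Y' - YC < 0; then L = inv(P)Y makes the observer error dynamics (A - LC) stable. The sketch below solves this with cvxpy (assuming an SDP-capable solver such as the bundled SCS) for a continuous-time three-cell toy model, not the paper's discrete-time CTM network.

    ```python
    # Generic LMI observer design: find P > 0 and Y = P L with
    # A'P + PA - C'Y' - YC < 0, then L = inv(P) Y stabilizes (A - LC).
    # Toy three-cell continuous-time model, only the last cell measured.
    import cvxpy as cp
    import numpy as np

    A = np.array([[-0.6, 0.0, 0.0],
                  [ 0.6, -0.5, 0.0],
                  [ 0.0, 0.5, -0.4]])
    C = np.array([[0.0, 0.0, 1.0]])
    n = A.shape[0]

    P = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((n, 1))
    S = A.T @ P + P @ A - C.T @ Y.T - Y @ C
    eps = 1e-3
    constraints = [P >> eps * np.eye(n),
                   0.5 * (S + S.T) << -eps * np.eye(n)]   # symmetrized for the solver
    cp.Problem(cp.Minimize(0), constraints).solve()

    L = np.linalg.solve(P.value, Y.value)                 # observer gain
    print("observer poles:", np.linalg.eigvals(A - L @ C))
    ```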

  12. Hierarchical formation of large scale structures of the Universe: observations and models

    International Nuclear Information System (INIS)

    Maurogordato, Sophie

    2003-01-01

    In this report for an Accreditation to Supervise Research (HDR), the author proposes an overview of her research works in cosmology. These works notably addressed the large scale distribution of the Universe (with constraints on the scenario of formation, and on the bias relationship, and the structuring of clusters), the analysis of galaxy clusters during coalescence, mass distribution within relaxed clusters [fr

  13. Mapping the distribution of the denitrifier community at large scales (Invited)

    Science.gov (United States)

    Philippot, L.; Bru, D.; Ramette, A.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.

    2010-12-01

    Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31,500 km2 region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 740 km and used geostatistical modeling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scales can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.
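
    The geostatistical core of such a study is the empirical semivariogram: half the mean squared difference of the variable between site pairs, binned by separation distance, from which the autocorrelation range (reported here as up to 740 km) and the kriging weights behind the predictive maps are derived. A minimal NumPy sketch on synthetic site data:

    ```python
    # Empirical semivariogram: half the mean squared difference of log
    # abundance between site pairs, binned by separation distance. Synthetic
    # coordinates and abundances; the real study has 107 sites on a grid.
    import numpy as np

    rng = np.random.default_rng(6)
    n_sites = 107
    xy = rng.random((n_sites, 2)) * 300.0           # site coordinates (km)
    z = rng.normal(6.0, 0.5, n_sites)               # log10 gene copies per g soil

    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    g = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(n_sites, k=1)              # each pair once
    d, g = d[iu], g[iu]

    bins = np.linspace(0.0, d.max(), 12)
    idx = np.digitize(d, bins)
    for b in range(1, len(bins)):
        mask = idx == b
        if mask.any():
            print(f"lag ~{bins[b]:6.1f} km  gamma = {g[mask].mean():.4f}")
    ```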

  14. Modeling of 3D Aluminum Polycrystals during Large Deformations

    International Nuclear Information System (INIS)

    Maniatty, Antoinette M.; Littlewood, David J.; Lu Jing; Pyle, Devin

    2007-01-01

    An approach for generating, meshing, and modeling 3D polycrystals, with a focus on aluminum alloys, subjected to large deformation processes is presented. A Potts-type model is used to generate statistically representative grain structures with periodicity to allow scale-linking. The grain structures are compared to experimentally observed grain structures to validate that they are representative. A procedure for generating a geometric model from the voxel data is developed, allowing for adaptive meshing of the generated grain structure. Material behavior is governed by an appropriate crystal elasto-viscoplastic constitutive model. The elasto-viscoplastic model is implemented in a three-dimensional, finite deformation, mixed finite element program. In order to handle the large-scale problems of interest, a parallel implementation is utilized. A multiscale procedure is used to link larger-scale models of deformation processes to the polycrystal model, where periodic boundary conditions on the fluctuation field are enforced. Finite element models of 3D polycrystal grain structures are presented along with observations made from these simulations.

  15. What causes differences between national estimates of forest management carbon emissions and removals compared to estimates of large-scale models?

    NARCIS (Netherlands)

    Groen, T.A.; Verkerk, P.J.; Böttcher, H.; Grassi, G.; Cienciala, E.; Black, K.G.; Fortin, M.; Köthke, M.; Lehtonen, A.; Nabuurs, G.J; Petrova, L.; Blujdea, V.

    2013-01-01

    Under the United Nations Framework Convention for Climate Change all Parties have to report on carbon emissions and removals from the forestry sector. Each Party can use its own approach and country specific data for this. Independently, large-scale models exist (e.g. EFISCEN and G4M as used in this

  16. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources, contributing to the running of advanced hydrological models and simulations. A web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
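
    The coordinator pattern implied here (split the simulation into small independent chunks, hand each chunk to whichever volunteer asks next, collect partial results as they return) is easy to sketch. The toy below uses Python threads to stand in for browser clients, whereas the platform described runs JavaScript over HTTP; the chunking and the 'model work' are illustrative.

    ```python
    # Coordinator pattern for volunteer computing: a queue of independent
    # chunks handed to whichever volunteer asks next, partial results collected
    # as they return. Threads stand in for browser clients.
    import queue
    import threading

    tasks = queue.Queue()
    results = {}
    lock = threading.Lock()

    for chunk_id in range(20):                      # split the run into 20 chunks
        tasks.put((chunk_id, list(range(chunk_id * 10, chunk_id * 10 + 10))))

    def volunteer():
        while True:
            try:
                chunk_id, cells = tasks.get_nowait()
            except queue.Empty:
                return                              # no work left for this client
            partial = sum(c * 0.1 for c in cells)   # stand-in for real model work
            with lock:
                results[chunk_id] = partial

    workers = [threading.Thread(target=volunteer) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("chunks completed:", len(results),
          "aggregate:", round(sum(results.values()), 1))
    ```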

  17. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power system with additional wind power capacity there will be an increased need for large-scale power management as well as reliable balancing and reserve capabilities. Different technologies for large-scale electricity storage provide solutions to the different challenges arising w...

  18. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Science.gov (United States)

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.
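
    The index construction behind such work can be sketched generically: z-score the search-volume series for a basket of economy-related query terms, average them into a single behavioural index, and test its lead-lag correlation against the official survey series. Everything in the sketch below is synthetic stand-in data; the actual C3I is built from Google Trends query volumes.

    ```python
    # Generic behavioural-index construction: z-score a basket of query-volume
    # series, average into one index, test its lead over the survey series.
    # All series here are synthetic stand-ins for Google Trends / official CCI.
    import numpy as np

    rng = np.random.default_rng(7)
    months = 120
    confidence = 100.0 + np.cumsum(rng.standard_normal(months))   # survey stand-in
    terms = confidence + 2.0 * rng.standard_normal((5, months))   # 5 query series

    zs = (terms - terms.mean(axis=1, keepdims=True)) / terms.std(axis=1, keepdims=True)
    index = zs.mean(axis=0)                                       # behavioural index

    for lag in range(4):                    # does the index lead the survey?
        r = np.corrcoef(index[:months - lag], confidence[lag:])[0, 1]
        print(f"lead of {lag} month(s): r = {r:.2f}")
    ```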

  19. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Directory of Open Access Journals (Sweden)

    Xianlei Dong

    Full Text Available Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  20. Hypersingular integral equations, waveguiding effects in Cantorian Universe and genesis of large scale structures

    International Nuclear Information System (INIS)

    Iovane, G.; Giordano, P.

    2005-01-01

    In this work we introduce hypersingular integral equations and analyze a realistic model of gravitational waveguides in a Cantorian space-time. A waveguiding effect is considered with respect to the large-scale structure of the Universe, where structure formation appears as if it were a classically self-similar random process at all astrophysical scales. The result is that it seems we live in El Naschie's ε(∞) Cantorian space-time, where gravitational lensing and waveguiding effects can explain the appearing Universe. In particular, we consider filamentary and planar large-scale structures as possible refraction channels for electromagnetic radiation coming from cosmological structures. From this vision, and on the basis of three numerical simulations, the Universe appears like a large set of self-similar adaptive mirrors. Consequently, an infinite Universe is just an optical illusion produced by mirroring effects connected with the large-scale structure of a finite and not so large Universe.