WorldWideScience

Sample records for large-scale 4-regular grid

  1. Applying 4-regular grid structures in large-scale access networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas P.; Patel, Ahmed

    2006-01-01

    4-Regular grid structures have been used in multiprocessor systems for decades due to a number of nice properties with regard to routing, protection, and restoration, together with a straightforward planar layout. These qualities are to an increasing extent demanded also in large-scale access...... networks, but concerning protection and restoration these demands have been met only to a limited extent by the commonly used ring and tree structures. To deal with the fact that classical 4-regular grid structures are not directly applicable in such networks, this paper proposes a number of extensions...... concerning restoration, protection, scalability, embeddability, flexibility, and cost. The extensions are presented as a tool case, which can be used for implementing semi-automatic and, in the longer term, fully automatic network planning tools....
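
    As a hedged illustration of the classical structure the record above refers to (not taken from the paper itself), the following Python sketch builds the torus variant of a 4-regular grid and checks that every node has exactly four neighbours:

```python
# A 4-regular grid in the classical sense: an n-by-m torus in which every
# node links to its four wrap-around neighbours (north, south, east, west).
# Illustrative sketch only; the paper's extended structures are not shown.
def torus_neighbors(n, m):
    adj = {}
    for i in range(n):
        for j in range(m):
            adj[(i, j)] = [((i - 1) % n, j), ((i + 1) % n, j),
                           (i, (j - 1) % m), (i, (j + 1) % m)]
    return adj

adj = torus_neighbors(5, 7)
# Every node has exactly four distinct neighbours, hence "4-regular".
assert all(len(set(nbrs)) == 4 for nbrs in adj.values())
```

    The paper's extensions modify this base structure for access networks; only the unmodified grid is sketched here.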

  2. On Hierarchical Extensions of Large-Scale 4-regular Grid Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Patel, A.; Knudsen, Thomas Phillip

    It is studied how the introduction of ordered hierarchies in 4-regular grid network structures decreases distances remarkably, while at the same time allowing for simple topological routing schemes. Both meshes and tori are considered; in both cases non-hierarchical structures have power law depen...

  3. SQoS based Planning using 4-regular Grid for Optical Fiber Networks

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Pedersen, Jens Myrup; Madsen, Ole Brun

    optical fiber based network infrastructures. In the first step of SQoS based planning, this paper describes how 4-regular Grid structures can be implemented in the physical level of optical fiber network infrastructures. A systematic approach for implementing the Grid structure is presented. We used...

  4. SQoS based Planning using 4-regular Grid for Optical Fiber Networks

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Pedersen, Jens Myrup; Madsen, Ole Brun

    2005-01-01

    optical fiber based network infrastructures. In the first step of SQoS based planning, this paper describes how 4-regular Grid structures can be implemented in the physical level of optical fiber network infrastructures. A systematic approach for implementing the Grid structure is presented. We used...

  5. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    Science.gov (United States)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
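
    Schnek's actual C++ interface is not reproduced here; the following Python fragment is only a minimal, language-neutral sketch of the ghost-cell idea mentioned in the abstract, with periodic boundaries standing in for the MPI exchange between neighbouring processes:

```python
# Sketch of ghost-cell filling on a 1-D periodic grid (single process).
# In a parallel run, each copied slice would be sent to / received from a
# neighbouring MPI rank; here periodicity closes the loop locally.
# Generic illustration only, not Schnek's C++ API.
def fill_ghost_cells(field, width=1):
    # Prepend the last `width` cells and append the first `width` cells.
    return field[-width:] + field + field[:width]

interior = [0.0, 1.0, 2.0, 3.0]
padded = fill_ghost_cells(interior)
assert padded == [3.0, 0.0, 1.0, 2.0, 3.0, 0.0]
```

    With more ranks, each rank would fill its left ghost layer from its left neighbour's rightmost cells and vice versa; the slicing pattern stays the same.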

  6. On Hierarchical Extensions of Large-Scale 4-regular Grid Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Patel, A.; Knudsen, Thomas Phillip

    2004-01-01

    dependencies between the number of nodes and the distances in the structures. The perfect square mesh is introduced for hierarchies, and it is shown that applying ordered hierarchies in this way results in logarithmic dependencies between the number of nodes and the distances, resulting in better scaling...... structures. For example, in a mesh of 391876 nodes the average distance is reduced from 417.33 to 17.32 by adding hierarchical lines. This is gained by increasing the number of lines by 4.20% compared to the non-hierarchical structure. A similar hierarchical extension of the torus structure also results...
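
    The quoted figures can be sanity-checked. In a plain n x n mesh with shortest-path (Manhattan) routing, the average distance between two uniformly chosen nodes is 2(n^2 - 1)/(3n); for 391876 = 626^2 nodes this gives the 417.33 cited above. A small check (the closed form is the standard mean Manhattan distance, derived here, not quoted from the paper):

```python
# Average node-to-node distance in a plain (non-hierarchical) n-by-n mesh,
# in closed form and by brute force on a small grid. For 626 x 626 = 391876
# nodes the closed form reproduces the 417.33 quoted in the abstract.
def avg_mesh_distance(n):
    # Per axis, E|i - j| over uniform i, j in 0..n-1 is (n^2 - 1) / (3n);
    # the two axes of the Manhattan metric contribute independently.
    return 2 * (n * n - 1) / (3 * n)

def brute_force(n):
    nodes = [(i, j) for i in range(n) for j in range(n)]
    total = sum(abs(a - c) + abs(b - d)
                for a, b in nodes for c, d in nodes)
    return total / len(nodes) ** 2

assert abs(avg_mesh_distance(7) - brute_force(7)) < 1e-12
assert round(avg_mesh_distance(626), 2) == 417.33
```

    The hierarchical lines replace this roughly 2n/3 growth with logarithmic scaling, which is what the abstract's 417.33 to 17.32 reduction reflects.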

  7. Prospects for investment in large-scale, grid-connected solar power in Africa

    DEFF Research Database (Denmark)

    Hansen, Ulrich Elmer; Nygaard, Ivan; Pedersen, Mathilde Brix

    since the 1990s have changed the competitiveness of solar PV in all markets, ranging from individual households via institutions to mini-grids and grid-connected installations. In volume and investment, the market for large-scale grid-connected solar power plants is by far the most important......-scale investments in grid-connected solar power plants and local assembly facilities for PV panels, have exceeded even optimistic scenarios. Finally, therefore, there seem to be bright prospects for investment in large-scale grid-connected solar power in Africa....

  8. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of such challenges, which affect electric grid reliability and economic operations. These challenges are: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate the expansion of transmission line capacity with methods that ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add capacity, "how much" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve the transmission system congestion, create

  9. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show this, we adapted to the visual domain an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, and Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523, by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). A comparison of the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the human visual sensory system, revealed that the visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition, supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.

  10. OffshoreDC DC grids for integration of large scale wind power

    DEFF Research Database (Denmark)

    Zeni, Lorenzo; Endegnanew, Atsede Gualu; Stamatiou, Georgios

    The present report summarizes the main findings of the Nordic Energy Research project “DC grids for large scale integration of offshore wind power – OffshoreDC”. The project was funded by Nordic Energy Research through the TFI programme and was active between 2011 and 2016. The overall...... objective of the project was to drive the development of the VSC based HVDC technology for future large scale offshore grids, supporting a standardised and commercial development of the technology, and improving the opportunities for the technology to support power system integration of large scale offshore...

  11. On Line Segment Length and Mapping 4-regular Grid Structures in Network Infrastructures

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Nielsen, Rasmus Hjorth; Pedersen, Jens Myrup

    2006-01-01

    The paper focuses on mapping the road network into 4-regular grid structures. A mapping algorithm is proposed. To model the road network, Geographic Information System (GIS) data have been used. The GIS data for the road network are composed of line segments of varying lengths...

  12. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    Science.gov (United States)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
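
    The Fourier-convolution idea is independent of the rest of the FMM machinery and easy to illustrate: on a regular grid, convolving source amplitudes with sampled Green's function values is diagonalised by the FFT, turning O(N^2) pairwise sums into O(N log N) work. A hedged sketch with a made-up kernel (not the paper's acoustic Green's function):

```python
import numpy as np

# Circular convolution of source amplitudes with a sampled kernel on a
# regular 1-D grid, done two ways: via the FFT (fast) and directly (O(N^2)).
# Grid size and kernel values are illustrative, not from the paper.
rng = np.random.default_rng(0)
n = 16
src = rng.standard_normal(n)    # source amplitudes on grid points
kern = rng.standard_normal(n)   # sampled kernel (stand-in for Green's function)

# FFT-based circular convolution: multiply spectra, transform back.
fast = np.real(np.fft.ifft(np.fft.fft(src) * np.fft.fft(kern)))

# Direct O(N^2) circular convolution for comparison.
direct = np.array([sum(src[j] * kern[(i - j) % n] for j in range(n))
                   for i in range(n)])

assert np.allclose(fast, direct)
```

    In the paper's setting the same identity is applied per finest-level box in three dimensions, which is what lowers the cost of the neighbouring interactions.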

  13. Voltage stability issues in a distribution grid with large scale PV plant

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Alvaro Ruiz; Marinopoulos, Antonios; Reza, Muhamad; Srivastava, Kailash [ABB AB, Vaesteraas (Sweden). Corporate Research Center; Hertem, Dirk van [Katholieke Univ. Leuven, Heverlee (Belgium). ESAT-ELECTA

    2011-07-01

    Solar photovoltaics (PV) has become a competitive renewable energy source. The production of solar PV cells and panels has increased significantly, while costs have been reduced due to economies of scale and technological achievements in the field. At the same time, the increasing efficiency of PV power systems and high energy prices are expected to lead PV systems to grid parity in the coming decade. This is expected to further boost the large-scale implementation of PV power plants (utility-scale PV), and therefore the impact of such large-scale PV plants on the power system needs to be studied. This paper investigates the voltage stability issues arising from the connection of a large PV power plant to the power grid. For this purpose, a 15 MW PV power plant was implemented into a distribution grid, modeled and simulated using DIgSILENT PowerFactory. Two scenarios were developed: in the first scenario, the active power injected into the grid by the PV power plant was varied and the resulting U-Q curve was analyzed. In the second scenario, the impact of connecting PV power plants to different points in the grid - resulting in different connection strengths - was investigated. (orig.)

  14. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  15. Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems

    CERN Document Server

    Kołodziej, Joanna

    2012-01-01

    One of the most challenging issues in modelling today's large-scale computational systems is to effectively manage highly parametrised distributed environments such as computational grids, clouds, ad hoc networks and P2P networks. Next-generation computational grids must provide a wide range of services and high performance computing infrastructures. Various types of information and data processed in the large-scale dynamic grid environment may be incomplete, imprecise, and fragmented, which complicates the specification of proper evaluation criteria and which affects both the availability of resources and the final collective decisions of users. The complexity of grid architectures and grid management may also contribute towards higher energy consumption. All of these issues necessitate the development of intelligent resource management techniques, which are capable of capturing all of this complexity and optimising meaningful metrics for a wide range of grid applications.   This book covers hot topics in t...

  16. Accident of Large-scale Wind Turbines Disconnecting from Power Grid and Its Protection

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    There were many accidents of large-scale wind turbines disconnecting from the power grid in 2011. Because single-phase-to-ground faults could not be correctly detected, they evolved into phase-to-phase faults. Phase-to-phase faults were isolated slowly, leading to low voltage, and wind turbines without sufficient low voltage ride-through capability had to be disconnected from the grid. After some wind turbines were disconnected, overvoltage caused by the reactive power surplus made more wind turbines disconnect. Based on the accident analysis, this paper presents solutions to the above problems, including travelling-wave-based single-phase-to-ground protection, adaptive low voltage protection, integrated protection and control, and high impedance fault detection. The solutions lay theoretical and technological foundations for preventing large-scale wind turbines from disconnecting from the operating power grid.

  17. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...

  18. Large-scale visualization system for grid environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has been conducting R&D on distributed computing (grid computing) environments: Seamless Thinking Aid (STA), Information Technology Based Laboratory (ITBL) and Atomic Energy Grid InfraStructure (AEGIS). In this R&D, we have developed visualization technology suitable for the distributed computing environment. As one of the visualization tools, we have developed the Parallel Support Toolkit (PST), which can execute the visualization process in parallel on a computer. We have now improved PST to be executable simultaneously on multiple heterogeneous computers using the Seamless Thinking Aid Message Passing Interface (STAMPI). STAMPI, which we developed in this R&D, is an MPI library executable in a heterogeneous computing environment. The improvement realizes the visualization of extremely large-scale data and enables more efficient visualization processes in a distributed computing environment. (author)

  19. 77 FR 58416 - Large Scale Networking (LSN); Middleware and Grid Interagency Coordination (MAGIC) Team

    Science.gov (United States)

    2012-09-20

    ..., Grid, and cloud projects. The MAGIC Team reports to the Large Scale Networking (LSN) Coordinating Group... Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD.... Dates/Location: The MAGIC Team meetings are held on the first Wednesday of each month, 2:00-4:00pm, at...

  20. Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid

    Science.gov (United States)

    Kuwayama, Akira

    The mega-solar demonstration project named “Verification of Grid Stabilization with Large-scale PV Power Generation systems” was completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate the adverse impacts of large-scale PV power generation systems connected to the power grid and to develop output control technologies with an integrated battery storage system. This paper describes the outline and results of this project. The results show the effectiveness of the battery storage system and of the proposed output control methods for a large-scale PV system to ensure stable operation of power grids. NEDO, the New Energy and Industrial Technology Development Organization of Japan, conducted this project, and HEPCO, Hokkaido Electric Power Co., Inc., managed the overall project.

  1. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    Directory of Open Access Journals (Sweden)

    Sam Ali Al

    2015-01-01

    Full Text Available The performance of Sub-Grid Scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained using Large-Eddy Simulations at different mesh resolutions are compared with Direct Numerical Simulation data. The effectiveness of modeling the wall stresses using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly-resolved Large-Eddy Simulations and Direct Numerical Simulation data regardless of the Sub-Grid Scale model. However, the agreement is less satisfactory on the relatively coarse grid without any wall model, and the differences between Sub-Grid Scale models become distinguishable. Using the local wall model recovered the basic flow topology and significantly reduced the differences between the coarse-mesh Large-Eddy Simulations and Direct Numerical Simulation data. The results show that the ability of the local wall model to predict the separation zone depends strongly on how it is implemented.

  2. Numerical aspects of drift kinetic turbulence: Ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    KAUST Repository

    Samtaney, Ravi

    2012-01-01

    We present a numerical method based on an Eulerian approach to solve the Vlasov-Poisson system for 4D drift kinetic turbulence. Our numerical approach uses a conservative formulation with high-order (fourth and higher) evaluation of the numerical fluxes coupled with a fourth-order accurate Poisson solver. The fluxes are computed using a low-dissipation high-order upwind differencing method or a tuned high-resolution finite difference method with no numerical dissipation. Numerical results are presented for the case of imposed ion temperature and density gradients. Different forms of controlled regularization to achieve a well-posed system are used to obtain convergent resolved simulations. The regularization of the equations is achieved by means of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism to the Vlasov equation and derive sub-grid-scale (SGS) terms analogous to the Reynolds stress terms in hydrodynamic turbulence. We present a priori quantifications of these SGS terms in resolved simulations of drift-kinetic turbulence by applying a sharp filter. © 2012 IOP Publishing Ltd.

  3. Numerical aspects of drift kinetic turbulence: ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    International Nuclear Information System (INIS)

    Samtaney, Ravi

    2012-01-01

    We present a numerical method based on an Eulerian approach to solve the Vlasov-Poisson system for 4D drift kinetic turbulence. Our numerical approach uses a conservative formulation with high-order (fourth and higher) evaluation of the numerical fluxes coupled with a fourth-order accurate Poisson solver. The fluxes are computed using a low-dissipation high-order upwind differencing method or a tuned high-resolution finite difference method with no numerical dissipation. Numerical results are presented for the case of imposed ion temperature and density gradients. Different forms of controlled regularization to achieve a well-posed system are used to obtain convergent resolved simulations. The regularization of the equations is achieved by means of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism to the Vlasov equation and derive sub-grid-scale (SGS) terms analogous to the Reynolds stress terms in hydrodynamic turbulence. We present a priori quantifications of these SGS terms in resolved simulations of drift-kinetic turbulence by applying a sharp filter.

  4. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared to other competitive state-of-the-art techniques.
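
    The core iteration described above is compact enough to sketch. The following is a hedged, dense-SVD toy version of Soft-Impute (the data, penalty and iteration count are illustrative choices, not the paper's large-scale implementation):

```python
import numpy as np

# Toy Soft-Impute: alternate between imputing the missing entries from the
# current estimate and soft-thresholding the singular values of the filled
# matrix. A dense SVD is used for simplicity; the paper's scalability comes
# from exploiting "sparse + low-rank" structure in the SVD step instead.
def soft_impute(X, mask, lam, n_iters=100):
    Z = np.zeros_like(X)
    for _ in range(n_iters):
        filled = np.where(mask, X, Z)              # keep observed, impute rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt    # soft-threshold spectrum
    return Z

# Rank-1 ground truth with roughly 60% of entries observed (illustrative).
rng = np.random.default_rng(1)
truth = np.outer(rng.standard_normal(20), rng.standard_normal(15))
mask = rng.random(truth.shape) < 0.6
Z = soft_impute(truth, mask, lam=0.1)
# Relative error on the entries Soft-Impute never saw.
err = np.linalg.norm((Z - truth)[~mask]) / np.linalg.norm(truth[~mask])
assert err < 0.25
```

    On this easy rank-1 toy problem the unobserved entries are recovered closely; the warm-start path over a grid of `lam` values mentioned in the abstract would simply reuse the previous `Z` as the starting point for the next penalty.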

  5. Security and VO management capabilities in a large-scale Grid operating system

    OpenAIRE

    Aziz, Benjamin; Sporea, Ioana

    2014-01-01

    This paper presents a number of security and VO management capabilities in a large-scale distributed Grid operating system. The capabilities formed the basis of the design and implementation of a number of security and VO management services in the system. The main aim of the paper is to provide some idea of the various functionality cases that need to be considered when designing similar large-scale systems in the future.

  6. Comparing the Reliability of Regular Topologies on a Backbone Network. A Case Study

    DEFF Research Database (Denmark)

    Cecilio, Sergio Labeage; Gutierrez Lopez, Jose Manuel; Riaz, M. Tahir

    2009-01-01

    The aim of this paper is to compare the reliability of regular topologies on a backbone network. The study is focused on a large-scale fiber-optic network. Different regular topological solutions such as single ring, double ring or 4-regular grid are applied to the case study, and compared in terms...

  7. Analysis for Large Scale Integration of Electric Vehicles into Power Grids

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Wang, Xiaoru

    2011-01-01

    Electric Vehicles (EVs) provide a significant opportunity for reducing the consumption of fossil energies and the emission of carbon dioxide. With more and more electric vehicles integrated in the power systems, it becomes important to study the effects of EV integration on the power systems......, especially the low and middle voltage level networks. In the paper, the basic structure and characteristics of the electric vehicles are introduced. The possible impacts of large scale integration of electric vehicles on the power systems, especially the advantages for the integration of renewable energies...... are discussed. Finally, the research projects related to the large scale integration of electric vehicles into the power systems are introduced; they will provide a reference for large scale integration of Electric Vehicles into power grids....

  8. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large-scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to scale up dramatically nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.

  9. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  10. Grid Support in Large Scale PV Power Plants using Active Power Reserves

    DEFF Research Database (Denmark)

    Craciun, Bogdan-Ionut

    to validate the performance of the frequency support functions, a flexible grid model with IEEE 12 bus system characteristics has been developed and implemented in RTDS. A power hardware-in-the-loop (PHIL) system composed of a 20 kW plant (2 x 10 kW inverters and a PV linear simulator) and a grid simulator (RTDS......Photovoltaic (PV) systems rank third in the renewable energy market, after hydro and wind power. The increased penetration of PV within the electrical power system has led to stability issues for the entire grid in terms of its reliability, availability and security of supply....... As a consequence, Large-scale PV Power Plants (LPVPPs) operating at the Maximum Power Point (MPP) are not supporting the electrical network, since several grid triggering events or the increased number of downward regulation procedures have forced the European Network of Transmission System Operators for Electricity...

  11. Research on the impacts of large-scale electric vehicles integration into power grid

    Science.gov (United States)

    Su, Chuankun; Zhang, Jian

    2018-06-01

    Because of its special energy driving mode, the electric vehicle can improve the efficiency of energy utilization and reduce pollution to the environment, and it is therefore receiving more and more attention. However, the charging behavior of electric vehicles is random and intermittent. If electric vehicles charge in an uncoordinated manner on a large scale, they put great pressure on the structure and operation of the power grid and affect its safe and economic operation. With the development of V2G technology for electric vehicles, the study of the charging and discharging characteristics of electric vehicles is of great significance for improving the safe operation of the power grid and the efficiency of energy utilization.

  12. Review of DC System Technologies for Large Scale Integration of Wind Energy Systems with Electricity Grids

    Directory of Open Access Journals (Sweden)

    Sheng Jie Shao

    2010-06-01

    Full Text Available The ever increasing development and availability of power electronic systems is the underpinning technology that enables large-scale integration of wind generation plants with the electricity grid. As the size and power capacity of wind turbines continue to increase, so does the need to place these significantly large structures at off-shore locations. DC grids and associated power transmission technologies provide opportunities for cost reduction and minimization of the impact on the electricity grid, as the bulk power is concentrated at a single point of entry. As a result, planning, optimization and impact can be studied and carefully controlled, minimizing the risk of the investment as well as power system stability issues. This paper discusses the key technologies associated with DC grids for offshore wind farm applications.

  13. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    OpenAIRE

    Sam Ali Al; Szasz Robert; Revstedt Johan

    2015-01-01

    The performance of sub-grid scale models is studied by simulating a separated flow over a wavy channel. The first and second order statistical moments of the resolved velocities obtained using large-eddy simulations at different mesh resolutions are compared with direct numerical simulation data. The effectiveness of modeling the wall stresses using a local log-law is then tested on a relatively coarse grid. The results exhibit a good agreement between highly-resolved Large Eddy Simu...

  14. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    Science.gov (United States)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  15. Delta-Connected Cascaded H-Bridge Multilevel Converters for Large-Scale Photovoltaic Grid Integration

    DEFF Research Database (Denmark)

    Yu, Yifan; Konstantinou, Georgios; Townsend, Christopher D.

    2017-01-01

    The cascaded H-bridge (CHB) converter is becoming a promising candidate for use in next generation large-scale photovoltaic (PV) power plants. However, solar power generation in the three converter phase-legs can be significantly unbalanced, especially in a large geographically-dispersed plant....... The power imbalance between the three phases defines a limit for the injection of balanced three-phase currents to the grid. This paper quantifies the performance of, and experimentally confirms, the recently proposed delta-connected CHB converter for PV applications as an alternative configuration...... for large-scale PV power plants. The required voltage and current overrating for the converter is analytically developed and compared against the star-connected counterpart. It is shown that the delta-connected CHB converter extends the balancing capabilities of the star-connected CHB and can accommodate...

  16. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    Science.gov (United States)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
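    The binned-pdf integration described in this abstract can be sketched numerically. The following is a minimal illustration, assuming a hypothetical nonlinear wetness-stress function and synthetic sub-grid wetness samples (none of these values come from the paper):

```python
import numpy as np

def grid_area_flux(wetness_samples, stress_fn, n_bins=10):
    """Integrate a nonlinear wetness-stress function over a binned pdf of
    sub-grid soil wetness, instead of evaluating it once at the grid-area
    mean (illustrative of the binning method in the abstract)."""
    counts, edges = np.histogram(wetness_samples, bins=n_bins, range=(0.0, 1.0))
    pdf = counts / counts.sum()                     # probability mass per bin
    centers = 0.5 * (edges[:-1] + edges[1:])        # bin midpoints
    return float(np.sum(pdf * stress_fn(centers)))  # area-integrated stress

# Hypothetical stress function: transpiration shuts down below a wilting
# threshold (0.2) and saturates near field capacity (0.6).
stress = lambda w: np.clip((w - 0.2) / 0.4, 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.beta(2.0, 2.0, size=10_000)      # spatially variable wetness in [0, 1]
flux_binned = grid_area_flux(w, stress)  # integrate over the binned pdf
flux_mean = float(stress(np.mean(w)))    # naive single grid-average value
```

    Because the stress function is nonlinear, the two estimates differ, which is exactly the inaccuracy the abstract attributes to inserting a single grid-averaged wetness into the rate function.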

  17. H1 Grid production tool for large scale Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lobodzinski, B; Wissing, Ch [DESY, Hamburg (Germany); Bystritskaya, E; Vorobiew, M [ITEP, Moscow (Russian Federation); Karbach, T M [University of Dortmund (Germany); Mitsyn, S [JINR, Moscow (Russian Federation); Mudrinic, M, E-mail: bogdan.lobodzinski@desy.d [VINS, Belgrad (Serbia)

    2010-04-01

    The H1 Collaboration at HERA has entered the period of high precision analyses based on the final data sample. These analyses require a massive production of simulated Monte Carlo (MC) events. The H1 MC framework (H1MC) is a software for mass MC production on the LCG Grid infrastructure and on a local batch system, created by the H1 Collaboration. The aim of the tool is full automation of the MC production workflow, including management of the MC jobs on the Grid down to copying of the resulting files from the Grid to the H1 mass storage tape device. The H1 MC framework has a modular structure, delegating a specific task to each module, including tasks specific to the H1 experiment: automatic building of steer and input files, simulation of the H1 detector, reconstruction of particle tracks, and post-processing calculation. Each module provides data or functionality needed by other modules via a local database. The Grid jobs created for detector simulation and reconstruction from generated MC input files are fully independent and fault-tolerant for 32- and 64-bit LCG Grid architectures, and in the Grid running state they can be continuously monitored using the Relational Grid Monitoring Architecture (R-GMA) service. To monitor the full production chain and detect potential problems, regular checks of the job state are performed using the local database and the Service Availability Monitoring (SAM) framework. The improved stability of the system has resulted in a dramatic increase in the production rate, which exceeded two billion MC events in 2008.

  18. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  19. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are only applicable to isotropic networks, and therefore has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating the hop-counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
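    The modeling stage described above - learning a mapping from hop-count features with a regularized extreme learning machine - can be sketched as follows. All data here are synthetic, and the network layout, feature dimensions and regularization strength are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Data acquisition (synthetic): hop-count vectors to 8 anchor nodes
# stand in for the connectivity information collected from the network. ---
n_train, n_feat, n_hidden = 200, 8, 64
X = rng.integers(1, 10, size=(n_train, n_feat)).astype(float)
# Hypothetical ground truth: 2-D positions roughly linear in hop counts, plus noise.
y = X @ rng.uniform(5.0, 15.0, size=(n_feat, 2)) + rng.normal(0, 1.0, (n_train, 2))

# --- Modeling: extreme learning machine with ridge (L2) regularization.
# Input weights are random and fixed; only output weights are solved for.
# The 0.05 scale keeps tanh out of saturation for these hop-count magnitudes. ---
W = 0.05 * rng.normal(size=(n_feat, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                       # random hidden-layer features
lam = 1e-2                                   # regularization strength
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

# --- Location estimation: a node maps its own hop-count vector to a position. ---
x_new = rng.integers(1, 10, size=(1, n_feat)).astype(float)
pos = np.tanh(x_new @ W + b) @ beta          # estimated 2-D coordinates
```

    The ridge term `lam * np.eye(n_hidden)` is what makes this "regularized" extreme learning: it keeps the output-weight solve well conditioned even when hidden features are correlated.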

  20. Tidal-induced large-scale regular bed form patterns in a three-dimensional shallow water model

    NARCIS (Netherlands)

    Hulscher, Suzanne J.M.H.

    1996-01-01

    The three-dimensional model presented in this paper is used to study how tidal currents form wave-like bottom patterns. Inclusion of vertical flow structure turns out to be necessary to describe the formation, or absence, of all known large-scale regular bottom features. The tide and topography are

  1. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  2. Mixing in 3D Sparse Multi-Scale Grid Generated Turbulence

    Science.gov (United States)

    Usama, Syed; Kopec, Jacek; Tellez, Jackson; Kwiatkowski, Kamil; Redondo, Jose; Malik, Nadeem

    2017-04-01

    Flat 2D fractal grids are known to alter turbulence characteristics downstream of the grid compared to regular grids with the same blockage ratio and the same mass inflow rates [1]. This has excited interest in the turbulence community over possible exploitation for enhanced mixing and related applications. Recently, a new 3D multi-scale grid design has been proposed [2] such that each generation of length scale of turbulence grid elements is held in its own frame; the overall effect is a 3D co-planar arrangement of grid elements. This produces a 'sparse' grid system whereby each generation of grid elements produces a turbulent wake pattern that interacts with the other wake patterns downstream. A critical motivation here is that the effective blockage ratio in the 3D Sparse Grid Turbulence (3DSGT) design is significantly lower than in the flat 2D counterpart - typically the blockage ratio could be reduced from, say, 20% in 2D down to 4% in the 3DSGT. If this idea can be realized in practice, it could greatly enhance the efficiency of turbulent mixing and transfer processes, with many possible applications. Work has begun on the 3DSGT experimentally, using Surface Flow Image Velocimetry (SFIV) [3] at the European facility in the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, and at the Technical University of Catalonia (UPC) in Spain, and numerically, using Direct Numerical Simulation (DNS) at King Fahd University of Petroleum & Minerals (KFUPM) in Saudi Arabia and at the University of Warsaw in Poland. DNS is the most useful method to compare the experimental results with, and we are studying different types of codes such as Incompact3d and OpenFOAM. Many variables will eventually be investigated for optimal mixing conditions: for example, the number of scale generations, the spacing between frames, the size ratio of grid elements, inflow conditions, etc. We will report upon the first set of findings.

  3. Regularity for a clamped grid equation $u_{xxxx}+u_{yyyy}=f $ on a domain with a corner

    Directory of Open Access Journals (Sweden)

    Tymofiy Gerasimov

    2009-04-01

    Full Text Available The operator $L=\frac{\partial^{4}}{\partial x^{4}}+\frac{\partial^{4}}{\partial y^{4}}$ appears in a model for the vertical displacement of a two-dimensional grid that consists of two perpendicular sets of elastic fibers or rods. We are interested in the behaviour of such a grid that is clamped at the boundary, and more specifically near a corner of the domain. Kondratiev supplied the appropriate setting in the sense of Sobolev type spaces tailored to find the optimal regularity. Inspired by the Laplacian and bilaplacian models, one expects, except maybe for some special angles, that the optimal regularity improves when the angle decreases. For the homogeneous Dirichlet problem with this special non-isotropic fourth order operator such a result does not hold true. We will show the existence of an interval $(\frac{1}{2}\pi, \omega_{\star})$, $\omega_{\star}/\pi \approx 0.528\dots$ (in degrees $\omega_{\star} \approx 95.1\dots^{\circ}$), in which the optimal regularity improves with increasing opening angle.

  4. Manifestly scale-invariant regularization and quantum effective operators

    CERN Document Server

    Ghilencea, D.M.

    2016-01-01

    Scale invariant theories are often used to address the hierarchy problem; however, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale invariant regularization in (classical) scale invariant theories. We use a dilaton-dependent subtraction function $\mu(\sigma)$ which after spontaneous breaking of scale symmetry generates the usual DR subtraction scale $\mu(\langle\sigma\rangle)$. One consequence is that "evanescent" interactions generated by scale invariance of the action in $d=4-2\epsilon$ (but vanishing in $d=4$) give rise to new, finite quantum corrections. We find a (finite) correction $\Delta U(\phi,\sigma)$ to the one-loop scalar potential for $\phi$ and $\sigma$, beyond the Coleman-Weinberg term. $\Delta U$ is due to an evanescent correction ($\propto\epsilon$) to the field-dependent masses (of...

  5. Vehicle-to-grid power implementation: From stabilizing the grid to supporting large-scale renewable energy

    Science.gov (United States)

    Kempton, Willett; Tomić, Jasna

    Vehicle-to-grid power (V2G) uses electric-drive vehicles (battery, fuel cell, or hybrid) to provide power for specific electric markets. This article examines the systems and processes needed to tap energy in vehicles and implement V2G. It quantitatively compares today's light vehicle fleet with the electric power system. The vehicle fleet has 20 times the power capacity, less than one-tenth the utilization, and one-tenth the capital cost per prime mover kW. Conversely, utility generators have 10-50 times longer operating life and lower operating costs per kWh. To tap V2G is to synergistically use these complementary strengths and to reconcile the complementary needs of the driver and grid manager. This article suggests strategies and business models for doing so, and the steps necessary for the implementation of V2G. After the initial high-value V2G markets saturate and production costs drop, V2G can provide storage for renewable energy generation. Our calculations suggest that V2G could stabilize large-scale (one-half of US electricity) wind power with 3% of the fleet dedicated to regulation for wind, plus 8-38% of the fleet providing operating reserves or storage for wind. Jurisdictions more likely to take the lead in adopting V2G are identified.
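    The fleet-versus-grid comparison above is back-of-envelope arithmetic of the following form. All input figures below are illustrative assumptions chosen for the sketch, not the article's data:

```python
# Illustrative fleet-vs-grid capacity arithmetic in the spirit of the abstract.
# Every figure here is an assumption for demonstration, not a value from the paper.
fleet_size = 200e6          # number of light vehicles (assumed)
kw_per_vehicle = 15.0       # usable electrical power per vehicle, kW (assumed)
grid_capacity_kw = 1.0e9    # installed utility generation, kW (assumed, ~1 TW)

fleet_capacity_kw = fleet_size * kw_per_vehicle
ratio = fleet_capacity_kw / grid_capacity_kw     # fleet-to-grid power ratio

reg_fraction = 0.03                              # share of fleet doing regulation
reg_capacity_gw = fleet_capacity_kw * reg_fraction / 1e6   # kW -> GW
```

    With these assumed numbers, a small fraction of the fleet already represents tens of GW of dispatchable capacity, which is the qualitative point the article's own (different) figures make.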

  6. Provably optimal parallel transport sweeps on regular grids

    International Nuclear Information System (INIS)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D.; Smith, T.; Rauchwerger, L.; Amato, N. M.; Bailey, T. S.; Falgout, R. D.

    2013-01-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P_x x P_y x P_z partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present - we are still improving its implementation - but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)

  8. Quantum implications of a scale invariant regularization

    Science.gov (United States)

    Ghilencea, D. M.

    2018-04-01

    We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at a three-loop level while keeping manifest this symmetry. Spontaneous scale symmetry breaking is transmitted at a quantum level to the visible sector (of ϕ) by the associated Goldstone mode (dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible sector (ϕ) are classically decoupled in d=4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ɛ, dictated by the scale invariance of the action in d=4-2ɛ. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^{2n+4}/σ^{2n}. These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n=1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ=constant).

  9. Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle

    Science.gov (United States)

    Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat

    1993-01-01

    The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints that defined NASA's development of a 111 grid/16 million point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD-based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.

  10. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. 
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  11. Large Scale Monte Carlo Simulation of Neutrino Interactions Using the Open Science Grid and Commercial Clouds

    International Nuclear Information System (INIS)

    Norman, A.; Boyd, J.; Davies, G.; Flumerfelt, E.; Herner, K.; Mayer, N.; Mhashilhar, P.; Tamsett, M.; Timm, S.

    2015-01-01

    Modern long-baseline neutrino experiments like the NOvA experiment at Fermilab require large scale, compute intensive simulations of their neutrino beam fluxes and backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties in the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For the first physics results from NOvA this has meant the simulation of more than 2 billion cosmic ray events in the far detector and more than 200 million NuMI beam spill simulations. Performing these high statistics levels of simulation has been made possible for NOvA through the use of the Open Science Grid and through large scale runs on commercial clouds like Amazon EC2. We detail the challenges in performing large scale simulation in these environments and how the computing infrastructure for the NOvA experiment has been adapted to seamlessly support the running of different simulation and data processing tasks on these resources. (paper)

  12. Five hundred years of gridded high-resolution precipitation reconstructions over Europe and the connection to large-scale circulation

    Energy Technology Data Exchange (ETDEWEB)

    Pauling, Andreas [University of Bern, Institute of Geography, Bern (Switzerland); Luterbacher, Juerg; Wanner, Heinz [University of Bern, Institute of Geography, Bern (Switzerland); National Center of Competence in Research (NCCR) in Climate, Bern (Switzerland); Casty, Carlo [University of Bern, Climate and Environmental Physics Institute, Bern (Switzerland)

    2006-03-15

    We present seasonal precipitation reconstructions for European land areas (30°W to 40°E / 30-71°N; given on a 0.5° x 0.5° resolved grid) covering the period 1500-1900, together with gridded reanalysis from 1901 to 2000 (Mitchell and Jones 2005). Principal component regression techniques were applied to develop this dataset. A large variety of long instrumental precipitation series, precipitation indices based on documentary evidence and natural proxies (tree-ring chronologies, ice cores, corals and a speleothem) that are sensitive to precipitation signals were used as predictors. Transfer functions were derived over the 1901-1983 calibration period and applied to 1500-1900 in order to reconstruct the large-scale precipitation fields over Europe. The performance (quality estimation based on unresolved variance within the calibration period) of the reconstructions varies over centuries, seasons and space. Highest reconstructive skill was found for winter over central Europe and the Iberian Peninsula. Precipitation variability over the last half millennium reveals both large interannual and decadal fluctuations. Applying running correlations, we found major non-stationarities in the relation between large-scale circulation and regional precipitation. For several periods during the last 500 years, we identified key atmospheric modes for southern Spain/northern Morocco and central Europe as representations of two precipitation regimes. Using scaled composite analysis, we show that precipitation extremes over central Europe and southern Spain are linked to distinct pressure patterns. Due to its high spatial and temporal resolution, this dataset allows detailed studies of regional precipitation variability for all seasons, impact studies on different time and space scales, comparisons with high-resolution climate models, as well as analysis of connections with regional temperature reconstructions. (orig.)
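    Principal component regression of the kind used for these reconstructions can be sketched as follows, with synthetic proxies and a synthetic precipitation target. The 83-year window mirrors the 1901-1983 calibration period; everything else here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: 20 proxy series sharing a common climate
# signal, and a precipitation target driven by the same signal.
years, n_proxies = 83, 20
signal = rng.normal(size=(years, 1))                 # common climate signal
proxies = signal @ rng.normal(size=(1, n_proxies)) \
          + 0.5 * rng.normal(size=(years, n_proxies))
precip = signal[:, 0] + 0.3 * rng.normal(size=years)

def pcr_fit(X, y, n_components):
    """Principal component regression: regress the target on the leading
    principal-component scores of the standardized predictors."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    V = Vt[:n_components].T                          # leading loadings
    coef, *_ = np.linalg.lstsq(Xs @ V, y - y.mean(), rcond=None)
    return mu, sd, V, coef, y.mean()

def pcr_predict(X, model):
    mu, sd, V, coef, ym = model
    return ((X - mu) / sd) @ V @ coef + ym

model = pcr_fit(proxies, precip, n_components=5)     # calibration step
recon = pcr_predict(proxies, model)                  # reconstruction step
```

    In the actual study the transfer function fitted on the calibration window is then applied to the pre-instrumental proxy values (here that would be a second `pcr_predict` call on earlier data).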

  13. Analytical Assessment of the Relationship between 100MWp Large-scale Grid-connected Photovoltaic Plant Performance and Meteorological Parameters

    Science.gov (United States)

    Sheng, Jie; Zhu, Qiaoming; Cao, Shijie; You, Yang

    2017-05-01

    This paper studies the relationship between the photovoltaic power generation of a large-scale "fishing and PV complementary" grid-tied photovoltaic system and meteorological parameters, using multi-time-scale power data from the photovoltaic power station and meteorological data over the same period of a whole year. The results indicate that PV power generation has the most significant correlation with global solar irradiation, followed by diurnal temperature range, sunshine hours, daily maximum temperature and daily average temperature. Across the months, the maximum monthly average power generation appears in August, which is related to the higher global solar irradiation and longer sunshine hours in that month. However, the maximum daily average power generation appears in October; this is because the drop in temperature improves the efficiency of the PV panels. A comparison of monthly average performance ratio (PR) and monthly average temperature shows that the larger values of monthly average PR appear in April and October, while PR is smaller in summer with higher temperatures. The results conclude that temperature has a great influence on the performance ratio of a large-scale grid-tied PV power system, and that it is important to adopt effective measures to decrease the temperature of the PV plant properly.
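    The correlation ranking reported above can be reproduced in pattern (though not in values) with a few lines of analysis. All series below are synthetic stand-ins for the plant's daily records:

```python
import numpy as np

# Hypothetical daily records for a PV plant: output plus meteorological
# drivers. The numbers are synthetic; only the ranking pattern mirrors
# the kind of analysis described in the abstract.
rng = np.random.default_rng(7)
n_days = 365
irradiation = rng.gamma(5.0, 1.2, n_days)            # global solar irradiation
sunshine = 0.8 * irradiation + rng.normal(0, 1, n_days)   # sunshine hours
t_max = 20 + 0.5 * irradiation + rng.normal(0, 3, n_days) # daily max temp

# Output depends strongly on irradiation, weakly (negatively) on temperature.
output = 10 * irradiation - 0.2 * t_max + rng.normal(0, 2, n_days)

params = {"irradiation": irradiation, "sunshine hours": sunshine, "max temp": t_max}
# Rank the drivers by the magnitude of their Pearson correlation with output.
ranking = sorted(params,
                 key=lambda k: abs(np.corrcoef(params[k], output)[0, 1]),
                 reverse=True)
```

    With these assumptions, irradiation tops the ranking, matching the qualitative finding that global solar irradiation correlates most strongly with PV output.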

  14. Integration of Large-scale Consumers in Smart Grid

    OpenAIRE

    Rahnama, Samira

    2015-01-01

    A prominent feature of the smart grid is to involve the consumer side in balancing effort, rather than placing the entire burden of maintaining this balance on the producers. This thesis investigates the utilization of flexible consumers in the future smart grid. The focus of this work is on industrial consumers. We propose a three-level hierarchical control framework, in which a so-called “Aggregator” is located between a number of flexible industrial demands and a grid operator. The aggrega...

  15. Literature Review on Reasons and Countermeasures on Large-scale Off-grid of Wind Turbine Generator System

    Directory of Open Access Journals (Sweden)

    Zhu Jun

    2015-01-01

    Full Text Available This paper reviews the present situation of the application of wind turbine generator systems (WTGS) at home and abroad, describes the strategic significance and sustainable-development value of wind power for the country, illustrates the problems, the variety of causes, and the responses to large-scale off-grid events of WTGS, compares the advantages and disadvantages of various methods, gives full consideration to the actual demands and characteristics of WTGS operation, and points out directions for further research.

  16. Ecogrid EU: a large scale smart grids demonstration of real time market-based integration of numerous small der and DR

    NARCIS (Netherlands)

    Ding, Y.; Nyeng, P.; Ostergaard, J.; Trong, M.D.; Pineda, S.; Kok, K.; Huitema, G.B.; Grande, O.S.

    2012-01-01

    This paper provides an overview of the Ecogrid EU project, a large-scale demonstration project on the Danish island of Bornholm. It provides Europe a fast track evolution towards smart grid dissemination and deployment in the distribution network. The objective of Ecogrid EU is to illustrate that

  17. SuperGrid or SmartGrid: Competing strategies for large-scale integration of intermittent renewables?

    International Nuclear Information System (INIS)

    Blarke, Morten B.; Jenkins, Bryan M.

    2013-01-01

    This paper defines and compares two strategies for integrating intermittent renewables: SuperGrid and SmartGrid. While conventional energy policy suggests that these strategies may be implemented alongside each other, the paper identifies significant technological and socio-economic conflicts of interest between the two. The article identifies differences between a domestic strategy for the integration of intermittent renewables, vis-à-vis the SmartGrid, and a cross-system strategy, vis-à-vis the SuperGrid. Policy makers and transmission system operators must understand the need for both strategies to evolve in parallel, but in different territories, or with strategic integration, avoiding for one strategy to undermine the feasibility of the other. A strategic zoning strategy is introduced from which attentive societies as well as the global community stand to benefit. The analysis includes a paradigmatic case study from West Denmark which supports the hypothesis that these strategies are mutually exclusive. The case study shows that increasing cross-system transmission capacity jeopardizes the feasibility of SmartGrid technology investments. A political effort is required for establishing dedicated SmartGrid innovation zones, while also redefining infrastructure to avoid the narrow focus on grids and cables. SmartGrid Investment Trusts could be supported from reallocation of planned transmission grid investments to provide for the equitable development of SmartGrid strategies. - Highlights: • Compares SuperGrid and SmartGrid strategies for integrating intermittent renewables. • Identifies technological and socio-economic conflicts of interest between the two. • Proposes a strategic zoning strategy allowing for both strategies to evolve. • Presents a paradigmatic case study showing that strategies are mutually exclusive. • Proposes dedicated SmartGrid innovation zones and SmartGrid investment trusts

  18. Grid matching of large-scale wind energy conversion systems, alone and in tandem with large-scale photovoltaic systems: An Israeli case study

    International Nuclear Information System (INIS)

    Solomon, A.A.; Faiman, D.; Meron, G.

    2010-01-01

    This paper presents a grid matching analysis of wind energy conversion systems (WECSs) and photovoltaic (PV)-WECS hybrid systems. The study was carried out using hourly load data of the Israel Electric Corporation (IEC) for the year 2006 and the corresponding simulated hourly performance of large PV and WECS plants in the Negev Desert. Our major objective was to compare the grid-matching capabilities of wind with those of our previously published PV results, and to assess the extent to which the combined employment of WECS and PV can improve the grid matching capability of either technology when used on its own. We find that, due to the differences in diurnal and seasonal output profiles of WECS and PV, their tandem employment significantly improves grid penetration compared to their use individually.

  19. ITO with embedded silver grids as transparent conductive electrodes for large area organic solar cells

    DEFF Research Database (Denmark)

    Patil, Bhushan Ramesh; Mirsafaei, Mina; Cielecki, Pawel Piotr

    2017-01-01

In this work, development of semi-transparent electrodes for efficient large area organic solar cells (OSCs) has been demonstrated. Electron beam evaporated silver grids were embedded in commercially available ITO coatings on glass, through a standard negative photolithography process, in order to improve the conductivity of planar ITO substrates. The fabricated electrodes with embedded line and square patterned Ag grids reduced the sheet resistance of ITO by 25% and 40%, respectively, showing optical transmittance drops of less than 6% within the complete visible light spectrum for both patterns. Solution processed bulk heterojunction OSCs based on PTB7:[70]PCBM were fabricated on top of these electrodes with cell areas of 4.38 cm2, and the performance of these OSCs was compared to reference cells fabricated on pure ITO electrodes. The Fill Factor of the large-scale OSCs fabricated on ITO with embedded Ag grids was enhanced by 18% for the line grids pattern and 30% for the square grids pattern compared to that of the reference OSCs. The increase in the Fill Factor was directly correlated to the decrease in the series resistance of the OSCs. The maximum power conversion efficiency (PCE) of the OSCs was measured to be 4.34%, which is 23% higher than the PCE of the reference OSCs.

  20. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
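The multiresolution comparison described above can be illustrated with a toy verification exercise. The synthetic "truth" and "product" fields, the error model and the aggregation factor below are invented for demonstration only; they are not the datasets evaluated in the study:

```python
import numpy as np

def block_mean(a, f):
    """Aggregate a 2-D field onto a grid f times coarser by block averaging."""
    n0, n1 = a.shape[0] // f, a.shape[1] // f
    return a[:n0 * f, :n1 * f].reshape(n0, f, n1, f).mean(axis=(1, 3))

rng = np.random.default_rng(3)
truth = rng.gamma(2.0, 2.0, (64, 64))                      # synthetic "gauge analysis", mm/day
product = truth + 0.5 + rng.normal(0.0, 1.5, truth.shape)  # biased, noisy gridded product

# Verification at native resolution and after 16x spatial aggregation
bias = (product - truth).mean()
rmse_native = np.sqrt(((product - truth) ** 2).mean())
pc, tc = block_mean(product, 16), block_mean(truth, 16)
rmse_coarse = np.sqrt(((pc - tc) ** 2).mean())
```

Aggregation averages out the random error but leaves the systematic bias untouched, which is why verification conclusions depend on the scale at which products are compared.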

  1. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) regimes using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.

  2. Integration of Large-scale Consumers in Smart Grid

    DEFF Research Database (Denmark)

    Rahnama, Samira

A prominent feature of the smart grid is to involve the consumer side in the balancing effort, rather than placing the entire burden of maintaining this balance on the producers. This thesis investigates the utilization of flexible consumers in the future smart grid. The focus of this work is on indu… … the demand that these consumers represent. The exact responsibility of the aggregator, however, can vary depending on several factors such as control strategies, demand types, provided services etc. This thesis addresses the aggregator design for a specific class of consumers. The work involves selecting an appropriate control scenario, formulating the optimal objective function at the aggregator, modeling the flexibility of our specific case studies, and determining the required information flow. This thesis also investigates different types of aggregation, when we have different types of consumers…

  3. A Systematic Multi-Time Scale Solution for Regional Power Grid Operation

    Science.gov (United States)

    Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.

    2017-10-01

Many aspects must be taken into consideration when making scheduling plans for a regional grid. In this paper, a systematic multi-time-scale solution for regional power grid operation considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission is proposed. In the time-scale aspect, we discuss the problem from month, week, day-ahead and within-day to day-behind scales, and the system also covers multiple generator types, including thermal units, hydro plants, wind turbines and pumped storage stations. The nine subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been deployed in a provincial power grid in Central China, and the operation results have further verified its effectiveness.

  4. Stochastically Estimating Modular Criticality in Large-Scale Logic Circuits Using Sparsity Regularization and Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Mohammed Alawad

    2015-03-01

This paper considers the problem of how to efficiently measure a large and complex information field with optimally few observations. Specifically, we investigate how to stochastically estimate modular criticality values in a large-scale digital circuit with a very limited number of measurements in order to minimize the total measurement effort and time. We prove that, through sparsity-promoting transform-domain regularization and by strategically integrating compressive sensing with Bayesian learning, more than 98% of the overall measurement accuracy can be achieved with fewer than 10% of the measurements required by a conventional approach that uses exhaustive measurements. Furthermore, we illustrate that the obtained criticality results can be utilized to selectively fortify large-scale digital circuits for operation with narrow voltage headrooms and in the presence of soft errors arising at near-threshold voltage levels, without excessive hardware overheads. Our numerical simulation results show that, by optimally allocating only 10% circuit redundancy, some large-scale benchmark circuits can achieve more than a threefold reduction in overall error probability, whereas randomly distributing the same 10% of hardware resources yields less than a 2% improvement in the target circuit's overall robustness. Finally, we conjecture that our proposed approach can be readily applied to estimate other essential properties of digital circuits that are critical to designing and analyzing them, such as the observability measure in reliability analysis and the path delay estimation in stochastic timing analysis. The only key requirement of our proposed methodology is that these global information fields exhibit a certain degree of smoothness, which is universally true for almost any physical phenomenon.
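The core idea of the abstract, recovering a large field from few measurements via sparsity-promoting regularization, can be sketched with a minimal compressed-sensing recovery. The ISTA solver, problem sizes and random measurement matrix below are generic illustrative stand-ins, not the paper's Bayesian-learning pipeline:

```python
import numpy as np

def ista(A, y, lam, steps):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                           # field size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))  # random measurement matrix
y = A @ x_true                                 # only m = 60 observations of a 200-d field
x_hat = ista(A, y, lam=0.01, steps=2000)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With far fewer measurements than unknowns, the sparse field is still recovered to small relative error, which is the phenomenon the paper exploits at circuit scale.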

  5. Optimal charging scheduling for large-scale EV (electric vehicle) deployment based on the interaction of the smart-grid and intelligent-transport systems

    International Nuclear Information System (INIS)

    Luo, Yugong; Zhu, Tao; Wan, Shuang; Zhang, Shuwei; Li, Keqiang

    2016-01-01

The widespread use of electric vehicles (EVs) is becoming an imminent trend. Research has been done on the scheduling of EVs from the perspective of charging characteristics, improvement of the safety and economy of the power grid, or the traffic jams in the transport system caused by large numbers of EVs driving to charging stations, but there is a lack of systematic studies considering EVs, the power grid and the transport system all together. In this paper, a novel optimal charging scheduling strategy for different types of EVs is proposed based on not only transport-system information, such as road length, vehicle velocity and waiting time, but also grid-system information, such as load deviation and node voltage. In addition, a charging scheduling simulation platform suitable for large-scale EV deployment is developed based on actual charging scenarios. The simulation results show that the optimal strategy improves both transport-system efficiency and grid-system operation: the node voltage drop is decreased, power losses are reduced, and the load curve is flattened. - Highlights: • A novel optimal charging scheduling strategy is proposed for different electric vehicles (EVs). • A simulation platform suitable for large-scale EV deployment is established. • The traffic congestion near charging and battery-switch stations is relieved. • The safety and economy problems of the distribution network are addressed. • The peak-to-valley load difference of the distribution system is reduced.
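A minimal sketch of the grid-side part of such scheduling, assuming a simple valley-filling heuristic rather than the paper's joint grid/transport optimization; the load curve, fleet size and charging parameters are invented for illustration:

```python
import numpy as np

# Hourly base load of a toy feeder, MW (invented numbers; evening peak at hour 17)
base = np.array([52, 48, 45, 44, 46, 55, 70, 85,
                 80, 75, 72, 70, 71, 73, 76, 80,
                 90, 95, 92, 85, 75, 65, 58, 54], dtype=float)

def window(arrive, depart):
    """Hours in the (possibly midnight-wrapping) availability window."""
    return [(arrive + h) % 24 for h in range((depart - arrive) % 24)]

def schedule(base, evs):
    """Valley filling: each EV charges in the least-loaded hours of its window."""
    load = base.copy()
    for ev in evs:
        hours = window(ev["arrive"], ev["depart"])
        for h in sorted(hours, key=lambda i: load[i])[:ev["need"]]:
            load[h] += ev["p"]
    return load

# 5000 commuter EVs: home 18:00-08:00, 4 h of charging at 2 kW (0.002 MW) each
evs = [dict(arrive=18, depart=8, need=4, p=0.002) for _ in range(5000)]
smart = schedule(base, evs)

naive = base.copy()                       # uncontrolled: charge on arrival
for ev in evs:
    for h in range(ev["need"]):
        naive[(ev["arrive"] + h) % 24] += ev["p"]
```

The same charging energy is delivered in both cases, but the scheduled fleet fills the overnight valley instead of stacking on the evening peak.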

  6. ITO with embedded silver grids as transparent conductive electrodes for large area organic solar cells

    Science.gov (United States)

    Patil, Bhushan R.; Mirsafaei, Mina; Piotr Cielecki, Paweł; Fernandes Cauduro, André Luis; Fiutowski, Jacek; Rubahn, Horst-Günter; Madsen, Morten

    2017-10-01

    In this work, development of semi-transparent electrodes for efficient large area organic solar cells (OSCs) has been demonstrated. Electron beam evaporated silver grids were embedded in commercially available ITO coatings on glass, through a standard negative photolithography process, in order to improve the conductivity of planar ITO substrates. The fabricated electrodes with embedded line and square patterned Ag grids reduced the sheet resistance of ITO by 25% and 40%, respectively, showing optical transmittance drops of less than 6% within the complete visible light spectrum for both patterns. Solution processed bulk heterojunction OSCs based on PTB7:[70]PCBM were fabricated on top of these electrodes with cell areas of 4.38 cm2, and the performance of these OSCs was compared to reference cells fabricated on pure ITO electrodes. The Fill Factor (FF) of the large-scale OSCs fabricated on ITO with embedded Ag grids was enhanced by 18% for the line grids pattern and 30% for the square grids pattern compared to that of the reference OSCs. The increase in the FF was directly correlated to the decrease in the series resistance of the OSCs. The maximum power conversion efficiency (PCE) of the OSCs was measured to be 4.34%, which is 23% higher than the PCE of the reference OSCs. As the presented method does not involve high temperature processing, it could be considered a general approach for development of large area organic electronics on solvent resistant, flexible substrates.

  7. SuperGrid or SmartGrid: Competing strategies for large-scale integration of intermittent renewables?

    DEFF Research Database (Denmark)

    Blarke, Morten; M. Jenkins, Bryan

    2013-01-01

This paper defines and compares two strategies for integrating intermittent renewables: SuperGrid and SmartGrid. While conventional energy policy suggests that these strategies may be implemented alongside each other, the paper identifies significant technological and socio-economic conflicts of interest between the two. The article identifies differences between a domestic strategy for the integration of intermittent renewables, vis-à-vis the SmartGrid, and a cross-system strategy, vis-à-vis the SuperGrid. Policy makers and transmission system operators must understand the need for both strategies to evolve in parallel, but in different territories or with strategic integration, so that one strategy does not undermine the feasibility of the other. A strategic zoning strategy is introduced from which attentive societies as well as the global community stand to benefit. The analysis includes a paradigmatic case study from West Denmark which supports the hypothesis that these strategies are mutually exclusive. The case study shows that increasing cross-system transmission capacity jeopardizes the feasibility of SmartGrid technology investments. A political effort is required for establishing dedicated SmartGrid innovation zones, while also redefining infrastructure to avoid a narrow focus on grids and cables.

  8. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component of the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  9. A manganese-hydrogen battery with potential for grid-scale energy storage

    Science.gov (United States)

    Chen, Wei; Li, Guodong; Pei, Allen; Li, Yuzhang; Liao, Lei; Wang, Hongxia; Wan, Jiayu; Liang, Zheng; Chen, Guangxu; Zhang, Hao; Wang, Jiangyan; Cui, Yi

    2018-05-01

    Batteries including lithium-ion, lead-acid, redox-flow and liquid-metal batteries show promise for grid-scale storage, but they are still far from meeting the grid's storage needs such as low cost, long cycle life, reliable safety and reasonable energy density for cost and footprint reduction. Here, we report a rechargeable manganese-hydrogen battery, where the cathode is cycled between soluble Mn2+ and solid MnO2 with a two-electron reaction, and the anode is cycled between H2 gas and H2O through well-known catalytic reactions of hydrogen evolution and oxidation. This battery chemistry exhibits a discharge voltage of 1.3 V, a rate capability of 100 mA cm-2 (36 s of discharge) and a lifetime of more than 10,000 cycles without decay. We achieve a gravimetric energy density of 139 Wh kg-1 (volumetric energy density of 210 Wh l-1), with the theoretical gravimetric energy density of 174 Wh kg-1 (volumetric energy density of 263 Wh l-1) in a 4 M MnSO4 electrolyte. The manganese-hydrogen battery involves low-cost abundant materials and has the potential to be scaled up for large-scale energy storage.
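The reported energy densities can be sanity-checked with back-of-the-envelope electrochemistry using only the numbers stated in the abstract (two electrons per Mn, 1.3 V discharge, 4 M MnSO4). The estimate ignores species masses and the exact thermodynamic cell voltage, so it is an order-of-magnitude check rather than a reproduction of the paper's figures:

```python
# Volumetric energy estimate for a 4 M Mn2+/MnO2 two-electron cathode at 1.3 V.
F = 96485.0                  # Faraday constant, C/mol
n_e = 2                      # electrons transferred per Mn (from the abstract)
V_cell = 1.3                 # discharge voltage, V (from the abstract)
c_mn = 4.0                   # MnSO4 concentration, mol/L (from the abstract)

wh_per_mol = n_e * F * V_cell / 3600.0   # energy per mole of cycled Mn, Wh
wh_per_litre = c_mn * wh_per_mol         # volumetric estimate, Wh/l
```

This crude estimate lands near 280 Wh/l, the same range as the reported theoretical 263 Wh/l; the paper's exact value depends on the voltage and volume basis used.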

  10. Adapting AC Lines to DC Grids for Large-Scale Renewable Power Transmission

    Directory of Open Access Journals (Sweden)

    D. Marene Larruskain

    2014-10-01

All over the world, governments of different countries are nowadays promoting the use of clean energies in order to achieve sustainable energy systems. In this scenario, since the installed capacity is continuously increasing, renewable sources can play an important role. Notwithstanding that, some important problems may appear when connecting these sources to the grid, the overload of distribution lines being one of the most relevant. In fact, renewable generation is usually connected to the nearest AC grid, although this HV system may not have been designed with distributed generation in mind. In the particular case of large wind farms, the electrical grid has to transmit all the power generated by wind energy and, as a consequence, the AC system may become overloaded. It is therefore necessary to determine the impact of wind power transmission so that appropriate measures can be taken. These measures are influenced not only by the amount of power transmitted, but also by the quality of the transmitted power, due to the output voltage fluctuation caused by the highly variable nature of wind. When designing a power grid, although AC systems are usually the most economical solution because of their highly proven technology, HVDC may arise in some cases (e.g. offshore wind farms) as an interesting alternative, offering added values such as lower losses and better controllability. In this way, HVDC technology can solve most of the aforementioned problems and has good potential for future use. Additionally, the fast development of power electronics based on new and powerful semiconductor devices allows the spread of innovative technologies, such as VSC-HVDC, which can be applied to create DC grids. This paper focuses on the main aspects involved in adapting existing overhead AC lines to DC grids, with the objective of improving the transmission of distributed renewable energy to the centers of consumption.

  11. Development of large scale fusion plasma simulation and storage grid on JAERI Origin3800 system

    International Nuclear Information System (INIS)

    Idomura, Yasuhiro; Wang, Xin

    2003-01-01

Under the Numerical EXperiment of Tokamak (NEXT) research project, various fluid, particle, and hybrid codes have been developed. These codes require a computational environment that consists of high-performance processors, a high-speed storage system, and a high-speed parallelized visualization system. In this paper, the performance of the JAERI Origin3800 system is examined from the point of view of these requirements. In the performance tests, it is shown that the representative particle and fluid codes operate with 15-40% processing efficiency on up to 512 processors. A storage area network (SAN) provides high-speed parallel data transfer. A parallel visualization system enables order-of-magnitude faster visualization of large-scale simulation data compared with the previous graphics workstations. Accordingly, an extremely advanced simulation environment is realized on the JAERI Origin3800 system. Recently, development of a storage grid has been underway in order to improve the computational environment of remote users. The storage grid is constructed by a combination of a SAN and a wavelength division multiplexer (WDM). Preliminary tests show that, compared with existing data transfer methods, it enables dramatically faster data transfer (∼100 Gbps) over a wide area network. (author)

  12. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    Science.gov (United States)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖_2 subject to ‖Ax − b‖_2 = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small- to medium-scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as those by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
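The TRSVD building block described above, a randomized SVD with oversampling truncated back to rank k, can be sketched as follows. This is a generic textbook-style implementation, not the authors' code, and the test matrix and parameter choices are illustrative:

```python
import numpy as np

def trsvd(A, k, q=10, seed=0):
    """Rank-k truncated randomized SVD (TRSVD): sketch the range of A with
    k + q Gaussian test vectors, compute the SVD of the small projected
    matrix, then keep only the k largest singular triplets."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + q))
    Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal range basis
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]          # truncate rank k+q -> rank k

# Symmetric test matrix with rapidly decaying singular values (ill-posed flavour)
rng = np.random.default_rng(1)
m = 120
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
s_true = 10.0 ** -np.arange(m, dtype=float)        # sigma_j = 10^(1-j)
A = (U * s_true) @ U.T
Uk, sk, Vtk = trsvd(A, k=8)
rel_err = np.linalg.norm(A - (Uk * sk) @ Vtk, 2) / s_true[0]
```

With fast singular value decay, the rank-8 TRSVD reproduces A essentially to the level of the ninth singular value, which is why such approximations are good surrogates for the exact truncated SVD inside regularization methods.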

  13. Effect of wettability on scale-up of multiphase flow from core-scale to reservoir fine-grid-scale

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.C.; Mani, V.; Mohanty, K.K. [Univ. of Houston, TX (United States)

    1997-08-01

Typical field simulation grid-blocks are internally heterogeneous. The objective of this work is to study how the wettability of the rock affects the scale-up of its multiphase flow properties from core scale to fine-grid reservoir simulation scale (approximately 10′ × 10′ × 5′). Reservoir models need another level of upscaling to the coarse-grid simulation scale, which is not addressed here. Heterogeneity is modeled here as a correlated random field parameterized in terms of its variance and two-point variogram. Variogram models of both finite (spherical) and infinite (fractal) correlation length are included as special cases. Local core-scale porosity, permeability, capillary pressure function, relative permeability functions, and initial water saturation are assumed to be correlated. Water injection is simulated and effective flow properties and flow equations are calculated. For strongly water-wet media, capillarity has a stabilizing/homogenizing effect on multiphase flow. For small variance in permeability and small correlation length, effective relative permeability can be described by capillary equilibrium models. At higher variance and moderate correlation length, the average flow can be described by a dynamic relative permeability. As the oil wettability increases, the capillary stabilizing effect decreases and the deviation from this average flow increases. For fractal fields with large variance in permeability, effective relative permeability is not adequate for describing the flow.
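A correlated random field of the kind used to model heterogeneity can be generated, for example, by spectral filtering of white noise. The Gaussian correlation model below (rather than the spherical or fractal variograms of the study) and all parameter values are illustrative assumptions:

```python
import numpy as np

def correlated_field(shape, corr_len, var, seed=0):
    """Correlated Gaussian random field: FFT-filter white noise with a
    Gaussian kernel of the given correlation length (grid units)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    kx = np.fft.fftfreq(shape[0])[:, None]         # cycles per grid cell
    ky = np.fft.fftfreq(shape[1])[None, :]
    filt = np.exp(-2.0 * (np.pi * corr_len) ** 2 * (kx**2 + ky**2))
    f = np.fft.ifft2(np.fft.fft2(white) * filt).real
    f = (f - f.mean()) / f.std()                   # zero mean, unit variance
    return np.sqrt(var) * f

# Lognormal permeability: exp of a correlated log-permeability field
logk = correlated_field((64, 64), corr_len=8, var=1.0)
perm = np.exp(logk)
```

The variance and correlation length play the same role as the variogram parameters in the abstract: they control how smooth or patchy the permeability field is within a grid-block.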

  14. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    Science.gov (United States)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-01

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
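The benefit of driving the SCM in each subcolumn and averaging, rather than once with the domain-mean forcing, is essentially a Jensen's-inequality effect for any model that is nonlinear in the forcing. The toy threshold "precipitation scheme" below is an invented stand-in for SCAM5 physics, used only to make the effect concrete:

```python
import numpy as np

def precip(w):
    """Toy nonlinear scheme: precipitation only where ascent exceeds a threshold."""
    return np.maximum(w - 1.0, 0.0) ** 1.5

# Subcolumn vertical velocities: a convective system occupies one subcolumn
w_sub = np.array([0.2, 0.6, 3.0, 0.4])

p_mean_forcing = precip(w_sub.mean())   # one run driven by the domain mean
p_subcolumns = precip(w_sub).mean()     # run per subcolumn, then average
```

Averaging the forcing first almost erases the convective signal, whereas running per subcolumn retains it, which mirrors the frontal-passage cases discussed in the abstract.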

  15. New grid-planning and certification approaches for the large-scale offshore-wind farm grid-connection systems

    Energy Technology Data Exchange (ETDEWEB)

    Heising, C.; Bartelt, R. [Avasition GmbH, Dortmund (Germany); Zadeh, M. Koochack; Lebioda, T.J.; Jung, J. [TenneT Offshore GmbH, Bayreuth (Germany)

    2012-07-01

    Stable operation of the offshore-wind farms (OWF) and stable grid connection under stationary and dynamic conditions are essential to achieve a stable public power supply. To reach this aim, adequate grid-planning and certification approaches are a major advantage. Within this paper, the fundamental characteristics of the offshore-wind farms and their grid-connection systems are given. The main goal of this research project is to study the stability of the offshore grid especially in terms of subharmonic stability for the likely future extension stage of the offshore grids i.e. having parallel connection of two or more HVDC links and for certain operating scenarios e.g. overload scenario. The current requirements according to the grid code are not the focus of this research project. The goal is to study and define potential additional grid code requirements, simulations, tests and grid planning methods for the future. (orig.)

  16. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    Science.gov (United States)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-02-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.
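The refine-where-needed principle behind such adaptive meshes can be illustrated in one dimension: bisect every interval whose linear-interpolation error at the midpoint exceeds a tolerance. This sketch, with an invented test function featuring a sharp internal layer (loosely analogous to a capping inversion), is far simpler than the tree-based solver discussed above:

```python
import numpy as np

def adapt(x, f, tol, max_pass=12):
    """1-D adaptive refinement: bisect every interval whose midpoint
    interpolation error exceeds tol (no coarsening, for simplicity)."""
    for _ in range(max_pass):
        mid = 0.5 * (x[:-1] + x[1:])
        err = np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))
        bad = err > tol
        if not bad.any():
            break
        x = np.sort(np.concatenate([x, mid[bad]]))
    return x

f = lambda t: np.tanh(40.0 * (t - 0.3))   # sharp layer at t = 0.3
x = adapt(np.linspace(0.0, 1.0, 9), f, tol=1e-3)
```

The resulting grid is dense only around the layer at t = 0.3 and stays coarse elsewhere, the same resource-concentration behaviour the abstract reports for the growing boundary layer.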

  17. Ensemble manifold regularization.

    Science.gov (United States)

    Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng

    2012-06-01

We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and from overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so that it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic in learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable to a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence of EMR to the deterministic matrix at a root-n rate. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
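The EMR idea of replacing a grid search over manifold hyperparameters with a learned convex combination of candidate graph Laplacians can be sketched on a toy two-cluster dataset. The dataset, kernel choices and the softmax-style reweighting below are simple illustrative stand-ins for the paper's constrained optimization of the combination weights:

```python
import numpy as np

def knn_laplacian(X, k):
    """Unnormalized graph Laplacian of a symmetrized kNN graph."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    W = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]        # skip self (distance 0)
    for i, js in enumerate(idx):
        W[i, js] = W[js, i] = 1.0
    return np.diag(W.sum(1)) - W

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.5, (30, 2)), rng.normal(2, 0.5, (30, 2))])
y = np.array([-1.0] * 30 + [1.0] * 30)
labeled = np.zeros(60, bool); labeled[[0, 30]] = True   # one label per cluster

Ls = [knn_laplacian(X, k) for k in (3, 5, 10)]     # candidate "manifold guesses"
mu = np.full(len(Ls), 1.0 / len(Ls))               # convex combination weights
J = np.diag(labeled.astype(float))
for _ in range(5):
    L = sum(m * Lk for m, Lk in zip(mu, Ls))       # composite Laplacian
    f = np.linalg.solve(J + 0.1 * L + 1e-6 * np.eye(60), J @ y)
    r = np.array([f @ Lk @ f for Lk in Ls])        # roughness under each candidate
    mu = np.exp(-r / r.mean()); mu /= mu.sum()     # favour smoother candidates
acc = np.mean(np.sign(f) == y)
```

With only two labels, the composite-Laplacian smoother propagates them across each cluster, and the weights mu shift toward the candidate graphs under which the learned function is smoothest.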

  18. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    Science.gov (United States)

    Matsui, H.; Buffett, B. A.

    2017-12-01

The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layer. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale-similarity model, in which four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used for convection-driven dynamos in a rotating plane layer and in a spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model, implemented in Calypso, a numerical dynamo model based on a spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference, along with unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results from comparisons among these simulations and the role of the small-scale fields in the large-scale dynamics through the SGS terms in the LES.
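The scale-similarity construction can be illustrated in one dimension: apply a test filter G and estimate the SGS stress as tau = G(u·u) − G(u)·G(u). The periodic-domain spectral Gaussian filter below is a generic stand-in for the spherical-harmonic filtering of Jekeli (1981) used in the actual model, and the velocity field is invented:

```python
import numpy as np

def gaussian_filter_1d(u, width):
    """Spectral Gaussian filter on a periodic 1-D field (width in grid units)."""
    k = np.fft.fftfreq(u.size)
    return np.fft.ifft(np.fft.fft(u) * np.exp(-2.0 * (np.pi * width * k) ** 2)).real

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rng = np.random.default_rng(2)
u = np.sin(x) + 0.2 * rng.standard_normal(n)   # large-scale mode + small-scale "turbulence"

G = lambda v: gaussian_filter_1d(v, width=6.0)
tau = G(u * u) - G(u) * G(u)                   # scale-similarity SGS stress estimate
```

Because filtering removes small-scale variance, tau is positive on average: it measures the momentum flux carried by the scales the filter discards, which is exactly what the four SGS terms in the dynamo model represent for their respective fields.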

  19. Self-assembled large scale metal alloy grid patterns as flexible transparent conductive layers

    Science.gov (United States)

    Mohl, Melinda; Dombovari, Aron; Vajtai, Robert; Ajayan, Pulickel M.; Kordas, Krisztian

    2015-09-01

    The development of scalable synthesis techniques for optically transparent, electrically conductive coatings is in great demand due to the constantly increasing market price and limited resources of indium for the indium tin oxide (ITO) materials currently applied in most optoelectronic devices. This work pioneers the scalable synthesis of transparent conductive films (TCFs) by exploiting coffee-ring-effect deposition coupled with reactive inkjet printing and subsequent chemical copper plating. Here we report two different promising alternatives to replace ITO: palladium-copper (PdCu) grid patterns and silver-copper (AgCu) fish-scale-like structures printed on flexible poly(ethylene terephthalate) (PET) substrates, achieving sheet resistance values as low as 8.1 and 4.9 Ω/sq, with corresponding optical transmittance of 79% and 65% at 500 nm, respectively. Both films show excellent adhesion and preserve their structural integrity and good contact with the substrate under severe bending, showing less than a 4% decrease in conductivity even after 10^5 cycles. Transparent conductive films for capacitive touch screens and pixels of microscopic resistive electrodes are demonstrated.

  20. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  1. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected over a local network to form a grid-based, cluster-to-cluster distributed computing environment. To perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models at various scales. These correlated multi-scale structural system tasks are distributed among clusters, connected together in a multi-level hierarchy, and coordinated over the internet. The software framework supporting this multi-scale structural simulation approach is also presented. The program architecture allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to demonstrate the proposed concept. The simulation results show that the software framework increases the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing multi-scale structural analysis simulations.

  2. Energy modeling and analysis for optimal grid integration of large-scale variable renewables using hydrogen storage in Japan

    International Nuclear Information System (INIS)

    Komiyama, Ryoichi; Otsuki, Takashi; Fujii, Yasumasa

    2015-01-01

    Although the extensive introduction of VRs (variable renewables) will play an essential role in resolving energy and environmental issues in Japan after the Fukushima nuclear accident, their large-scale integration would pose a technical challenge for grid management; as a technical countermeasure, hydrogen storage, like rechargeable batteries, receives much attention for controlling the intermittency of VR power output. For properly planning renewable energy policies, energy system modeling is important for quantifying and qualitatively understanding its potential benefits and impacts. This paper analyzes the optimal grid integration of large-scale VRs using hydrogen storage in Japan by developing a high time-resolution optimal power generation mix model. Simulation results suggest that the installation of hydrogen storage is promoted both by its cost reduction and by CO2 regulation policy. In addition, hydrogen storage turns out to be suitable for storing VR energy over long periods of time. Finally, a sensitivity analysis of rechargeable battery cost shows that hydrogen storage is economically competitive with rechargeable batteries; the costs of both technologies should be more elaborately recognized when formulating effective energy policies to integrate massive VRs into the country's power system in an economical manner.

    Highlights: • Authors analyze hydrogen storage coupled with VRs (variable renewables). • Simulation analysis is done by developing an optimal power generation mix model. • Hydrogen storage installation is promoted by its cost decline and CO2 regulation. • Hydrogen storage is suitable for storing VR energy over long periods of time. • Hydrogen storage is economically competitive with rechargeable batteries.

  3. Signatures of non-universal large scales in conditional structure functions from various turbulent flows

    International Nuclear Information System (INIS)

    Blum, Daniel B; Voth, Greg A; Bewley, Gregory P; Bodenschatz, Eberhard; Gibert, Mathieu; Xu Haitao; Gylfason, Ármann; Mydlarski, Laurent; Yeung, P K

    2011-01-01

    We present a systematic comparison of conditional structure functions in nine turbulent flows. The flows studied include forced isotropic turbulence simulated on a periodic domain, passive grid wind tunnel turbulence in air and in pressurized SF6, active grid wind tunnel turbulence (in both synchronous and random driving modes), the flow between counter-rotating discs, oscillating grid turbulence, and the flow in the Lagrangian exploration module (in both constant and random driving modes). We compare longitudinal Eulerian second-order structure functions conditioned on the instantaneous large-scale velocity in each flow to assess the ways in which the large scales affect the small scales in a variety of turbulent flows. Structure functions are shown to have larger values when the large-scale velocity deviates significantly from the mean in most flows, suggesting that dependence on the large scales is typical in many turbulent flows. The effects of the large-scale velocity on the structure functions can be quite strong, with the structure function varying by up to a factor of 2 when the large-scale velocity deviates from the mean by ±2 standard deviations. In several flows, the effects of the large-scale velocity are similar at all the length scales we measured, indicating that the large-scale effects are scale independent. In a few flows, the effects of the large-scale velocity are larger at the smallest length scales.
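    A conditional second-order structure function of the kind compared here can be estimated from a velocity record roughly as follows (a synthetic 1-D sketch with arbitrary filter widths and thresholds, not the authors' analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(100_000)             # synthetic 1-D velocity record
u = np.convolve(u, np.ones(5) / 5, "same")   # add some short-range correlation

r = 10                                       # separation in samples
du = u[r:] - u[:-r]                          # longitudinal velocity increments

# Large-scale velocity via a wide moving average (an ad hoc low-pass filter).
k = 501
uL = np.convolve(u, np.ones(k) / k, "same")[: du.size]

# Second-order structure function conditioned on |u_L| above/below 1 std.
thr = uL.std()
D_hi = np.mean(du[np.abs(uL) > thr] ** 2)
D_lo = np.mean(du[np.abs(uL) <= thr] ** 2)
```

    For real turbulence data, a large-scale dependence would show up as a systematic difference between conditional estimates like `D_hi` and `D_lo` across separations r.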

  4. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
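    The basic dual-decomposition loop — each unit responding locally to a coordinating price, with the price adjusted until the power balance closes — can be sketched for quadratic unit costs (hypothetical coefficients, not the paper's model):

```python
import numpy as np

a = np.array([1.0, 2.0, 4.0])    # quadratic cost coefficients, cost_i(p) = a_i p^2
P = 7.0                          # total power imbalance the units must cover
lam, alpha = 0.0, 0.5            # dual price and step size

for _ in range(200):
    p = lam / (2 * a)            # each unit's local argmin of a_i p^2 - lam p
    lam += alpha * (P - p.sum()) # coordinator raises the price if supply is short

# p -> [4, 2, 1], lam -> 8 at the optimum for these coefficients
```

    Each iteration needs only local cost information from the units and the aggregate imbalance at the coordinator, which is what makes the scheme suitable for large-scale Smart Grid balancing.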

  5. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance the energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are twofold: a simple example avoids unnecessary complexity, since we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.
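    A minimal version of regular sampling — averaging unstructured point data into the cells of a regular grid — might look like this (an illustrative sketch, not the LANL pipeline code):

```python
import numpy as np

def sample_to_grid(points, values, nx, ny, extent):
    """Average scattered values into an nx-by-ny regular grid (simple sketch)."""
    x0, x1, y0, y1 = extent
    ix = np.clip(((points[:, 0] - x0) / (x1 - x0) * nx).astype(int), 0, nx - 1)
    iy = np.clip(((points[:, 1] - y0) / (y1 - y0) * ny).astype(int), 0, ny - 1)
    grid = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(grid, (iy, ix), values)   # unbuffered accumulation per cell
    np.add.at(count, (iy, ix), 1)
    # Cells with no samples are left as NaN.
    return np.divide(grid, count, out=np.full_like(grid, np.nan), where=count > 0)

pts = np.array([[0.1, 0.1], [0.9, 0.9], [0.15, 0.12]])
g = sample_to_grid(pts, np.array([1.0, 3.0, 2.0]), 2, 2, (0, 1, 0, 1))
# g[0, 0] -> 1.5 (two points averaged), g[1, 1] -> 3.0
```

    Real unstructured-grid sampling would interpolate within cells rather than average, but the resource trade-off (cheap resampling versus expensive simulation) is the same.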

  6. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  7. Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations.

    Science.gov (United States)

    Crăciun, Cora

    2014-08-01

    CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from geometric perspective. However, some grids with similar homogeneity degree generate different quality simulated spectra. This paper evaluates the grids from EPR perspective, by defining two metrics depending on the spin system characteristics and the grid Voronoi tessellation. The first metric determines if the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies if the adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Beside a series of well known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics bring more information than the homogeneity quantities and are better related to the grids' EPR behaviour, for different spin system symmetries. The metrics' efficiency and limits are finally verified for grids generated from the initial ones, by using the original or magnetic field-constraint variants of the Spherical Centroidal Voronoi Tessellation method. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Influence of grid aspect ratio on planetary boundary layer turbulence in large-eddy simulations

    Directory of Open Access Journals (Sweden)

    S. Nishizawa

    2015-10-01

    We examine the influence of the grid aspect ratio of horizontal to vertical grid spacing on turbulence in the planetary boundary layer (PBL) in a large-eddy simulation (LES). In order to distinguish these influences from other artificial effects caused by numerical schemes, we used a fully compressible meteorological LES model with a fully explicit scheme of temporal integration. The influences are investigated through a series of sensitivity tests with parameter sweeps of spatial resolution and grid aspect ratio. We confirmed that the mixing length of the eddy viscosity and diffusion due to sub-grid-scale turbulence plays an essential role in reproducing the theoretical −5/3 slope of the energy spectrum. If we define the filter length in LES modeling based on consideration of the numerical scheme, and introduce a corrective factor for the grid aspect ratio into the mixing length, the theoretical slope of the energy spectrum can be obtained; otherwise, spurious energy piling appears at high wave numbers. We also found that the grid aspect ratio influences the turbulence statistics, especially the skewness of the vertical velocity near the top of the PBL, which becomes spuriously large at large aspect ratios, even if a reasonable spectrum is obtained.

  9. Turbulence Enhancement by Fractal Square Grids: Effects of the Number of Fractal Scales

    Science.gov (United States)

    Omilion, Alexis; Ibrahim, Mounir; Zhang, Wei

    2017-11-01

    Fractal square grids offer a unique solution for passive flow control, as they can produce wakes with a distinct turbulence intensity peak and a prolonged turbulence decay region at the expense of only minimal pressure drop. While previous studies have established this characteristic of fractal square grids, how the number of scales (or fractal iterations N) affects the turbulence production and decay of the induced wake is still not well understood. The focus of this research is to determine the relationship between the fractal iteration N and the turbulence produced in the wake flow using well-controlled water-tunnel experiments. Particle image velocimetry (PIV) is used to measure the instantaneous velocity fields downstream of four different fractal grids with an increasing number of scales (N = 1, 2, 3, and 4) and a conventional single-scale grid. By comparing the turbulent scales and statistics of the wake, we are able to determine how each iteration affects the peak turbulence intensity and the production/decay of turbulence from the grid. In light of the ability of these fractal grids to increase turbulence intensity with low pressure drop, this work can potentially benefit a wide variety of applications where energy-efficient mixing or convective heat transfer is a key process.

  11. Investigating the Impact of Shading Effect on the Characteristics of a Large-Scale Grid-Connected PV Power Plant in Northwest China

    Directory of Open Access Journals (Sweden)

    Yunlin Sun

    2014-01-01

    Northwest China is an ideal region for large-scale grid-connected PV system installation due to its abundant solar radiation and vast areas. For grid-connected PV systems in this region, one of the key issues is how to reduce the shading effect as much as possible to maximize power generation. In this paper, a shading simulation model for PV modules is established and its reliability is verified under the standard testing condition (STC) in the laboratory. Based on an investigation of a 20 MWp grid-connected PV plant in northwest China, the typical shading phenomena are classified and analyzed individually, such as shading from power-distribution buildings and wire poles, shading from plants and bird droppings, and shading from front-row PV arrays. A series of experiments is also conducted on-site to evaluate and compare the impacts of the different typical shading forms. Finally, some feasible solutions are proposed to avoid or reduce the shading effect of PV systems operating in this region.

  12. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.

  13. Large-scale introduction of wind power stations in the Swedish grid: a simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, L

    1978-08-01

    This report describes a simulation study of the factors to be considered if wind power were to be introduced into the south Swedish power grid on a large scale. The simulations are based on a heuristic power generation planning model developed for the purpose. The heuristic technique reflects the actual operating strategies of a large power company with suitable accuracy. All simulations refer to certain typical days in 1976, to which all wind data and system characteristics are related. The installed amount of wind power is not subject to optimization. All differences between planned and actual wind power generation are equalized by regulating the hydro power. The simulations differ in how the installed amount of wind power is handled in the power generation planning. The simulations indicate that the power system examined could well bear an introduction of wind power up to a level of 20% of the total installed power. This result is of course valid only for the days examined and does not necessarily apply to the present-day structure of the system.

  14. Tile-Based Semisupervised Classification of Large-Scale VHR Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Haikel Alhichri

    2018-01-01

    This paper deals with the problem of the classification of large-scale very high-resolution (VHR) remote sensing (RS) images in a semisupervised scenario, where we have a limited training set (fewer than ten training samples per class). Typical pixel-based classification methods are infeasible for large-scale VHR images. Thus, as a practical and efficient solution, we propose to subdivide the large image into a grid of tiles and then classify the tiles instead of classifying pixels. Our proposed method uses the power of a pretrained convolutional neural network (CNN) to first extract descriptive features from each tile. Next, a neural network classifier (composed of 2 fully connected layers) is trained in a semisupervised fashion and used to classify all remaining tiles in the image. This presents a coarse classification of the image, which is sufficient for many RS applications. The second contribution deals with the employment of semisupervised learning to improve the classification accuracy. We present a novel semisupervised approach which exploits both the spectral and spatial relationships embedded in the remaining unlabelled tiles. In particular, we embed a spectral graph Laplacian in the hidden layer of the neural network. In addition, we apply regularization of the output labels using a spatial graph Laplacian and the random walker algorithm. Experimental results obtained by testing the method on two large-scale images acquired by the IKONOS2 sensor reveal promising capabilities of this method in terms of classification accuracy even with fewer than ten training samples per class.
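    A graph Laplacian regularizer of the kind used in such semisupervised schemes, tr(FᵀLF), penalizes label disagreement between connected tiles; a toy computation (hypothetical tile affinities, not the paper's spectral/spatial graphs):

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

# Affinity between 4 tiles arranged in a chain (hypothetical values).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = laplacian(W)

F = np.array([[0.9, 0.1],   # soft label predictions per tile (2 classes)
              [0.8, 0.2],
              [0.2, 0.8],
              [0.1, 0.9]])
# tr(F^T L F) = sum over edges of w_ij * ||F_i - F_j||^2:
# small where neighbors agree, large across label jumps.
reg = np.trace(F.T @ L @ F)
```

    Adding `reg` (scaled by a hyperparameter) to the supervised loss pulls the predictions of strongly connected unlabelled tiles toward each other.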

  15. Modeling and Coordinated Control Strategy of Large Scale Grid-Connected Wind/Photovoltaic/Energy Storage Hybrid Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Lingguo Kong

    2015-01-01

    An AC-linked large-scale wind/photovoltaic (PV)/energy storage (ES) hybrid energy conversion system for grid-connected application is proposed in this paper. The wind energy conversion system (WECS) and the PV generation system are the primary power sources of the hybrid system. The ES system, including a battery and a fuel cell (FC), is used as a backup and power-regulation unit to ensure continuous power supply and to compensate for the intermittent nature of wind and photovoltaic resources. A static synchronous compensator (STATCOM) is employed to support the AC-linked bus voltage and improve the low-voltage ride-through (LVRT) capability of the proposed system. An overall coordinated power control strategy is designed to manage the real-power and reactive-power flows among the different energy sources, the storage unit, and the STATCOM in the hybrid system. A simulation case study of the large-scale hybrid energy conversion system, carried out on the Western System Coordinating Council (WSCC) 3-machine, 9-bus test system, has been developed using the DIgSILENT PowerFactory software platform. The performance of the hybrid system under different scenarios has been verified by simulation studies using practical load demand profiles and real weather data.

  16. Large scale renewable power generation advances in technologies for generation, transmission and storage

    CERN Document Server

    Hossain, Jahangir

    2014-01-01

    This book focuses on the issues of integrating large-scale renewable power generation into existing grids. It includes a new protection technique for renewable generators, along with a review of the current status of smart grids.

  17. Suggested Grid Code Modifications to Ensure Wide-Scale Adoption of Photovoltaic Energy in Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Enjeti, Prasad; Blaabjerg, Frede

    2013-01-01

    Current grid standards seem largely to require low-power (e.g. several kilowatts) single-phase photovoltaic (PV) systems to operate at unity power factor with maximum power point tracking, and to disconnect from the grid under grid faults. However, in the case of a wide-scale penetration of single-phase PV systems in the distribution grid, disconnection under grid faults can contribute to: a) voltage flickers, b) power outages, and c) system instability. In this paper, grid code modifications are explored for wide-scale adoption of PV systems in the distribution grid. Recently, Italy and Japan have undertaken a major review of standards for PV power conversion systems connected to low-voltage networks. In view of this, the importance of low-voltage ride-through for single-phase PV power systems under grid faults, along with reactive power injection, is studied in this paper. Three...

  18. The Accuracy of Remapping Irregularly Spaced Velocity Data onto a Regular Grid and the Computation of Vorticity

    National Research Council Canada - National Science Library

    Cohn, R

    1998-01-01

    .... This technique may be viewed as the molecular counterpart of PIV. To take advantage of standard data processing techniques, the MTV data need to be remapped onto a regular grid with a uniform spacing...

  19. The Accuracy of Remapping Irregularly Spaced Velocity Data onto a Regular Grid and the Computation of Vorticity

    National Research Council Canada - National Science Library

    Cohn, Richard

    1999-01-01

    .... This technique may be viewed as the molecular counterpart of PIV. To take advantage of standard data processing techniques, the MTV data need to be remapped onto a regular grid with a uniform spacing...

  20. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    Science.gov (United States)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. In contrast to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries onto areas with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and an overall smooth transition in grid cell size. The second method addresses the more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids can potentially achieve alignment of grid lines with large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balancing. Since the grids are orthogonal and curvilinear, they can be readily utilized by the majority of ocean general circulation models, which are based on finite differences and require grid orthogonality. The proposed grid generation algorithms can also be applied to grid generation for regional ocean modeling, where a complex land-sea distribution is present.

  1. Supporting Regularized Logistic Regression Privately and Efficiently

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, the social sciences, information technology, and so on. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks, as widely seen in genetics, epidemiology, the social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications in various disciplines, including genetic and biomedical studies, the smart grid, network analysis, etc. PMID:27271738
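    For reference, the underlying (non-private) model is plain L2-regularized logistic regression, which can be fit by gradient descent as in this toy sketch; the paper's contribution is the cryptographic protection wrapped around such a model, not the fitting loop itself:

```python
import numpy as np

def fit_l2_logreg(X, y, lam=0.1, lr=0.5, iters=2000):
    """L2-regularized logistic regression via gradient descent (bias unpenalized)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probabilities
        grad = X.T @ (p - y) / len(y)        # log-loss gradient
        grad[1:] += lam * w[1:]              # L2 penalty on non-bias weights
        w -= lr * grad
    return w

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # bias + feature
y = np.array([0.0, 0.0, 1.0, 1.0])
w = fit_l2_logreg(X, y)
preds = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
# preds matches y for this separable toy set
```

    In the collaborative setting, the gradient sums decompose across institutions, which is what makes secure distributed evaluation of exactly this kind of update feasible.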

  4. A regularized vortex-particle mesh method for large eddy simulation

    Science.gov (United States)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test filters compatible with the aforementioned regularization functions. Furthermore, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.

  5. Geo-spatial Cognition on Human's Social Activity Space Based on Multi-scale Grids

    Directory of Open Access Journals (Sweden)

    ZHAI Weixin

    2016-12-01

    Full Text Available Widely applied location-aware devices, including mobile phones and GPS receivers, have provided great convenience for collecting large volumes of individuals' geographical information. Research on human social activity space has attracted an increasing number of researchers. In our research, based on location-based Flickr data from 2004 to May 2014 in China, we choose five levels of spatial grids to form a multi-scale framework for investigating the correlation between scale and the geo-spatial cognition of human social activity space. The HT-index, a fractal measure inspired by Alexander, is selected to estimate the maturity of social activity at different scales. The results indicate that the scale characteristics are related to spatial cognition to a certain extent. It is favorable to use the spatial grid as a tool to control scales for geo-spatial cognition of human social activity space.

  6. Large Scale Solar Power Integration in Distribution Grids : PV Modelling, Voltage Support and Aggregation Studies

    NARCIS (Netherlands)

    Samadi, A.

    2014-01-01

    Long term supporting schemes for photovoltaic (PV) system installation have led to accommodating large numbers of PV systems within load pockets in distribution grids. High penetrations of PV systems can cause new technical challenges, such as voltage rise due to reverse power flow during light load

  7. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...

  8. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4 % of the world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared. Then, a state of the art of the electrolysis modules currently available was made. A review of the large scale electrolysis plants that have been installed in the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers were discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)

  9. Stability of Grid-Connected PV Inverters with Large Grid Impedance Variation

    DEFF Research Database (Denmark)

    Liserre, Marco; Teodorescu, Remus; Blaabjerg, Frede

    2004-01-01

    Photovoltaic (PV) inverters used in dispersed power generation of houses in the range of 1-5 kW are currently available from several manufacturers. However, large grid impedance variation is challenging the control and the grid filter design in terms of stability. In fact the PV systems are well suited for loads connected at a great distance from the transformer (long wires) and the situation becomes even more difficult in low-developed remote areas characterized by low power transformers and long distribution wires with high grid impedance. Hence a theoretical analysis is needed because the grid impedance variation leads to dynamic and stability problems both in the low frequency range (around the current controller bandwidth frequency) as well as in the high frequency range (around the LCL-filter resonance frequency). In the low frequency range the possible variation of the impedance challenges...

  10. 78 FR 70076 - Large Scale Networking (LSN)-Middleware and Grid Interagency Coordination (MAGIC) Team

    Science.gov (United States)

    2013-11-22

    ... projects. The MAGIC Team reports to the Large Scale Networking (LSN) Coordinating Group (CG). Public... Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD... MAGIC Team meetings are held on the first Wednesday of each month, 2:00-4:00 p.m., at the National...

  11. 8th international workshop on large-scale integration of wind power into power systems as well as on transmission networks for offshore wind farms. Proceedings

    International Nuclear Information System (INIS)

    Betancourt, Uta; Ackermann, Thomas

    2009-01-01

    Within the 8th International Workshop on Large-Scale Integration of Wind Power into Power Systems as well as on Transmission Networks for Offshore Wind Farms, held on 14-15 October 2009 in Bremen (Federal Republic of Germany), lectures and posters were presented in the following sessions: (1) Keynote session and panel; (2) Grid integration studies and experience: Europe; (3) Connection of offshore wind farms; (4) Wind forecast; (5) High voltage direct current (HVDC); (6) German grid code issues; (7) Offshore grid connection; (8) Grid integration studies and experience: North America; (9) SUPWIND - Decision support tools for large scale integration of wind; (10) Windgrid - Wind on the grid: An integrated approach; (11) IEA Task 25; (12) Grid code issues; (13) Market Issues; (14) Offshore Grid; (15) Modelling; (16) Wind power and storage; (17) Power system balancing; (18) Wind turbine performance; (19) Modelling and offshore transformer.

  12. Ecogrid EU - a large scale smart grids demonstration of real time market-based integration of numerous small DER and DR

    DEFF Research Database (Denmark)

    Ding, Yi; Nyeng, Preben; Ostergaard, Jacob

    2012-01-01

    This paper provides an overview of the Ecogrid EU project, which is a large-scale demonstration project on the Danish island Bornholm. It provides Europe a fast track evolution towards smart grid dissemination and deployment in the distribution network. The objective of Ecogrid EU is to illustrate that modern information and communication technology (ICT) and innovative market solutions can enable the operation of a distribution power system with more than 50% renewable energy sources (RES). This will be a major contribution to the European 20-20-20 goals. Furthermore, the proposed Ecogrid EU market will offer the transmission system operator (TSO) additional balancing resources and ancillary services by facilitating the participation of small-scale distributed energy resources (DERs) and small end-consumers in the existing electricity markets. The majority of the 2000 participating residential...

  13. Methods and apparatus of analyzing electrical power grid data

    Science.gov (United States)

    Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.

    2017-09-05

    Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.

  14. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of different simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation that has traditionally been conducted on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the function for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of the fastest supercomputers in Japan, operated by the Tokyo Institute of Technology. First we performed a strong scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak scaling test where the model sizes (number of grid points) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to the simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
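    The strong- and weak-scaling figures quoted in this abstract can be sanity-checked with standard parallel-scaling arithmetic; a brief sketch, where the formulas are textbook definitions and the numbers are taken directly from the abstract:

```python
# Strong scaling: fixed problem size, more GPUs -> speed-up / efficiency.
# Weak scaling: problem size grows with GPU count -> ideally constant time.

def strong_scaling_efficiency(speedup, n_gpus):
    """Parallel efficiency = achieved speed-up / ideal speed-up (= GPU count)."""
    return speedup / n_gpus

# Reported strong-scaling results on the ~22-million-grid-point model:
eff_4 = strong_scaling_efficiency(3.2, 4)    # 3.2x speed-up on 4 GPUs
eff_16 = strong_scaling_efficiency(7.3, 16)  # 7.3x speed-up on 16 GPUs

# Weak scaling: grid points per GPU when 22 billion points run on 1024 GPUs,
# which stays close to the ~22-million-point base model per device.
grids_per_gpu = 22_000_000_000 / 1024
```

    The drop in efficiency from 4 to 16 GPUs (about 0.80 to 0.46) is typical of strong scaling, where communication overhead grows relative to per-GPU work, while the near-constant per-GPU load explains the "almost perfect linearity" in the weak-scaling test.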

  15. An aggregate model of grid-connected, large-scale, offshore wind farm for power stability investigations-importance of windmill mechanical system

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Knudsen, H.

    2002-01-01

    Because the shaft system gives a soft coupling between the rotating wind turbine and the induction generator, the large-scale wind farm cannot always be reduced to a one-machine equivalent, and the use of multi-machine equivalents will be necessary for reaching accuracy of the investigation results. This will be the case with irregular wind distribution over the wind farm area. The torsion mode of the shaft systems of large wind turbines is commonly in the range of 1-2 Hz, close to typical values of the electric power grid eigenfrequencies, which is why there is a risk of oscillation between the wind turbines and the entire network. All these phenomena are different compared to previous experiences with modelling of conventional power plants with synchronous generators and stiff shaft systems.

  16. Grid Integration Issues for Large Scale Wind Power Plants (WPPs)

    DEFF Research Database (Denmark)

    Wu, Qiuwei; Xu, Zhao; Østergaard, Jacob

    2010-01-01

    The penetration level of wind power in power systems over the world has been increasing very fast in the last few years and is still keeping a fast growth rate. It is just a matter of time before wind power will be comparable to conventional power generation. Therefore, many transmission system operators (TSOs) around the world have come up with grid codes to request that wind power plants (WPPs) have more or less the same operating capability as conventional power plants. Grid code requirements from other TSOs are under development. This paper covers the steady state...

  17. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    Science.gov (United States)

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. Optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of general multi-protocol label switching (GMPLS). However, the practical performance of SDN based DCN for large scale optical networks, which is very important for the technology selection in the future optical network deployment, has not been evaluated up to now. In this paper we have built a large scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as with different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof for the future network deployment.

  18. Algorithm 873: LSTRS: MATLAB Software for Large-Scale Trust-Region Subproblems and Regularization

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Santos, Sandra A.; Sorensen, Danny C.

    2008-01-01

    A MATLAB 6.0 implementation of the LSTRS method is presented. LSTRS was described in Rojas, M., Santos, S.A., and Sorensen, D.C., A new matrix-free method for the large-scale trust-region subproblem, SIAM J. Optim., 11(3):611-646, 2000. LSTRS is designed for large-scale quadratic problems with one...... at each step. LSTRS relies on matrix-vector products only and has low and fixed storage requirements, features that make it suitable for large-scale computations. In the MATLAB implementation, the Hessian matrix of the quadratic objective function can be specified either explicitly or in the form of a matrix-vector multiplication routine. Therefore, the implementation preserves the matrix-free nature of the method. A description of the LSTRS method and of the MATLAB software, version 1.2, is presented. Comparisons with other techniques and applications of the method are also included. A guide...
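    For concreteness, the trust-region subproblem that LSTRS targets is: minimize (1/2) xᵀHx + gᵀx subject to ||x|| <= delta. The following small dense solver illustrates the problem by eigendecomposition and bisection on the secular equation; LSTRS itself is matrix-free and built for large scale, so this sketch (which also ignores the so-called hard case) is purely didactic and all names in it are assumptions.

```python
import numpy as np

def trs_dense(H, g, delta):
    """Solve min 0.5*x'Hx + g'x s.t. ||x|| <= delta (dense, didactic)."""
    vals, V = np.linalg.eigh(H)
    gt = V.T @ g
    # Interior solution if H is positive definite and the Newton step fits.
    if vals[0] > 0:
        x = V @ (-gt / vals)
        if np.linalg.norm(x) <= delta:
            return x
    # Otherwise find lam > max(0, -lambda_min) with ||x(lam)|| = delta by
    # bisection on the secular equation ||(H + lam*I)^{-1} g|| = delta.
    lo = max(0.0, -vals[0]) + 1e-12
    hi = lo + 1.0
    while np.linalg.norm(gt / (vals + hi)) > delta:
        hi *= 2.0
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        if np.linalg.norm(gt / (vals + lam)) > delta:
            lo = lam
        else:
            hi = lam
    return V @ (-gt / (vals + hi))

# Indefinite Hessian, so the constraint must be active at the solution.
H = np.array([[2.0, 0.0], [0.0, -1.0]])
g = np.array([1.0, 1.0])
x = trs_dense(H, g, delta=1.0)
```

    Because the Hessian is indefinite, the returned point lies on the boundary ||x|| = delta, matching the optimality conditions for the subproblem.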

  19. Techno-economic analysis of large-scale integration of solar power plants in the European grid

    Energy Technology Data Exchange (ETDEWEB)

    Tielens, Pieter; Ergun, Hakan; Hertem, Dirk van [Katholieke Universiteit Leuven (Belgium). Electrical Engineering Dept.

    2012-07-01

    In this paper different options to connect large solar power plants in North Africa to the European power system are compared from a transmission system investment point of view. Three different possible DC connections from Tunisia to Italy are investigated from a cost-based perspective. In the second part of the paper, the impact of the power fluctuations from CSP and PV power plants on the frequency control is examined in a qualitative manner. It is shown that the frequency response mainly depends on the amount of PV installed and the inertia present in the grid. The results of the simulations give a first estimation of the maximum amount of PV integration in the Tunisian grid without reaching certain frequency limits after a sudden power fluctuation. (orig.)

  20. Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool

    Science.gov (United States)

    Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.

    2015-03-01

    Matching soil grid unit resolution with polygon unit map scale is important to minimize the uncertainty of regional soil organic carbon (SOC) pool simulation, as both strongly influence that uncertainty. A series of soil grid units at varying cell sizes was derived from soil polygon units at the six map scales of 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), respectively, in the Tai lake region of China. Soil units in both formats were used for regional SOC pool simulation with the DeNitrification-DeComposition (DNDC) process-based model, with runs spanning the period 1982 to 2000 at each of the six map scales. Four indices, soil type number (STN), area (AREA), average SOC density (ASOCD) and total SOC stocks (SOCS) of surface paddy soils simulated with the DNDC, were attributed from all these soil polygon and grid units. Relative to the four index values (IV) from the parent polygon units, the variation of an index value (VIV, %) from the grid units was used to assess dataset accuracy and redundancy, which reflects uncertainty in the simulation of SOC. Optimal soil grid unit resolutions were generated and suggested for the DNDC simulation of the regional SOC pool, matching the soil polygon unit map scales. With the optimal raster resolution the soil grid unit dataset holds the same accuracy as its parent polygon unit dataset without any redundancy, when the VIV indices are taken as assessment criteria. A quadratic regression model y = -8.0 × 10⁻⁶x² + 0.228x + 0.211 (R² = 0.9994, p < 0.05) was revealed, which describes the relationship between optimal soil grid unit resolution (y, km) and soil polygon unit map scale (1:x). This knowledge may serve for grid partitioning of regions in the investigation and simulation of SOC pool dynamics at a given map scale.
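    The fitted quadratic reported in this abstract can be evaluated directly. A hedged sketch: the abstract does not spell out the exact unit convention for the scale denominator x, so the evaluations below are purely illustrative of the curve's shape, not of recommended resolutions.

```python
def optimal_resolution_km(x):
    """Quadratic fit from the abstract: y = -8.0e-6*x**2 + 0.228*x + 0.211,
    relating optimal grid resolution y (km) to map scale 1:x (units per
    the paper's own convention, not restated in the abstract)."""
    return -8.0e-6 * x**2 + 0.228 * x + 0.211

# Intercept: the fit's value as the scale denominator goes to zero.
y0 = optimal_resolution_km(0.0)

# Vertex of the downward-opening parabola (-b / 2a): the scale denominator
# beyond which the fitted resolution would turn over and decrease.
x_vertex = 0.228 / (2 * 8.0e-6)
```

    The vertex falling near 14 250 is consistent with the coarsest scale studied (1:14 000 000) sitting at the edge of the fitted range, which suggests the fit should not be extrapolated beyond it.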

  1. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    Science.gov (United States)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.

  2. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating properties of wind energy, a large-scale wind farm connected to the grid has a considerable impact on the power system, unlike traditional power plants. Therefore it is necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid as well as the transient characteristics of the wind turbines when the grid is at fault. However, we must first establish an effective WTG model. As the doubly-fed VSCF wind turbine has become the mainstream wind turbine model currently, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed building process of the model. After that, common wind farm modeling methods are investigated and the problems encountered are pointed out. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article concludes with an interpretation of this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  3. Power grid complex network evolutions for the smart grid

    NARCIS (Netherlands)

    Pagani, Giuliano Andrea; Aiello, Marco

    2014-01-01

    The shift towards an energy grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the electricity distribution infrastructure. Today the grid is a hierarchical one delivering energy from large scale facilities to end-users. Tomorrow it will be a

  4. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and increasing in complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, making it possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB RAM.

  5. Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System

    Science.gov (United States)

    Yue, Liu; Hang, Mend

    2018-01-01

    With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission of wind power is gradually becoming the main way to improve wind power availability and grid stability. However, the integration of wind farms changes the sub-synchronous oscillation (SSO) damping characteristics of the synchronous generator system. Regarding the SSO problems caused by the integration of large-scale wind farms, this paper focuses on doubly-fed induction generator (DFIG) based wind farms and aims to summarize the SSO mechanisms in large-scale wind power integrated systems with series compensation, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI), and sub-synchronous resonance (SSR). Then, SSO modelling and analysis methods are categorized and compared by their applicable areas. Furthermore, this paper summarizes the suppression measures of actual SSO projects based on different control objectives. Finally, research prospects in this field are explored.

  6. The role of the state in sustainable energy transitions: A case study of large smart grid demonstration projects in Japan

    International Nuclear Information System (INIS)

    Mah, Daphne Ngar-yin; Wu, Yun-Ying; Ip, Jasper Chi-man; Hills, Peter Ronald

    2013-01-01

    Smart grids represent one of the most significant evolutionary changes in energy management systems as they enable decentralised energy systems, the use of large-scale renewable energy as well as major improvements in demand-side-management. Japan is one of the pioneers in smart grid deployment. The Japanese model is characterised by a government-led, community-oriented, and business-driven approach with the launch of four large-scale smart-community demonstration projects. Our case study of large smart grid demonstration projects in Japan found that the Japanese government has demonstrated its high governing capacity in terms of leadership, recombinative capacity, institutional capacity, enabling capacity, and inducement capacity. However, the major limitations of the government in introducing some critical regulatory changes have constrained the smart grid deployment from advancing to a higher-order form of smart grid developments. This paper calls for more attention to be given to the importance of regulatory changes that are essential to overcome the technological lock-in, and the complementary roles of non-state actors such as the business sector and consumers to strengthen the governing capacity of the state. - Highlights: • Smart grids introduce evolutionary changes in energy management systems. • The Japanese model is government-led, community-oriented, and business-driven. • The Japanese government has demonstrated its high governing capacity. • But the limitations of the government have constrained the smart grid developments. • More attention needs to be given to regulatory changes and non-state actors

  7. A concurrent visualization system for large-scale unsteady simulations. Parallel vector performance on an NEC SX-4

    International Nuclear Information System (INIS)

    Takei, Toshifumi; Doi, Shun; Matsumoto, Hideki; Muramatsu, Kazuhiro

    2000-01-01

    We have developed a concurrent visualization system RVSLIB (Real-time Visual Simulation Library). This paper shows the effectiveness of the system when it is applied to large-scale unsteady simulations, for which the conventional post-processing approach may no longer work, on high-performance parallel vector supercomputers. The system performs almost all of the visualization tasks on a computation server and uses compressed visualized image data for efficient communication between the server and the user terminal. We have introduced several techniques, including vectorization and parallelization, into the system to minimize the computational costs of the visualization tools. The performance of RVSLIB was evaluated by using an actual CFD code on an NEC SX-4. The computational time increase due to the concurrent visualization was at most 3% for a smaller (1.6 million) grid and less than 1% for a larger (6.2 million) one. (author)

  8. [A large-scale accident in Alpine terrain].

    Science.gov (United States)

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication with the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, the time factor, compounded by adverse weather conditions or darkness, creates enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, a helicopter should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are sensible, as is the analysis of previous large-scale Alpine accidents.

  9. Large-area cold-cathode grid-controlled electron gun for Antares

    International Nuclear Information System (INIS)

    Scarlett, W.R.; Andrews, K.; Jansen, J.

    1979-01-01

    The CO₂ laser amplifiers used in the Antares inertial confinement fusion project require large-area radial beams of high-energy electrons to ionize the laser medium before the main discharge pulse is applied. We have designed a grid-controlled, cold-cathode electron gun with a cylindrical anode having a window area of 9.3 m². A full-diameter, 1/4-length prototype of the Antares gun has been built and tested. The design details of the Antares electron gun will be presented as well as test results from the prototype. Techniques used for the prevention and control of emission and breakdown from the grid will also be discussed.

  10. Supervised scale-regularized linear convolutionary filters

    DEFF Research Database (Denmark)

    Loog, Marco; Lauze, Francois Bernard

    2017-01-01

    also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...

  11. Visualization of big SPH simulations via compressed octree grids

    KAUST Repository

    Reichl, Florian

    2013-10-01

    Interactive and high-quality visualization of spatially continuous 3D fields represented by scattered distributions of billions of particles is challenging. One common approach is to resample the quantities carried by the particles to a regular grid and to render the grid via volume ray-casting. In large-scale applications such as astrophysics, however, the required grid resolution can easily exceed 10K samples per spatial dimension, making resampling approaches appear infeasible. In this paper we demonstrate that even in these extreme cases such approaches perform surprisingly well, both in terms of memory requirement and rendering performance. We resample the particle data to a multiresolution multiblock grid, where the resolution of the blocks is dictated by the particle distribution. From this structure we build an octree grid, and we then compress each block in the hierarchy at no visual loss using wavelet-based compression. Since decompression can be performed on the GPU, it can be integrated effectively into GPU-based out-of-core volume ray-casting. We compare our approach to the perspective grid approach which resamples at run-time into a view-aligned grid. We demonstrate considerably faster rendering times at high quality, at only a moderate memory increase compared to the raw particle set. © 2013 IEEE.
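
    As a miniature illustration of the resample-then-compress idea (the paper itself uses a multiresolution octree and GPU wavelet compression; all sizes, the field, and the threshold below are illustrative assumptions), the sketch deposits particle quantities onto a regular grid by nearest grid point and applies a one-level Haar transform, discarding near-zero detail coefficients:

```python
import numpy as np

# Hypothetical particle set carrying a smooth scalar field
rng = np.random.default_rng(2)
pos = rng.random((10000, 2))                          # positions in [0,1)^2
q = np.exp(-((pos - 0.5) ** 2).sum(axis=1) / 0.01)    # smooth quantity per particle

# Deposit onto a regular grid: average the quantity per cell
n = 64
idx = np.minimum((pos * n).astype(int), n - 1)
grid = np.zeros((n, n))
cnt = np.zeros((n, n))
np.add.at(grid, (idx[:, 0], idx[:, 1]), q)            # unbuffered accumulation
np.add.at(cnt, (idx[:, 0], idx[:, 1]), 1.0)
grid = np.divide(grid, cnt, out=np.zeros_like(grid), where=cnt > 0)

def haar2(a):
    """One-level 2-D Haar transform: coarse average plus three detail bands."""
    s = (a[0::2] + a[1::2]) / 2
    d = (a[0::2] - a[1::2]) / 2
    ss = (s[:, 0::2] + s[:, 1::2]) / 2
    sd = (s[:, 0::2] - s[:, 1::2]) / 2
    ds = (d[:, 0::2] + d[:, 1::2]) / 2
    dd = (d[:, 0::2] - d[:, 1::2]) / 2
    return ss, sd, ds, dd

ss, sd, ds, dd = haar2(grid)
thresh = 1e-2                                          # illustrative cutoff
kept = sum(int(np.sum(np.abs(b) > thresh)) for b in (sd, ds, dd))
total = 3 * ss.size
# a smooth field compresses well: most detail coefficients are negligible
```

    For a smooth field most detail coefficients fall below the cutoff, which is the basic reason block-wise wavelet compression pays off for resampled particle data.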

  12. Nonlinear Projective-Iteration Methods for Solving Transport Problems on Regular and Unstructured Grids

    International Nuclear Information System (INIS)

    Dmitriy Y. Anistratov; Adrian Constantinescu; Loren Roberts; William Wieselquist

    2007-01-01

    This is a project in the field of fundamental research on numerical methods for solving the particle transport equation. Numerous practical problems require the use of unstructured meshes, for example, detailed nuclear reactor assembly-level calculations, large-scale reactor core calculations, radiative hydrodynamics problems, where the mesh is determined by hydrodynamic processes, and well-logging problems in which the media structure has very complicated geometry. Currently this is an area of very active research in numerical transport theory. The main issues in developing numerical methods for solving the transport equation are the accuracy of the numerical solution and the effectiveness of the iteration procedure. The problem in the case of unstructured grids is that it is very difficult to derive an iteration algorithm that will be unconditionally stable.

  13. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  14. Advanced Grid-Friendly Controls Demonstration Project for Utility-Scale PV Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Gevorgian, Vahan; O'Neill, Barbara

    2016-01-21

    A typical photovoltaic (PV) power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. The availability and dissemination of actual test data showing the viability of advanced utility-scale PV controls among all industry stakeholders can leverage PV's value from being simply an energy resource to providing additional ancillary services that range from variability smoothing and frequency regulation to power quality. Strategically partnering with a selected utility and/or PV power plant operator is a key condition for a successful demonstration project. The U.S. Department of Energy's (DOE's) Solar Energy Technologies Office selected the National Renewable Energy Laboratory (NREL) to be a principal investigator in a two-year project with goals to (1) identify a potential partner(s), (2) develop a detailed scope of work and test plan for a field project to demonstrate the grid-friendly capabilities of utility-scale PV power plants, (3) facilitate conducting actual demonstration tests, and (4) disseminate test results among industry stakeholders via a joint NREL/DOE publication and participation in relevant technical conferences. The project implementation took place in FY 2014 and FY 2015. In FY14, NREL established collaborations with AES and First Solar Electric, LLC, to conduct demonstration testing on their utility-scale PV power plants in Puerto Rico and Texas, respectively, and developed test plans for each partner. Both the Puerto Rico Electric Power Authority and the Electric Reliability Council of Texas expressed interest in this project because of the importance of such advanced controls for the reliable operation of their power systems under high penetration levels of variable renewable generation. During FY15, testing was completed on both plants, and a large amount of test data was produced and analyzed that demonstrates the ability of...

  15. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  16. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the...

  17. Small-Scale Smart Grid Construction and Analysis

    Science.gov (United States)

    Surface, Nicholas James

    The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. The objectives and most useful concepts have been investigated extensively in economic, environmental and engineering research by applying statistical knowledge and established theories to develop simulations without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. Results of construction show data acquisition to be three times more expensive than the grid itself, although mainly because 70% of data acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend investment in pure-sine-wave hardware in future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household peak daily load. However, this exposes disproportionalities in the SSSG compared with previous SG investigations, and changes are recommended for future iterations to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for SSSG incorporation. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations and pumped hydroelectric storage can also be researched in future iterations of the SSSG.

  18. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  19. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- and medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single or in consecutive impact force reconstruction.
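
    The l1-versus-l2 idea can be illustrated apart from the paper's PDIPM solver: the sketch below recovers a sparse force from a toy convolution model with ISTA, a simple proximal-gradient method for the l1-regularized objective. The impulse response, impact locations, and all parameters are hypothetical.

```python
import numpy as np

def ista_deconvolve(H, y, lam, n_iter=1000):
    """Minimize 0.5*||H f - y||^2 + lam*||f||_1 by ISTA (proximal gradient)."""
    L = np.linalg.norm(H, 2) ** 2           # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = f - H.T @ (H @ f - y) / L       # gradient step on the smooth term
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return f

# Toy impact-force setup: two impacts, a hypothetical decaying impulse response
rng = np.random.default_rng(0)
n = 200
f_true = np.zeros(n)
f_true[40], f_true[120] = 1.0, 0.6
t = np.arange(n)
h = np.exp(-t / 10.0) * np.cos(t / 3.0)     # hypothetical impulse response
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
y = H @ f_true + 0.01 * rng.standard_normal(n)
f_hat = ista_deconvolve(H, y, lam=0.05)     # l1 recovery of the sparse force
```

    The soft-threshold step is what drives most coefficients exactly to zero, which an l2 penalty cannot do; PDIPM solves the same l1 problem but with far fewer, more expensive iterations.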

  20. Grid scale energy storage in salt caverns

    Energy Technology Data Exchange (ETDEWEB)

    Crotogino, F.; Donadei, S.

    2011-05-15

    Fossil energy sources require some 20% of the annual consumption to be stored to secure emergency cover, cold-winter supply, peak shaving, seasonal swing, load management and energy trading. Today the electric power industry benefits from the extremely high energy density of fossil and nuclear fuels. This is one important reason why, e.g., the German utilities are able to provide highly reliable grid operation with an electric power storage capacity at their pumped hydro power stations of less than 1 hour (40 GWh) relative to the total load in the grid - i.e. only 0.06%, compared to 20% for natural gas. Along with the changeover to renewable wind- and, to a lesser extent, PV-based electricity production, this 'outsourcing' of storage services to fossil and nuclear fuels will decline. One important way out will be grid-scale energy storage in geological formations. The present discussion, research projects and plans for balancing short-term wind and solar power fluctuations focus primarily on the installation of Compressed Air Energy Storage (CAES) if the capacity of existing pumped hydro plants cannot be expanded, e.g. because of environmental issues or a lack of suitable topography. Because of their small energy density, these storage options are, however, generally less suitable for balancing longer-term fluctuations in the case of larger amounts of excess wind power, wind lulls or even seasonal fluctuations. One important way out is large underground hydrogen storage, which provides a much higher energy density because of the chemical energy bond. Underground hydrogen storage has been state of the art for many years in Great Britain and in the USA for the (petro-)chemical industry. (Author)

  1. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large-scale integration of wind power. Electricity generated from intermittent wind in large proportion may impact the control of the power system balance and thus cause deviations in the power system...... frequency in small or islanded power systems or tie-line power flows in interconnected power systems. Therefore, the large-scale integration of wind power into the power system strongly concerns secure and stable grid operation. To ensure stable power system operation, the evolving power system has...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for active power balance control in future power systems with large-scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with a reduced planning horizon and the real-time...

  2. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    Science.gov (United States)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    Effect of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ ... LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.

  3. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = 0.5⟨u′ᵢ²⟩, where u′ᵢ represents the three Cartesian components of a mesoscale circulation, ⟨·⟩ is the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes.

  4. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)

  5. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, it is proposed to quantify the uncertainty of VCs by a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
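
    A minimal sketch of the trapezoidal double rule on a regular grid, evaluated against a Gauss synthetic surface whose true volume is known analytically (the grid size and surface are illustrative; the study's confidence-interval analysis is not reproduced here):

```python
import numpy as np

def volume_tdr(z, dx, dy):
    """Trapezoidal double rule: corner weight 1, edge weight 2, interior 4."""
    w = np.ones_like(z)
    w[1:-1, :] *= 2
    w[:, 1:-1] *= 2
    return dx * dy / 4.0 * np.sum(w * z)

# Gauss synthetic surface, chosen so the theoretical true volume is known:
# the integral of exp(-(x^2 + y^2)) over the whole plane is pi
n = 201
x = np.linspace(-3.0, 3.0, n)
y = np.linspace(-3.0, 3.0, n)
X, Y = np.meshgrid(x, y)
Z = np.exp(-(X**2 + Y**2))
v = volume_tdr(Z, x[1] - x[0], y[1] - y[0])
```

    Comparing v against the analytic value isolates the truncation error of the rule itself, which is the role the Gauss surface plays in the experiments.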

  6. Large scale rooftop photovoltaics grid connected system at Charoenphol-Rama I green building

    Energy Technology Data Exchange (ETDEWEB)

    Ketjoy, N.; Rakwichian, W. [School of Renewable Energy Technology (SERT) (Thailand); Wongchupan, V. [Panya Consultants Co., Ltd (Thailand); Sankarat, T. [Tesco Lotus, Ek-Chai Distribution System Co., Ltd. (Thailand)

    2004-07-01

    This paper presents a technical feasibility study for a large-scale rooftop photovoltaic (PV) grid-connected system at the Charoenphol-Rama I green building super store of TESCO LOTUS (TL) in Thailand. The objectives of this project are (i) to study the technical feasibility of installing a 350 kWp PV system on the roof of this site and (ii) to determine the energy produced by this system. The technical factors are examined using a computerized PVS 2000 simulation and assessment tool. This super store building is located in Bangkok, at latitude 14 N, longitude 100 E, with the building oriented 16 degrees from north. The building roof area is 14,000 m², with a 3-degree pitch facing east and a 3-degree pitch facing west. Average daily solar energy in this area is approximately 5.0 kWh. The study team for this project consists of an educational institution, the School of Renewable Energy Technology (SERT), and a private institution, Panya Consultants (PC). TL is the project owner, PC is responsible for project management, and SERT is a third party responsible for the PV system study, conceptual design and all technical processes. In this feasibility study, SERT identifies the most attractive scenarios of photovoltaic cell technology (mono-crystalline, poly-crystalline or thin-film amorphous) and system design concepts for the owner (TL), and determines the possible energy yield of the system for different module orientations and tilt angles. The result of this study is a guide to help TL decide on the proper rooftop PV system option for this store from a technology point of view. Economic aspects are not considered in this study. (orig.)

  7. A Large Dimensional Analysis of Regularized Discriminant Analysis Classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-11-01

    This article carries out a large dimensional analysis of standard regularized discriminant analysis classifiers designed on the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis, in practical large but finite dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield a high accuracy in predicting the performances achieved with real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.
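
    For illustration, a minimal regularized discriminant classifier on synthetic Gaussian data. The shrinkage-toward-identity form Sigma_k(gamma) = (1-gamma)*S_k + gamma*I is one common convention and, like the data and gamma value, is an assumption here rather than the article's exact estimator:

```python
import numpy as np

def fit_rda(X, y, gamma):
    """Per-class Gaussian fit with the covariance shrunk toward the identity."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        S = np.cov(Xc, rowvar=False)
        Sg = (1.0 - gamma) * S + gamma * np.eye(X.shape[1])  # regularization
        params[c] = (mu, np.linalg.inv(Sg), np.linalg.slogdet(Sg)[1],
                     len(Xc) / len(X))
    return params

def predict_rda(params, X):
    """Assign each row to the class with the largest discriminant score."""
    labels = np.array(list(params))
    scores = []
    for mu, P, logdet, prior in params.values():
        d = X - mu
        scores.append(-0.5 * np.einsum('ij,jk,ik->i', d, P, d)
                      - 0.5 * logdet + np.log(prior))
    return labels[np.argmax(scores, axis=0)]

# Synthetic two-class Gaussian mixture (all parameters illustrative)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(200, 2)),
               rng.normal([3.0, 3.0], 1.0, size=(200, 2))])
y = np.repeat([0, 1], 200)
params = fit_rda(X, y, gamma=0.1)
acc = float(np.mean(predict_rda(params, X) == y))
```

    The article's contribution is the RMT characterization of the error of such classifiers, which lets gamma be pre-estimated instead of tuned by cross-validation.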

  8. Large eddy simulation and laboratory experiments on the decay of grid wakes in strongly stratified flows

    International Nuclear Information System (INIS)

    Fraunie, P.; Berrella, S.; Chashechkin, Y.D.; Velasco, D.; Redondo, M.

    2008-01-01

    A detailed analysis of the flow structure resulting from the combination of turbulence and internal waves is carried out and visualized by means of the Schlieren method on waves in a strongly stratified fluid at the Laboratory of the IPM in Moscow. The joint appearance of the more regular internal wave oscillations and the small-scale turbulence that is confined vertically to the Ozmidov length scale favours the use of a simple geometrical analysis to investigate their time-space span and evolution. This provides useful information on the collapse of internal wave breaking processes in the ocean and the atmosphere. The measurements were performed under a variety of linear stratifications and different grid forcing scales, combining the grid wake and velocity shear. A numerical simulation using LES on the passage of a single bar in a linearly stratified fluid medium has been compared with the experiments, identifying the different influences of the environmental agents on the actual effective vertical diffusion of the wakes. The equation of state, which connects the density and salinity, is assumed to be linear, with the coefficient of the salt contraction being included into the definition of salinity or heat. The characteristic internal waves as well as the entire beam width are related to the diameter of the bar, the Richardson number and the peak-to-peak value of oscillations. The ultimate frequency of the infinitesimal periodic internal waves is limited by the maximum buoyancy frequency, relating the decrease in the vertical scale with the anisotropy of the turbulent r.m.s. velocity.

  9. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent...
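
    The two water-balance checks, runoff coefficients above one and apparent losses exceeding the potential-evaporation limit, can be sketched on hypothetical per-basin annual totals:

```python
import numpy as np

# Hypothetical per-basin annual totals (mm): precipitation P, discharge depth Q,
# and potential evaporation PET
P   = np.array([800.0, 500.0, 1200.0, 300.0])
Q   = np.array([300.0, 550.0,  400.0,  50.0])
PET = np.array([600.0, 700.0,  500.0, 300.0])

runoff_coeff = Q / P                     # C = Q/P; C > 1 means more water out than in
too_high_runoff = runoff_coeff > 1.0     # e.g. precipitation undercatch in snow areas
loss = P - Q                             # apparent evaporative loss
exceeds_pet = loss > PET                 # loss beyond the potential-evaporation limit

flagged = too_high_runoff | exceeds_pet  # basins disinformative for modelling
```

    In this toy table, basin 1 fails the runoff-coefficient check (550/500 > 1) and basin 2 fails the PET check (loss of 800 mm against a 500 mm limit), so both would be flagged before any model is calibrated on them.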

  10. Large-scale data analysis of power grid resilience across multiple US service regions

    Science.gov (United States)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.

  11. Grid scale energy storage in salt caverns

    Energy Technology Data Exchange (ETDEWEB)

    Crotogino, Fritz; Donadei, Sabine [KBB Underground Technologies GmbH, Hannover (Germany)

    2009-07-01

    Fossil energy sources require some 20% of the annual consumption to be stored to secure emergency cover, peak shaving, seasonal balancing, etc. Today the electric power industry benefits from the extremely high energy density of fossil fuels. This is one important reason why the German utilities are able to provide highly reliable grid operation with an electric power storage capacity at their pumped hydro power stations of less than 1 hour (40 GWh) relative to the total load in the grid - i.e. only 0.06%, compared to the 20% for natural gas. Along with the changeover to renewable wind-based electricity production this 'outsourcing' of storage services to fossil fuels will decline. One important way out will be grid-scale energy storage. The present discussion on balancing short-term wind and solar power fluctuations focuses primarily on the installation of Compressed Air Energy Storage (CAES) in addition to existing pumped hydro plants. Because of their small energy density, these storage options are, however, generally not suitable for balancing longer-term fluctuations in the case of larger amounts of excess wind power or even seasonal fluctuations. Underground hydrogen storage, however, provides a much higher energy density because of the chemical energy bond - standard practice for many years. The first part of the article describes the present status and performance of grid-scale energy storage in geological formations, mainly salt caverns. It is followed by a compilation of generally suitable locations in Europe, and particularly Germany. The second part deals with first results of preliminary investigations into the possibilities and limits of offshore CAES power stations. (orig.)

  12. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    Science.gov (United States)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on a large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is set in frictional units, Δx+ and Δz+, with strongly anisotropic cells whose first cell height is Δy+ ≈ 1, thus aiming for wall-resolved LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the quality measures is demonstrated and analyzed. It is shown that the friction velocity and the profiles of the velocity and Reynolds stress tensor are most sensitive to Δz+, while the error in the turbulent kinetic energy is mostly influenced by Δx+. Recommendations for grid resolution requirements are given, together with a quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and choices of SGS model.
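    The metamodeling step described above can be illustrated with a toy surrogate. The sketch below fits a tensor-Legendre polynomial, a minimal stand-in for a gPCE, to a synthetic error measure sampled over a small (Δx+, Δz+) campaign; the resolutions and the error surface are invented for illustration and are not the paper's data.

```python
import numpy as np

# Toy surrogate in the spirit of the gPCE metamodels described above.
# The sampled resolutions and the "observed" errors are synthetic
# placeholders, NOT data from the paper.

dx = np.array([20.0, 40.0, 60.0, 80.0, 100.0])   # streamwise spacing dx+
dz = np.array([10.0, 20.0, 30.0, 40.0, 50.0])    # spanwise spacing dz+
X, Z = np.meshgrid(dx, dz)

# Placeholder error measure, e.g. error in predicted mean friction velocity
e = 0.01 + 0.002 * X.ravel() / 100 + 0.01 * (Z.ravel() / 50) ** 2

# Map samples to [-1, 1], the natural domain of Legendre polynomials
x = 2 * (X.ravel() - dx.mean()) / (dx.max() - dx.min())
z = 2 * (Z.ravel() - dz.mean()) / (dz.max() - dz.min())

def legendre(n, t):
    """Evaluate the Legendre polynomial P_n at t."""
    return np.polynomial.legendre.legval(t, np.eye(n + 1)[n])

# Tensor basis up to degree 2 per variable, fitted by least squares
basis = np.column_stack([legendre(i, x) * legendre(j, z)
                         for i in range(3) for j in range(3)])
coef, *_ = np.linalg.lstsq(basis, e, rcond=None)

def surrogate(dxp, dzp):
    """Predict the error measure at an untried resolution (dx+, dz+)."""
    t = 2 * (dxp - dx.mean()) / (dx.max() - dx.min())
    s = 2 * (dzp - dz.mean()) / (dz.max() - dz.min())
    phi = np.array([legendre(i, t) * legendre(j, s)
                    for i in range(3) for j in range(3)])
    return phi @ coef
```

    Once fitted from a handful of LES runs, such a surrogate can be evaluated over the whole (Δx+, Δz+) range at negligible cost, which is how a campaign avoids running a full grid of simulations.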

  13. Tetravalent one-regular graphs of order 4p²

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.

  14. Evaluation of Aggregators for Integration of Large-scale Consumers in Smart Grid

    DEFF Research Database (Denmark)

    Rahnama, Samira; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    Utilization of consumers to mitigate the impact of increasing renewable resources on power systems is one of the visions of future smart grids. Flexible consumers are consumers who can change their consumption patterns in such a way as to help the grid to tackle the balancing problem. In previous...... will evaluate the proposed set-up to understand to what extent the utilization of simplified models can lead to reasonable results. To this end, we will connect the aggregator to a complex and verified model of an actual supermarket refrigeration system which enables us to investigate the closed-loop behavior...

  15. The DataGrid Project

    CERN Document Server

    Ruggieri, F

    2001-01-01

    An overview of the objectives and status of the DataGrid Project is presented, together with a brief introduction to the Grid metaphor and some references to the Grid activities and initiatives related to DataGrid. High energy physics experiments have always required state-of-the-art computing facilities to efficiently perform several computing activities related to the handling of large amounts of data and fairly large computing resources. Some of the ideas born inside the community to enhance the user friendliness of all the steps in the computing chain have sometimes been successfully applied in other contexts as well: one bright example is the World Wide Web. The LHC computing challenge has triggered, inside the high energy physics community, the start of the DataGrid Project. The objective of the project is to enable next generation scientific exploration requiring intensive computation and analysis of shared large-scale databases. (12 refs).

  16. Sub-grid scale combustion models for large eddy simulation of unsteady premixed flame propagation around obstacles.

    Science.gov (United States)

    Di Sarli, Valeria; Di Benedetto, Almerinda; Russo, Gennaro

    2010-08-15

    In this work, an assessment of different sub-grid scale (sgs) combustion models proposed for large eddy simulation (LES) of steady turbulent premixed combustion (Colin et al., Phys. Fluids 12 (2000) 1843-1863; Flohr and Pitsch, Proc. CTR Summer Program, 2000, pp. 61-82; Kim and Menon, Combust. Sci. Technol. 160 (2000) 119-150; Charlette et al., Combust. Flame 131 (2002) 159-180; Pitsch and Duchamp de Lageneste, Proc. Combust. Inst. 29 (2002) 2001-2008) was performed to identify the model that best predicts unsteady flame propagation in gas explosions. Numerical results were compared to the experimental data of Patel et al. (Proc. Combust. Inst. 29 (2002) 1849-1854) for a premixed deflagrating flame in a vented chamber in the presence of three sequential obstacles. It is found that all sgs combustion models are able to reproduce the experiment qualitatively in terms of the stages of flame acceleration and deceleration around each obstacle, and the shape of the propagating flame. Without adjusting any constants or parameters, the sgs model of Charlette et al. also provides satisfactory quantitative predictions for flame speed and pressure peak. Conversely, the other sgs combustion models give correct predictions only after ad hoc tuning of constants and parameters. Copyright 2010 Elsevier B.V. All rights reserved.

  17. A novel algorithm for incompressible flow using only a coarse grid projection

    KAUST Repository

    Lentine, Michael

    2010-07-26

    Large scale fluid simulation can be difficult using existing techniques due to the high computational cost of using large grids. We present a novel technique for simulating detailed fluids quickly. Our technique coarsens the Eulerian fluid grid during the pressure solve, allowing for a fast implicit update but still maintaining the resolution obtained with a large grid. This allows our simulations to run at a fraction of the cost of existing techniques while still providing the fine scale structure and details obtained with a full projection. Our algorithm scales well to very large grids and large numbers of processors, allowing for high fidelity simulations that would otherwise be intractable. © 2010 ACM.
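    The idea of replacing the full fine-grid projection with a coarse one can be sketched in one dimension. The snippet below restricts the fine-grid divergence to a coarse grid, solves the small Poisson system there, and prolongs the coarse pressure gradient back to the fine faces; the 1D setting, grid sizes and boundary conditions are illustrative simplifications of the paper's 3D method.

```python
import numpy as np

# Minimal 1D sketch of a coarse-grid pressure projection (the paper works
# on large 3D Eulerian grids; everything here is sized for illustration).
n, factor = 64, 2
nc = n // factor
h, H = 1.0 / n, factor / n

# Divergent fine-grid face velocity field
xf = np.linspace(0.0, 1.0, n + 1)
u = np.sin(2 * np.pi * xf)
div = (u[1:] - u[:-1]) / h                      # fine-cell divergence

# Restrict divergence to the coarse grid by block averaging
div_c = div.reshape(nc, factor).mean(axis=1)

# Coarse Poisson solve lap(p) = div with homogeneous Dirichlet conditions
A = (np.diag(-2.0 * np.ones(nc)) + np.diag(np.ones(nc - 1), 1)
     + np.diag(np.ones(nc - 1), -1)) / H**2
p_c = np.linalg.solve(A, div_c)

# Coarse pressure gradient at coarse faces, prolonged to the fine faces
g_c = np.diff(np.concatenate(([0.0], p_c, [0.0]))) / H
xcf = np.arange(nc + 1) * H
g = np.interp(xf, xcf, g_c)

# Velocity correction from the cheap coarse solve instead of a fine one
u_new = u - g
div_after = (u_new[1:] - u_new[:-1]) / h
# The residual divergence is much smaller than before the projection
```

    For this smooth field the correction removes the block-averaged divergence exactly, leaving only the sub-block residual, which is the trade-off the paper exploits: a far smaller linear system at the cost of a small remaining divergence.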

  18. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on the manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S³ also possible).

  19. Dynamic Reactive Power Compensation of Large Scale Wind Integrated Power System

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2015-01-01

    wind turbines, especially wind farms with additional grid support functionalities like dynamic support (e.g., dynamic reactive power support), and ii) refurbishment of existing conventional central power plants to synchronous condensers could be one of the efficient, reliable and cost-effective options......Due to the progressive displacement of conventional power plants by wind turbines, the dynamic security of large-scale wind-integrated power systems gets significantly compromised. In this paper we first highlight the importance of dynamic reactive power support/voltage security in large scale wind...... integrated power systems with least presence of conventional power plants. Then we propose a mixed integer dynamic optimization based method for optimal dynamic reactive power allocation in large-scale wind-integrated power systems. One of the important aspects of the proposed methodology is that unlike...

  20. Accelerating Large Data Analysis By Exploiting Regularities

    Science.gov (United States)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
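    One of the regularity checks mentioned above, deciding whether one zone is a rigid-body transformation of another, can be sketched with a standard Kabsch/Procrustes fit; the function names and tolerance below are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of a rigid-body regularity test: fit the best
# rotation R and translation t mapping point set A onto B, then check
# whether the residual is negligible. Tolerances are illustrative.

def fit_rigid_transform(A, B):
    """Best-fit rotation R and translation t with B ~ A @ R.T + t (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def is_rigid_copy(A, B, tol=1e-8):
    """True if B is (numerically) a rigid-body transformation of A."""
    R, t = fit_rigid_transform(A, B)
    return np.max(np.linalg.norm(A @ R.T + t - B, axis=1)) < tol

# Zone B is zone A rotated 90 degrees about z and shifted: a rigid copy
A = np.random.default_rng(0).random((50, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
B = A @ Rz.T + np.array([2.0, 0.0, 0.0])
```

    In the time-series case, the same fit applied per time step yields the transformation that replaces each stored mesh with a dynamically transformed reference mesh.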

  1. Active power reserves evaluation in large scale PVPPs

    DEFF Research Database (Denmark)

    Crăciun, Bogdan-Ionut; Kerekes, Tamas; Sera, Dezso

    2013-01-01

    The present trend of investing in renewable ways of producing electricity to the detriment of conventional fossil fuel-based plants will lead to a certain point where these plants have to provide ancillary services and contribute to overall grid stability. Photovoltaic (PV) power has the fastest...... growth among all renewable energies and has managed to reach high penetration levels, creating instabilities which at the moment are corrected by conventional generation. This paradigm will change in future scenarios where most of the power is supplied by large scale renewable plants and parts...... of the ancillary services have to be shared by the renewable plants. The main focus of the proposed paper is to technically and economically analyze the possibility of having active power reserves in large scale PV power plants (PVPPs) without any auxiliary storage equipment. The provided reserves should...

  2. Voltage stability issues for a benchmark grid model including large scale wind power

    DEFF Research Database (Denmark)

    Eek, J.; Lund, T.; Marzio, G. Di

    2006-01-01

    The objective of the paper is to investigate how the voltage stability of a relatively weak network after a grid fault is affected by the connection of a large wind park. A theoretical discussion of the stationary and dynamic characteristics of the Squirrel Cage Induction Generator (SCIG) and the Doubly...... Fed Induction Generator (DFIG) is given. Further, a case study of a wind park connected to the transmission system through an existing 132 kV regional distribution line is presented. For the SCIG it is concluded that a stationary torque curve calculated under consideration of the impedance...... of the network and saturation of the external reactive power compensation units provides a good basis for evaluation of the voltage stability. For the DFIG it is concluded that the speed stability limit is mainly determined by the voltage limitation of the rotor converter...

  3. Power grid reliability and security

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Anjan [Washington State Univ., Pullman, WA (United States); Venkatasubramanian, Vaithianathan [Washington State Univ., Pullman, WA (United States); Hauser, Carl [Washington State Univ., Pullman, WA (United States); Bakken, David [Washington State Univ., Pullman, WA (United States); Anderson, David [Washington State Univ., Pullman, WA (United States); Zhao, Chuanlin [Washington State Univ., Pullman, WA (United States); Liu, Dong [Washington State Univ., Pullman, WA (United States); Yang, Tao [Washington State Univ., Pullman, WA (United States); Meng, Ming [Washington State Univ., Pullman, WA (United States); Zhang, Lin [Washington State Univ., Pullman, WA (United States); Ning, Jiawei [Washington State Univ., Pullman, WA (United States); Tashman, Zaid [Washington State Univ., Pullman, WA (United States)

    2015-01-31

    This project has led to the development of a real-time simulation platform for electric power grids called Grid Simulator or GridSim for simulating the dynamic and information network interactions of large-scale power systems. The platform consists of physical models of power system components including synchronous generators, loads and controls, which are simulated using a modified commercial power simulator, namely the Transient Stability Analysis Tool (TSAT) [1], together with data cleanup components, as well as emulated substation-level and wide-area power analysis components. The platform also includes realistic representations of communication network middleware that can emulate the real-time information flow back and forth between substations and control centers in wide-area power systems. The platform has been validated on a realistic 6000-bus model of the western American power system. The GridSim simulator developed in this project is the first of its kind in its ability to simulate the real-time response of large-scale power grids, and serves as a cost-effective real-time stability and control simulation platform for the power industry.

  4. Data warehousing technologies for large-scale and right-time data

    DEFF Research Database (Denmark)

    Xiufeng, Liu

    heterogeneous sources into a central data warehouse (DW) by Extract-Transform-Load (ETL) at regular time intervals, e.g., monthly, weekly, or daily. But now, it becomes challenging for large-scale data, and hard to meet the near real-time/right-time business decisions. This thesis considers some...

  5. Biogas infrastructure from farm-scale to regional scale, line-pack storage in biogas grids

    NARCIS (Netherlands)

    Hengeveld, Evert Jan

    2016-01-01

    Biogas infrastructure from farm-scale to regional scale, line-pack storage in biogas grids. The number of local and regional initiatives encouraging the production and use of regional produced energy grows. In these new developments biogas can play a role, as a producer of energy, but also in

  6. TopoSCALE v.1.0: downscaling gridded climate data in complex terrain

    Science.gov (United States)

    Fiddes, J.; Gruber, S.

    2014-02-01

    Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and due to the lack of accurate meteorological forcing data at the site or scale at which it is required. Gridded data products produced by atmospheric models can fill this gap, though often not at a spatial resolution appropriate to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to the topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse rate and optionally disaggregated using a climatology approach. We test the method, in comparison with unscaled grid-level data and a set of reference methods, against a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation. This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible, due to lack of
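    The core pressure-level interpolation step can be sketched as follows; the column heights, temperatures and DEM elevations are invented for illustration, and the full TopoSCALE scheme adds the radiation and precipitation corrections described above.

```python
import numpy as np

# Illustrative sketch of the pressure-level interpolation idea: air
# temperature on atmospheric pressure levels, each with a known height,
# is interpolated to the elevations of a high-resolution DEM subgrid.
# All numbers are invented for the example, not TopoSCALE data.

# Coarse-grid column: pressure-level heights (m) and temperatures (K)
level_height = np.array([500.0, 1500.0, 3000.0, 5000.0])
level_temp = np.array([288.0, 281.5, 271.8, 258.8])

# Fine-scale DEM elevations within one coarse grid cell (m)
dem = np.array([620.0, 1140.0, 1875.0, 2450.0, 3200.0])

# Downscaled temperature: linear interpolation within the column
t_sub = np.interp(dem, level_height, level_temp)
```

    Because the atmospheric column is well resolved, this avoids assuming a fixed lapse rate; the same interpolation applies to the other pressure-level variables the study lists (relative humidity, wind speed).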

  7. Challenges and options for large scale integration of wind power

    International Nuclear Information System (INIS)

    Tande, John Olav Giaever

    2006-01-01

    Challenges and options for large scale integration of wind power are examined. Immediate challenges are related to weak grids. Assessment of system stability requires numerical simulation. Models are being developed - validation is essential. Coordination of wind and hydro generation is a key to allowing more wind power capacity in areas with limited transmission corridors. For the case study grid, depending on technology and control, the allowed wind farm size is increased from 50 to 200 MW. The real-life example from 8 January 2005 demonstrates that existing market-based mechanisms can handle large amounts of wind power. In wind integration studies it is essential to take account of the controllability of modern wind farms, the power system flexibility and the smoothing effect of geographically dispersed wind farms. Modern wind farms contribute to system adequacy - combining wind and hydro constitutes a win-win system (ml)

  8. Methodology to determine the technical performance and value proposition for grid-scale energy storage systems :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Loose, Verne William; Donnelly, Matthew K.; Trudnowski, Daniel J.

    2012-12-01

    As the amount of renewable generation increases, the inherent variability of wind and photovoltaic systems must be addressed in order to ensure the continued safe and reliable operation of the nation's electricity grid. Grid-scale energy storage systems are uniquely suited to address the variability of renewable generation and to provide other valuable grid services. The goal of this report is to quantify the technical performance required to provide different grid benefits and to specify the proper techniques for estimating the value of grid-scale energy storage systems.

  9. Efficient Pseudorecursive Evaluation Schemes for Non-adaptive Sparse Grids

    KAUST Repository

    Buse, Gerrit; Pflüger, Dirk; Jacob, Riko

    2014-01-01

    In this work we propose novel algorithms for storing and evaluating sparse grid functions, operating on regular (not spatially adaptive), yet potentially dimensionally adaptive grid types. Besides regular sparse grids our approach includes truncated

  10. Querying Large Physics Data Sets Over an Information Grid

    CERN Document Server

    Baker, N; Kovács, Z; Le Goff, J M; McClatchey, R

    2001-01-01

    Optimising use of the Web (WWW) for LHC data analysis is a complex problem and illustrates the challenges arising from the integration of and computation across massive amounts of information distributed worldwide. Finding the right piece of information can, at times, be extremely time-consuming, if not impossible. So-called Grids have been proposed to facilitate LHC computing and many groups have embarked on studies of data replication, data migration and networking philosophies. Other aspects such as the role of 'middleware' for Grids are emerging as requiring research. This paper positions the need for appropriate middleware that enables users to resolve physics queries across massive data sets. It identifies the role of meta-data for query resolution and the importance of Information Grids for high-energy physics analysis rather than just Computational or Data Grids. This paper identifies software that is being implemented at CERN to enable the querying of very large collaborating HEP data-sets, initially...

  11. Are small-scale grid-connected photovoltaic systems a cost-effective policy for lowering electricity bills and reducing carbon emissions? A technical, economic, and carbon emission analysis

    International Nuclear Information System (INIS)

    McHenry, Mark P.

    2012-01-01

    This research discusses findings from technical simulations and economic models of 1 kWp and 3 kWp grid-connected photovoltaic (PV) systems supplying a rural home electricity load in parallel with the electricity network in Western Australia (WA). The technical simulations are based on electricity billing, consumption monitoring and energy audit data, combined with 15 min interval load and PV system performance data for commercially available technologies and balance-of-system components, using long-term meteorological input data. The economic modelling uses 2010 market prices for capital costs, operational costs, electricity tariffs and subsidies, and is based on discounted cash flow analyses which generate a final net present value (NPV) for each system against network electricity costs (in Australian dollars, AUD) over a 15 year investment horizon. The results suggest that current market prices generate a negative NPV (a net private loss), even with the current government subsidies, which leads to higher home electricity costs than conventional network electricity use. Additionally, the private costs of carbon emission mitigation (AUD per tCO₂-e) for the grid-connected PV system simulations and models were around AUD 600-700 per tCO₂-e, a particularly expensive option when compared to existing large-scale renewable energy mitigation activities. - Highlights: ► Subsidised small-scale grid-connected PV systems can increase home electricity costs. ► Subsidies for private PV systems are provided by those who do not receive a benefit. ► Small-scale grid-connected PV systems result in very high costs of mitigation. ► Verifying actual mitigation from grid-connected small-scale systems is problematic. ► Maintain medium/large-scale grid-connected or small-scale off-grid system subsidies.
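    The discounted cash flow analysis underlying the NPV results can be sketched generically; all figures below (capital cost, annual savings, discount rate) are invented placeholders rather than the study's 2010 market data.

```python
# Generic discounted-cash-flow sketch of the kind of NPV comparison the
# study performs. Every number here is a hypothetical placeholder.

def npv(capital_cost, annual_cash_flow, discount_rate, years):
    """Net present value of an up-front cost plus yearly cash flows."""
    pv = sum(annual_cash_flow / (1 + discount_rate) ** y
             for y in range(1, years + 1))
    return pv - capital_cost

# Hypothetical 1 kWp system: AUD 4000 installed, saving AUD 300/year
# on bills, evaluated over a 15 year horizon at a 7% discount rate
result = npv(capital_cost=4000.0, annual_cash_flow=300.0,
             discount_rate=0.07, years=15)
# A negative result means the system costs more than network electricity,
# which is the qualitative outcome the study reports.
```

    The sign of the NPV, not its exact magnitude, drives the study's conclusion: when discounted bill savings fall short of the subsidised capital cost, the system raises rather than lowers the household's effective electricity cost.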

  12. Exploration of potential of Smart Grids at the scale of the University campus in Provence

    International Nuclear Information System (INIS)

    2014-09-01

    This study notably aimed at determining how strategies enabled by smart grid technologies operate in an existing area where electric equipment and infrastructures are already largely developed, what prospects of evolution these technologies offer for the power demand curve of the electric power distribution network, and how to assess the benefits associated with implementing these technologies at the scale of an existing area. After a presentation of various concepts, the report presents a simplified model of the electric power consumption structure for the studied area (a university campus). The next part proposes an assessment of the potential of smart grid technologies by using six scenarios and modelling their effects. The different possible strategies are then analysed.

  13. Lightning Surge Analysis on a Large Scale Grid-Connected Solar Photovoltaic System

    Directory of Open Access Journals (Sweden)

    Nur Hazirah Zaini

    2017-12-01

    Solar photovoltaic (PV) farms currently play a vital role in the generation of electrical power in different countries, such as Malaysia, which is moving toward the use of renewable energy. Malaysia is one of the countries with abundant sunlight and thus can use solar PV farms as alternative sources for electricity generation. However, lightning strikes frequently occur in the country. Being installed in open and flat areas, solar PV farms, especially their electronic components, are at great risk of damage caused by lightning. In this paper, the effects of lightning currents with different peak currents and waveshapes on grid-connected solar PV farms were determined to approximate the level of transient effect that can damage solar PV modules, inverters and transformers. Depending on the location of the solar PV farm, engineers can obtain information on the peak current and median current of the site from the lightning location system (LLS) and use the results obtained in this study to assign an appropriate surge protective device (SPD) to protect the solar panels, inverters and the main panel connected to the grid. The simulation results therefore serve as a basis for controlling the effects of lightning strikes on electrical equipment and power grids, providing justification for where an SPD should be installed and what its rating should be. Such decisions reduce the considerable cost of repairing and replacing electrical equipment damaged by lightning.

  14. Diffusion of charged particles in strong large-scale random and regular magnetic fields

    International Nuclear Information System (INIS)

    Mel'nikov, Yu.P.

    2000-01-01

    The nonlinear collision integral for the Green's function averaged over a random magnetic field is transformed using an iteration procedure taking account of the strong random scattering of particles on the correlation length of the random magnetic field. Under this transformation the regular magnetic field is assumed to be uniform at distances of the order of the correlation length. The single-particle Green's functions of the scattered particles in the presence of a regular magnetic field are investigated. The transport coefficients are calculated taking account of the broadening of the cyclotron and Cherenkov resonances as a result of strong random scattering. The mean-free path lengths parallel and perpendicular to the regular magnetic field are found for a power-law spectrum of the random field. The analytical results obtained are compared with the experimental data on the transport ranges of solar and galactic cosmic rays in the interplanetary magnetic field. As a result, the conditions for the propagation of cosmic rays in the interplanetary space and a more accurate idea of the structure of the interplanetary magnetic field are determined

  15. Stochastic dynamic modeling of regular and slow earthquakes

    Science.gov (United States)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations in the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal
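    As a heavily simplified illustration of perturbing a deterministic friction law stochastically, the sketch below runs a single spring-slider with Gaussian noise on its failure threshold; this stands in for the abstract's perturbed mode III BIEM, which it does not reproduce, and all parameters are invented.

```python
import numpy as np

# Toy analogue of a stochastic friction law: a spring-slider loaded at
# constant rate whose static failure threshold gets a Gaussian deviation
# each step. A crude stand-in for the paper's perturbed BIEM simulations.

rng = np.random.default_rng(42)
k, v_load, dt = 1.0, 1.0, 0.01   # stiffness, loading rate, time step
tau_s, sigma = 1.0, 0.1          # mean static threshold, noise amplitude

slip = 0.0
load_point = 0.0
events = []                      # stress drop of each "earthquake"
for step in range(50000):
    load_point += v_load * dt
    stress = k * (load_point - slip)
    threshold = tau_s + sigma * rng.standard_normal()
    if stress > threshold:       # failure: release all stress at once
        events.append(stress / k)
        slip = load_point

# With sigma > 0 the event sizes become irregular, a crude analogue of
# the variability the stochastic perturbations introduce.
```

    Setting sigma to zero makes every event identical, which is the deterministic limit; the noise term is what breaks the perfect periodicity, in the same spirit as the Gaussian deviations the abstract adds to the kernel, external force and friction.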

  16. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram

    2017-03-06

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that mitigate the inter-cell interference. In particular, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR-based precoding design in which the regularization factor is used to allow better resilience with respect to the channel estimation errors. Based on tools from random matrix theory, we provide an analytical analysis of the SINR and SLNR performances. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided in order to validate our findings and to confirm the performance of the proposed precoding scheme.
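    A toy version of a regularized SLNR precoder can be sketched as below; the closed form is the standard single-stream SLNR solution, and the regularization factor alpha is left as a free parameter here, whereas the paper derives its setting from random matrix theory.

```python
import numpy as np

# Toy regularized SLNR precoder. The channel matrix is random i.i.d.
# for illustration; the paper's setting (two-tier network, imperfect
# CSI, asymptotic analysis) is not reproduced here.

rng = np.random.default_rng(1)
n_ant, n_users = 8, 4
H = (rng.standard_normal((n_users, n_ant))
     + 1j * rng.standard_normal((n_users, n_ant))) / np.sqrt(2)

def slnr_precoder(H, k, alpha):
    """Unit-norm beamformer for user k maximizing a regularized SLNR."""
    h_k = H[k]
    H_int = np.delete(H, k, axis=0)        # channels signal leaks into
    M = H_int.conj().T @ H_int + alpha * np.eye(H.shape[1])
    w = np.linalg.solve(M, h_k.conj())     # closed-form SLNR direction
    return w / np.linalg.norm(w)

def slnr(H, k, w, noise=1.0):
    """Signal-to-leakage-plus-noise ratio of beamformer w for user k."""
    sig = np.abs(H[k] @ w) ** 2
    leak = sum(np.abs(H[j] @ w) ** 2 for j in range(H.shape[0]) if j != k)
    return sig / (noise + leak)

w = slnr_precoder(H, 0, alpha=1.0)
```

    With alpha equal to the noise power, this is the exact SLNR maximizer; the point of the paper is that under channel estimation errors a different, analytically derived alpha gives better resilience.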

  17. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram; Kammoun, Abla; Alouini, Mohamed-Slim

    2017-01-01

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that mitigate the inter-cell interference. In particular, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR-based precoding design in which the regularization factor is used to allow better resilience with respect to the channel estimation errors. Based on tools from random matrix theory, we provide an analytical analysis of the SINR and SLNR performances. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided in order to validate our findings and to confirm the performance of the proposed precoding scheme.

  18. An Efficient Approach for Fast and Accurate Voltage Stability Margin Computation in Large Power Grids

    Directory of Open Access Journals (Sweden)

    Heng-Yi Su

    2016-11-01

    This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to the voltage collapse phenomenon. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with the cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
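    The extrapolation idea can be illustrated on a textbook two-bus PV curve (E = 1 p.u., X = 1 p.u., unity power factor), whose exact maximum loadability is 0.5 p.u.; this is not a case from the paper, and for this analytic curve a low-order polynomial fit in V² recovers the nose, where the paper applies cubic spline extrapolation to realistic curves.

```python
import numpy as np

# Toy nose-point extrapolation: continuation gives points on the upper
# branch of a PV curve short of the nose; a polynomial fit is then
# extrapolated to its maximum to estimate the collapse point. The
# two-bus curve below is a textbook example, not a paper test case.

# Continuation samples short of the nose (load P in p.u.)
P = np.linspace(0.1, 0.45, 8)
V = np.sqrt(0.5 + np.sqrt(0.25 - P**2))      # upper-branch bus voltage

# Fit P^2 as a polynomial in V^2 and locate its maximum (the nose)
c = np.polyfit(V**2, P**2, 2)
u_nose = np.roots(np.polyder(c)).real[0]
vsm_estimate = np.sqrt(np.polyval(c, u_nose))  # estimated max loadability
```

    The appeal of this family of methods is visible even in the toy: the collapse point is recovered from samples that stop well short of it, avoiding the ill-conditioned power-flow solutions near the nose.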

  19. Throughput Analysis of Large Wireless Networks with Regular Topologies

    Directory of Open Access Journals (Sweden)

    Hong Kezhu

    2007-01-01

    Full Text Available The throughput of large wireless networks with regular topologies is analyzed under two medium-access control schemes: synchronous array method (SAM) and slotted ALOHA. The regular topologies considered are square, hexagon, and triangle. Both nonfading channels and Rayleigh fading channels are examined. Furthermore, both omnidirectional antennas and directional antennas are considered. Our analysis shows that the SAM leads to a much higher network throughput than the slotted ALOHA. The network throughput in this paper is measured in either bits-hops per second per Hertz per node or bits-meters per second per Hertz per node. The exact connection between the two measures is shown for each topology. With these two fundamental units, the network throughput shown in this paper can serve as a reliable benchmark for future works on network throughput of large networks.

  20. Throughput Analysis of Large Wireless Networks with Regular Topologies

    Directory of Open Access Journals (Sweden)

    Kezhu Hong

    2007-04-01

    Full Text Available The throughput of large wireless networks with regular topologies is analyzed under two medium-access control schemes: synchronous array method (SAM) and slotted ALOHA. The regular topologies considered are square, hexagon, and triangle. Both nonfading channels and Rayleigh fading channels are examined. Furthermore, both omnidirectional antennas and directional antennas are considered. Our analysis shows that the SAM leads to a much higher network throughput than the slotted ALOHA. The network throughput in this paper is measured in either bits-hops per second per Hertz per node or bits-meters per second per Hertz per node. The exact connection between the two measures is shown for each topology. With these two fundamental units, the network throughput shown in this paper can serve as a reliable benchmark for future works on network throughput of large networks.

  1. Demonstration of Active Power Controls by Utility-Scale PV Power Plant in an Island Grid: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Gevorgian, Vahan; O'Neill, Barbara

    2017-02-01

    The National Renewable Energy Laboratory (NREL), AES, and the Puerto Rico Electric Power Authority conducted a demonstration project on a utility-scale photovoltaic (PV) plant to test the viability of providing important ancillary services from this facility. As solar generation increases globally, there is a need for innovation and increased operational flexibility. A typical PV power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. In this way, it can mitigate the impact of its variability on the grid and contribute to important system requirements more like traditional generators. In 2015, testing was completed on a 20-MW AES plant in Puerto Rico, and a large amount of test data was produced and analyzed that demonstrates the ability of PV power plants to provide various types of new grid-friendly controls. The data showed how active power controls can elevate PV from being simply an intermittent energy resource to a provider of additional ancillary services for an isolated island grid. Specifically, the tests conducted included PV plant participation in automatic generation control, provision of droop response, and fast frequency response.

  2. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    Science.gov (United States)

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
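
The "unit cell" reorganization can be pictured as a spatial index: each Lagrangian particle (or pathline sample) is binned by the Eulerian cell that contains it, so cell-local queries touch only one small bucket instead of scanning all particles. A minimal sketch with toy coordinates and a hypothetical cell size (not the authors' actual file layout):

```python
from collections import defaultdict

# Toy Lagrangian samples (particle id -> position); dx is a hypothetical
# Eulerian cell size.
dx = 1.0
particles = [(0.2, 0.3), (0.7, 0.1), (1.4, 0.5), (2.1, 1.9), (0.9, 0.95)]

# Reorganize particle data by the Eulerian "unit cell" containing each sample,
# so queries against a cell touch only its own bucket.
cells = defaultdict(list)
for pid, (x, y) in enumerate(particles):
    cells[(int(x // dx), int(y // dx))].append(pid)

in_cell = cells[(0, 0)]   # all particle ids recorded under cell (0, 0)
```

The same bucketing generalizes to 3D and, stored per-cell on disk, gives the out-of-core access pattern the abstract describes.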

  3. Planet-Scale Grid: A particle collider leads data grid developers to unprecedented dimensions

    CERN Multimedia

    Thibodeau, Patrick

    2005-01-01

    In 2007, scientists will begin smashing protons and ions together in a massive, multinational experiment to understand what the universe looked like tiny fractions of a second after the Big Bang. The particle accelerator used in this test will release a vast flood of data on a scale unlike anything seen before, and for that, scientists will need a computing grid of equally great capability.

  4. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: Future high-energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousand) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, has been chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high-energy physics group in Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource providers is summarized. In cooperation with the local IT center (ZID), we installed a flexible grid system that uses PCs (at the moment 162) in students' labs during nights, weekends, and holidays; it is used in particular to compare different systems (local resource managers, other grid software, e.g. from the NorduGrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  5. Fractal Characteristics Analysis of Blackouts in Interconnected Power Grid

    DEFF Research Database (Denmark)

    Wang, Feng; Li, Lijuan; Li, Canbing

    2018-01-01

    Power failure models are key to understanding the mechanism of large-scale blackouts. In this letter, the similarity of blackouts in interconnected power grids (IPGs) and their sub-grids is discovered by fractal characteristics analysis to simplify the failure models of the IPG. The distribution characteristics of blackouts in various sub-grids are demonstrated based on the Kolmogorov-Smirnov (KS) test. The fractal dimensions (FDs) of the IPG and its sub-grids are then obtained by using the KS test and maximum likelihood estimation (MLE). Blackout data from China were used...
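
A minimal sketch of the two statistical tools the letter relies on, applied to synthetic heavy-tailed "blackout size" data: MLE for a continuous power-law exponent (the quantity the FD analysis builds on) and the KS distance between the empirical and fitted distributions. The exponent, cutoff, and sample size below are illustrative, not taken from the Chinese records:

```python
import math
import random

random.seed(1)
alpha_true, xmin, n = 2.5, 1.0, 5000
# Synthetic heavy-tailed "blackout sizes": inverse-transform samples from
# p(x) ~ x^(-alpha), x >= xmin.
xs = sorted(xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
            for _ in range(n))

# Continuous power-law exponent by maximum likelihood estimation (MLE)
alpha_hat = 1.0 + n / sum(math.log(x / xmin) for x in xs)

# Kolmogorov-Smirnov distance between empirical and fitted CDFs
def cdf(x, a):
    return 1.0 - (x / xmin) ** (1.0 - a)

ks = max(max((i + 1) / n - cdf(x, alpha_hat), cdf(x, alpha_hat) - i / n)
         for i, x in enumerate(xs))
```

A small KS distance indicates the fitted power law is consistent with the data; comparing exponents fitted per sub-grid is one way to quantify the "similarity" the letter exploits.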

  6. Research on unit commitment with large-scale wind power connected power system

    Science.gov (United States)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

    Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to the stochastic volatility of wind. Unit commitment including wind farms is analyzed in terms of both modeling and solution methods. Model structures and characteristics are summarized after classification according to objective functions and constraints. Finally, the open issues and possible directions of future research and development are discussed, which can adapt to the requirements of the electricity market, energy-saving generation dispatch, and the smart grid, and can serve as a reference for researchers and practitioners in this field.

  7. Large temporal scale and capacity subsurface bulk energy storage with CO2

    Science.gov (United States)

    Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.

    2017-12-01

    Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient and short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages, compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storing excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid, where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.

  8. Grid computing in Pakistan: an opening to Large Hadron Collider experiments

    International Nuclear Information System (INIS)

    Batool, N.; Osman, A.; Mahmood, A.; Rana, M.A.

    2009-01-01

    A grid computing facility was developed at the sister institutes Pakistan Institute of Nuclear Science and Technology (PINSTECH) and Pakistan Institute of Engineering and Applied Sciences (PIEAS) in collaboration with the Large Hadron Collider (LHC) Computing Grid during the early years of the present decade. The grid facility PAKGRID-LCG2, one of the grid nodes in Pakistan, was developed employing mainly local means and is capable of supporting local and international research and computational tasks in the domain of the LHC Computing Grid. The functional status of the facility is presented in terms of the number of jobs performed. The facility provides a forum for local researchers in the field of high energy physics to participate in the LHC experiments and related activities at the European particle physics research laboratory (CERN), one of the best physics laboratories in the world. It also provides a platform for an emerging computing technology (CT). (author)

  9. Received signal strength in large-scale wireless relay sensor network: a stochastic ray approach

    NARCIS (Netherlands)

    Hu, L.; Chen, Y.; Scanlon, W.G.

    2011-01-01

    The authors consider a point percolation lattice representation of a large-scale wireless relay sensor network (WRSN) deployed in a cluttered environment. Each relay sensor corresponds to a grid point in the random lattice and the signal sent by the source is modelled as an ensemble of photons that

  10. Application scenario analysis of Power Grid Marketing Large Data

    Science.gov (United States)

    Li, Xin; Zhang, Yuan; Zhang, Qianyu

    2018-01-01

    In recent years, large data has become an important strategic asset in the commercial economy, and its efficient management and application have become a focus of government, enterprise, and academia. Power grid marketing data cover real data on electricity and other energy consumption and its costs, which are closely related to each customer and to overall economic operation. Fully tapping the inherent value of marketing data is of great significance for a power grid company seeking to respond rapidly and efficiently to market demand and to improve service levels. The development of large data technology provides a new technical scheme for the development of the marketing business under the new situation. Based on a study of the current situation of the marketing business, marketing information systems, and marketing data, this paper puts forward application directions for marketing data and designs typical scenarios for internal and external applications.

  11. Deconstructing the concept of renewable energy-based mini-grids for rural electrification in East Africa

    DEFF Research Database (Denmark)

    Pedersen, Mathilde Brix

    2016-01-01

    The goal of providing universal energy access to all by 2030 under the UN-led SE4ALL initiative calls for new and innovative solutions to rural electrification and is fuelling the recent interest in mini-grids. Mini-grid solutions are emerging as a third alternative for rural electrification, coming between the option of large-scale grid extension and pico-scale stand-alone solutions like solar home systems or solar lanterns. International expectations of mini-grids are high, with the International Energy Agency suggesting that they will play a significant role in reaching the goal of universal ... electrification and the challenges identified in the literature, the study concludes by proposing three avenues for further research. For further resources related to this article, please visit the WIREs website.

  12. Expected Future Conditions for Secure Power Operation with Large Scale of RES Integration

    International Nuclear Information System (INIS)

    Majstrovic, G.; Majstrovic, M.; Sutlovic, E.

    2015-01-01

    EU energy strategy is strongly focused on the large-scale integration of renewable energy sources. The most dominant part is taken by variable sources, namely wind power plants. Grid integration of intermittent sources while keeping the system stable and secure is one of the biggest challenges for the TSOs. This aspect is often neglected by energy policy makers, so this paper deals with the expected future conditions for secure power system operation with large-scale wind integration. It gives an overview of expected wind integration development in the EU, as well as expected P/f regulation and control needs. The paper concludes with several recommendations. (author).

  13. FY 1998 Report on development of large-scale wind power generation systems. Feasibility study on development of new technologies for wind power generation (Study on the development of wind power generation systems for small-scale power grids); 1998 nendo ogata furyoku hatsuden system kaihatsu seika hokokusho. Furyoku hatsuden shingijutsu kaihatsu kanosei chosa (shokibo keito ni okeru furyoku hatsuden system ni kansuru chosa)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    This survey covers the characteristics of small-scale power grids, feasibility studies on the introduction of wind turbines into these grids, and the status of application of wind turbines on isolated islands and the like in the advanced countries, in order to promote the introduction of wind power generation systems on isolated islands. It is concluded that small-capacity wind power generation systems can possibly be introduced into the intermediate- to large-scale grids of isolated islands, 1,500 kW or larger in capacity, in the Tokyo, Kyushu and Okinawa Electric Power Companies' areas. A scheduled steamer serving isolated islands can carry at most a 10-ton truck, so the introduction of a small-scale wind turbine is more advantageous from the viewpoint of transportation cost. Some foreign countries have sites that have achieved a high percentage of grid-connected wind power, by virtue of stable wind conditions and connection of the units to the main high-voltage grids in manners different from those adopted in Japan. In developing wind turbine bodies, most of the foreign countries surveyed are concentrating their efforts on the development and manufacture of large-size units, paying little attention to the development of small-size wind turbines for isolated islands. For the future, the promising concepts include the adoption of wind turbines small in capacity and easy to transport and assemble, and hybrid systems combined with power storage units. (NEDO)

  14. Large-scale deployment of electric vehicles in Germany by 2030: An analysis of grid-to-vehicle and vehicle-to-grid concepts

    International Nuclear Information System (INIS)

    Loisel, Rodica; Pasaoglu, Guzay; Thiel, Christian

    2014-01-01

    This study analyses battery electric vehicles (BEVs) in the future German power system and makes projections of the BEVs hourly load profile by car size (‘mini’, ‘small’, ‘compact’ and ‘large’). By means of a power plant dispatching optimisation model, the study assesses the optimal BEV charging/discharging strategies in grid-to-vehicle (G2V) and vehicle-to-grid (V2G) schemes. The results show that the 2% rise in power demand required to power these BEVs does not hamper system stability provided an optimal G2V scheme is applied. Moreover, such BEV deployment can contribute to further integrating wind and solar power generation. Applying a V2G scheme would increase the capacity factors of base and mid-load power plants, leading to a higher integration of intermittent renewables and resulting in a decrease in system costs. However, the evaluation of the profitability of BEVs shows that applying a V2G scheme is not a viable economic option due to the high cost of investing in batteries. Some BEV owners would make modest profits (€6 a year), but a higher number would sustain losses, for reasons of scale. For BEVs to become part of the power system, further incentives are necessary to make the business model attractive to car owners. - Highlights: • Optimal strategies for charging/discharging battery electric vehicles are assessed. • G2V scheme improves the stability of the future German power system. • V2G scheme would increase the capacity factors of base and mid-load power plants. • V2G scheme is not a viable economic option due to high batteries investment cost. • Further incentives are necessary to make the business model attractive to car owners

  15. Singular Buildings on Regular Grids. Churches in L’Eixample and La Baixa

    Directory of Open Access Journals (Sweden)

    Alba Arboix-Alió

    2016-10-01

    Full Text Available A building is considered unique when it stands out within the common fabric of the city due to its form, its nature, and its production and serialization process. If this architectural singularity is accompanied by an urban distinction, the result is much more effective, because the compound becomes an urban enclave capable of arranging and hierarchically organising the city. The most illustrative example for historic cities with a Catholic tradition is probably the church together with the public space that materializes around it. For centuries, the sacred building and the atrium that precedes it have represented the city’s reference point and the articulating centre of social, economic and cultural life. Nevertheless, while this is more or less evident in old towns consolidated over time, how is it solved in modern cities formed by a regular urban layout whose grid takes precedence over the freedom of the buildings? With Barcelona and Lisbon as case studies, the paper focuses on the implementation and typology of the most paradigmatic churches in the neighbourhoods of L’Eixample Cerdà and La Baixa Pombalina.

  16. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing.

    Directory of Open Access Journals (Sweden)

    Hansaim Lim

    2016-10-01

    Full Text Available Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we are presenting a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing

  17. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing.

    Science.gov (United States)

    Lim, Hansaim; Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; He, Di; Zhuang, Luke; Meng, Patrick; Xie, Lei

    2016-10-01

    Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we are presenting a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence.
Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and
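
To make "dual regularized one-class collaborative filtering" concrete, here is a heavily simplified sketch: a weighted one-class matrix factorization in which observed drug-target pairs get full weight, unobserved pairs get a small weight as weak negatives, and an L2 penalty regularizes both factor matrices. REMAP additionally uses graph (similarity) regularizers and a much faster solver; the toy interaction set and all parameters below are invented for illustration:

```python
import random

random.seed(0)
n_chem, n_prot, k = 6, 5, 2
# Invented one-class data: only positive (observed) chemical-protein pairs.
obs = {(0, 0), (0, 1), (1, 0), (2, 1), (3, 2), (4, 3), (5, 4)}

lam, w_neg, lr = 0.05, 0.1, 0.05      # L2 penalty, weight of unobserved pairs
U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_chem)]
V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_prot)]

def pred(i, j):
    return sum(U[i][f] * V[j][f] for f in range(k))

def loss():
    s = 0.0
    for i in range(n_chem):
        for j in range(n_prot):
            w = 1.0 if (i, j) in obs else w_neg
            s += w * ((1.0 if (i, j) in obs else 0.0) - pred(i, j)) ** 2
    return s + lam * (sum(x * x for row in U for x in row)
                      + sum(x * x for row in V for x in row))

loss0 = loss()
for _ in range(300):                   # plain SGD; REMAP uses a faster solver
    for i in range(n_chem):
        for j in range(n_prot):
            w = 1.0 if (i, j) in obs else w_neg
            e = (1.0 if (i, j) in obs else 0.0) - pred(i, j)
            for f in range(k):
                du = -2.0 * w * e * V[j][f] + 2.0 * lam * U[i][f] / n_prot
                dv = -2.0 * w * e * U[i][f] + 2.0 * lam * V[j][f] / n_chem
                U[i][f] -= lr * du
                V[j][f] -= lr * dv
loss1 = loss()
```

After fitting, the score pred(i, j) for an unobserved pair is the "off-target" prediction: high scores flag candidate interactions (and repurposing opportunities) for experimental follow-up.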

  18. A design of irregular grid map for large-scale Wi-Fi LAN fingerprint positioning systems.

    Science.gov (United States)

    Kim, Jae-Hoon; Min, Kyoung Sik; Yeo, Woon-Young

    2014-01-01

    The rapid growth of mobile communication and the proliferation of smartphones have drawn significant attention to location-based services (LBSs). One of the most important factors in the vitalization of LBSs is the accurate position estimation of a mobile device. The Wi-Fi positioning system (WPS) is a new positioning method that measures received signal strength indication (RSSI) data from all Wi-Fi access points (APs) and stores them in a large database as a form of radio fingerprint map. Because of the millions of APs in urban areas, radio fingerprints are seriously contaminated and confused. Moreover, the algorithmic advances for positioning face computational limitation. Therefore, we present a novel irregular grid structure and data analytics for efficient fingerprint map management. The usefulness of the proposed methodology is presented using the actual radio fingerprint measurements taken throughout Seoul, Korea.

  19. A Design of Irregular Grid Map for Large-Scale Wi-Fi LAN Fingerprint Positioning Systems

    Directory of Open Access Journals (Sweden)

    Jae-Hoon Kim

    2014-01-01

    Full Text Available The rapid growth of mobile communication and the proliferation of smartphones have drawn significant attention to location-based services (LBSs). One of the most important factors in the vitalization of LBSs is the accurate position estimation of a mobile device. The Wi-Fi positioning system (WPS) is a new positioning method that measures received signal strength indication (RSSI) data from all Wi-Fi access points (APs) and stores them in a large database as a form of radio fingerprint map. Because of the millions of APs in urban areas, radio fingerprints are seriously contaminated and confused. Moreover, the algorithmic advances for positioning face computational limitation. Therefore, we present a novel irregular grid structure and data analytics for efficient fingerprint map management. The usefulness of the proposed methodology is presented using the actual radio fingerprint measurements taken throughout Seoul, Korea.
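
The core lookup behind any fingerprint map, regular or irregular, is a nearest-neighbour search in RSSI space. The sketch below uses a hypothetical four-cell map and three APs to show the matching step; the paper's contribution changes how the cells themselves are laid out and managed, not this basic estimator:

```python
import math

# Hypothetical fingerprint database: grid cell -> {AP id: mean RSSI (dBm)}
fingerprints = {
    (0, 0): {"ap1": -40, "ap2": -70, "ap3": -80},
    (0, 1): {"ap1": -55, "ap2": -60, "ap3": -75},
    (1, 0): {"ap1": -60, "ap2": -75, "ap3": -55},
    (1, 1): {"ap1": -70, "ap2": -50, "ap3": -60},
}

def locate(scan, k=2):
    """k-nearest cells in RSSI space, averaged into a position estimate."""
    def dist(fp):
        shared = set(scan) & set(fp)
        return math.sqrt(sum((scan[a] - fp[a]) ** 2 for a in shared) / len(shared))
    ranked = sorted(fingerprints, key=lambda cell: dist(fingerprints[cell]))[:k]
    return (sum(c[0] for c in ranked) / k, sum(c[1] for c in ranked) / k)

# A scan close to cell (0, 0) lands between its two nearest cells
est = locate({"ap1": -42, "ap2": -68, "ap3": -79})
```

With millions of APs, the cost of this search is dominated by the number and shape of the cells, which is exactly what an irregular grid tunes.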

  20. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
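
In its simplest form, Tikhonov regularization replaces the least-squares normal equations with a damped version, trading a small increase in data misfit for a smoother (de-striped) solution. A generic sketch on a toy random system, not GRACE data:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 10
A = rng.standard_normal((m, n))                 # generic observation matrix
x_true = rng.standard_normal(n)
b = A @ x_true + 0.5 * rng.standard_normal(m)   # noisy observations

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 via damped normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

x_ls = tikhonov(A, b, 0.0)     # ordinary least squares (lam = 0)
x_reg = tikhonov(A, b, 10.0)   # damped solution: smaller norm, larger residual
```

The damping parameter plays the role of the regularization strength chosen in the two-step GRACE processing: large enough to suppress noise-driven structure (stripes), small enough to retain the observed signal.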

  1. Mapping spatial patterns of denitrifiers at large scales (Invited)

    Science.gov (United States)

    Philippot, L.; Ramette, A.; Saby, N.; Bru, D.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.

    2010-12-01

    Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31 500 km2 region of France, using a 16 x 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 739 km and used geostatistical modeling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scales can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.

  2. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    Science.gov (United States)

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling.

  3. OpenMP parallelization of a gridded SWAT (SWATG)

    Science.gov (United States)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term, high-spatial-resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, also presents such problems. The time-consuming computation limits applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed on one CPU with a 15-thread configuration. The results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale, high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
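
The parallelization pattern here is the simplest kind: within a time step, each gridded HRU's water-balance update is independent of all the others, so the loop over cells can be split across threads. SWATGP does this with OpenMP directives in compiled code; the sketch below only mimics the decomposition in Python with a thread pool, using an invented toy water balance:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy per-cell water-balance step (invented arithmetic): add precipitation,
# subtract evapotranspiration, release a fraction of storage as runoff.
def step_cell(storage, precip=5.0, et=2.0, k_runoff=0.3):
    s = storage + precip - et
    return s - k_runoff * max(s, 0.0)

cells = [float(i % 10) for i in range(1000)]     # initial storage per cell

def step_serial(cells):
    return [step_cell(c) for c in cells]

def step_parallel(cells, workers=4):
    # Analogue of an OpenMP "parallel do" over HRUs; because cells are
    # independent within a step, the result matches the serial loop exactly.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(step_cell, cells))
```

In compiled code the same decomposition yields near-linear speedups until memory bandwidth or load imbalance between cells dominates, which is consistent with the up-to-nine-times figure reported above.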

  4. A novel algorithm for incompressible flow using only a coarse grid projection

    KAUST Repository

    Lentine, Michael; Zheng, Wen; Fedkiw, Ronald

    2010-01-01

    Large scale fluid simulation can be difficult using existing techniques due to the high computational cost of using large grids. We present a novel technique for simulating detailed fluids quickly. Our technique coarsens the Eulerian fluid grid

  5. Compounded effects of heat waves and droughts over the Western Electricity Grid: spatio-temporal scales of impacts and predictability toward mitigation and adaptation.

    Science.gov (United States)

    Voisin, N.; Kintner-Meyer, M.; Skaggs, R.; Xie, Y.; Wu, D.; Nguyen, T. B.; Fu, T.; Zhou, T.

    2016-12-01

    Heat waves and droughts are projected to become more frequent and intense. We have seen in the past how each of these extreme climate events raises electricity demand and constrains electricity generation, challenging power system operations. Our aim here is to understand their compounding effects under historical conditions. We present a benchmark of Western US grid performance under 55 years of historical climate, including droughts, using 2010 levels of water demand, water management infrastructure, and electricity grid infrastructure and operations. We leverage CMIP5 historical hydrology simulations and force a large-scale river-routing and reservoir model with 2010-level sectoral water demands. The regulated flow at each water-dependent generating plant is processed to adjust the water-dependent electricity generation parameterization in a production cost model that represents 2010-level power system operations with hourly 2010 energy demand. The resulting benchmark includes a risk distribution of several grid performance metrics (unserved energy, production cost, carbon emissions) as a function of inter-annual variability in regional water availability and its predictability from large-scale climate oscillations. In the second part of the presentation, we describe an approach to map historical heat waves onto this benchmark grid performance using a building energy demand model. The impact of the heat waves, combined with the impact of droughts, is explored at multiple scales to understand the compounding effects. Vulnerabilities of the power generation and transmission systems are highlighted to guide future adaptation.

  6. Optimizing Electric Vehicle Coordination Over a Heterogeneous Mesh Network in a Scaled-Down Smart Grid Testbed

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Lévesque, Martin; Maier, Martin

    2015-01-01

    High penetration of renewable energy sources and electric vehicles (EVs) creates power imbalance and congestion in the existing power network, and hence causes significant problems in control and operation. Despite huge efforts by electric utilities, governments, and researchers......, smart grid (SG) technology is still at the developmental stage to address those issues. In this regard, a smart grid testbed (SGT) is desirable to develop, analyze, and demonstrate various novel SG solutions, namely demand response, real-time pricing, and congestion management. In this paper, a novel SGT...... is developed in a laboratory by scaling a 250 kVA, 0.4 kV real low-voltage distribution feeder down to 1 kVA, 0.22 kV. Information and communication technology is integrated in the scaled-down network to establish real-time monitoring and control. The novelty of the developed testbed is demonstrated...

  7. Utility-based Reinforcement Learning for Reactive Grids

    OpenAIRE

    Perez, Julien; Germain-Renaud, Cécile; Kégl, Balázs; Loomis, C.

    2008-01-01

    International audience; Large-scale production grids are an important case for autonomic computing. They follow a mutualization paradigm: decision-making (human or automatic) is distributed and largely independent, and, at the same time, it must implement the high-level goals of the grid management. This paper deals with the scheduling problem with two partially conflicting goals: fair-share and Quality of Service (QoS). Fair sharing is a well-known issue motivated by return on investment for pa...

  8. Lattice models for large-scale simulations of coherent wave scattering

    Science.gov (United States)

    Wang, Shumin; Teixeira, Fernando L.

    2004-01-01

    Lattice approximations for partial differential equations describing physical phenomena are commonly used for the numerical simulation of many problems otherwise intractable by pure analytical approaches. The discretization inevitably leads to many of the original symmetries to be broken or modified. In the case of Maxwell’s equations for example, invariance and isotropy of the speed of light in vacuum is invariably lost because of the so-called grid dispersion. Since it is a cumulative effect, grid dispersion is particularly harmful for the accuracy of results of large-scale simulations of scattering problems. Grid dispersion is usually combated by either increasing the lattice resolution or by employing higher-order schemes with larger stencils for the space and time derivatives. Both alternatives lead to increased computational cost to simulate a problem of a given physical size. Here, we introduce a general approach to develop lattice approximations with reduced grid dispersion error for a given stencil (and hence at no additional computational cost). The present approach is based on first obtaining stencil coefficients in the Fourier domain that minimize the maximum grid dispersion error for wave propagation at all directions (minimax sense). The resulting coefficients are then expanded into a Taylor series in terms of the frequency variable and incorporated into time-domain (update) equations after an inverse Fourier transformation. Maximally flat (Butterworth) or Chebyshev filters are subsequently used to minimize the wave speed variations for a given frequency range of interest. The use of such filters also allows for the adjustment of the grid dispersion characteristics so as to minimize not only the local dispersion error but also the accumulated phase error in a frequency range of interest.
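
    As context for the dispersion problem the abstract describes, the textbook dispersion relation of the standard second-order 1D leapfrog (Yee-type) scheme shows how the numerical phase velocity falls below c at coarse resolution; this is the baseline error that the optimized stencils aim to reduce. The sketch below uses the generic second-order scheme, not the paper's minimax-optimized coefficients:

```python
import numpy as np

def numerical_phase_velocity(ppw, courant=0.5):
    """Normalized phase velocity v_p/c of the standard second-order 1D
    leapfrog scheme, from its dispersion relation
        sin(w*dt/2) = S * sin(k*dx/2),   S = c*dt/dx (Courant number),
    evaluated at a resolution of `ppw` grid points per wavelength."""
    k_dx = 2.0 * np.pi / ppw                        # k * dx
    w_dt = 2.0 * np.arcsin(courant * np.sin(k_dx / 2.0))
    return w_dt / (courant * k_dx)                  # (w/k) / c

for ppw in (10, 20, 40):
    err = 1.0 - numerical_phase_velocity(ppw)
    print(f"{ppw:3d} points/wavelength: relative phase-velocity error {err:.2e}")
```

The error shrinks roughly by a factor of four each time the resolution doubles (second-order accuracy), which is exactly the cost-accuracy trade-off the reduced-dispersion stencils are designed to break.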

  9. Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset

    Science.gov (United States)

    Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.

    2017-12-01

    Here we present the first national-scale flood risk analyses using high resolution Facebook Connectivity Lab population data and data from a hyper-resolution flood hazard model. In recent years the field of large scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodelled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data-poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that for robust flood risk analysis to be undertaken, both hazard and exposure data should sufficiently resolve local scale features. Global flood frameworks now enable flood hazard data to be produced at 90 m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5 m resolution and representing a resolution increase over previous countrywide datasets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
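
    The hazard-exposure integration and the resolution mismatch the abstract mentions can be sketched with toy grids (all numbers below are hypothetical): a finer population raster is block-aggregated onto the coarser hazard cells before thresholding on water depth.

```python
import numpy as np

# Hypothetical coarse flood-depth grid (e.g., 90 m hazard cells), 2x2 cells
depth = np.array([[0.0, 1.2],
                  [0.3, 0.0]])            # metres of inundation

# Hypothetical finer population raster at 3x the hazard resolution (6x6 cells)
rng = np.random.default_rng(0)
pop = rng.integers(0, 5, size=(6, 6)).astype(float)

f = pop.shape[0] // depth.shape[0]        # resolution factor (3 here)
# Aggregate the fine population onto hazard cells by block summation
pop_coarse = pop.reshape(depth.shape[0], f, depth.shape[1], f).sum(axis=(1, 3))

# Exposure: people in cells inundated above a nominal 0.1 m threshold
exposed = pop_coarse[depth > 0.1].sum()
print(f"exposed population: {exposed:.0f} of {pop.sum():.0f}")
```

In a real analysis the rasters would not nest so neatly and a proper resampling step replaces the reshape trick, but the depth-threshold-then-sum structure is the same.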

  10. Large scale PV plants - also in Denmark. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Ahm, P [PA Energy, Malling (Denmark)]; Vedde, J [SiCon. Silicon and PV consulting, Birkeroed (Denmark)]

    2011-04-15

    Large scale PV (LPV) plants, i.e. plants with a capacity of more than 200 kW, have since 2007 constituted an increasing share of global PV installations. In 2009, large scale PV plants with a cumulative power of more than 1.3 GWp were connected to the grid. The necessary design data for LPV plants in Denmark are available or can be found, although irradiance data could be improved. There seem to be very few institutional barriers for LPV projects, but since no real LPV projects have been processed so far, these findings have to be regarded as preliminary. The fast-growing number of very large scale solar thermal plants for district heating applications supports these findings. It has further been investigated how to optimize the layout of LPV plants. Under Danish irradiance conditions, with several winter months of very low solar height, PV installations on flat surfaces will have to balance the requirements of physical space - and cost - against the loss of electricity production due to shadowing effects. The potential for LPV plants in Denmark is found in three main categories: PV installations on flat roofs of large commercial buildings, PV installations on other large scale infrastructure such as noise barriers, and ground mounted PV installations. The technical potential for all three categories is found to be significant, in the range of 50 - 250 km2. In terms of energy harvest, PV plants under Danish conditions exhibit an overall efficiency of about 10% in converting the energy content of the light, compared to about 0.3% for biomass. The theoretical ground area needed to produce the present annual electricity consumption of Denmark of 33-35 TWh is about 300 km2. The Danish grid codes and the electricity safety regulations mention very little about PV and nothing about LPV plants. It is expected that LPV plants will be treated similarly to big wind turbines.
A number of LPV plant scenarios have been investigated in detail based on real commercial offers and

  11. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Directory of Open Access Journals (Sweden)

    Zhongguang Fu

    2015-08-01

    Full Text Available As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology, and the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.

  12. Impact of network topology on synchrony of oscillatory power grids

    Energy Technology Data Exchange (ETDEWEB)

    Rohden, Martin; Sorge, Andreas; Witthaut, Dirk [Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany); Timme, Marc [Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany); Faculty of Physics, Georg August Universität Göttingen, Göttingen (Germany)

    2014-03-15

    Replacing conventional power sources by renewable sources in current power grids drastically alters their structure and functionality. In particular, power generation in the resulting grid will be far more decentralized, with a distinctly different topology. Here, we analyze the impact of grid topologies on spontaneous synchronization, considering regular, random, and small-world topologies and focusing on the influence of decentralization. We model the consumers and sources of the power grid as second-order oscillators. First, we analyze the global dynamics of the simplest non-trivial (two-node) network, which exhibits a synchronous (normal operation) state, a limit cycle (power outage), and coexistence of both. Second, we estimate stability thresholds for the collective dynamics of small network motifs, in particular star-like networks and regular grid motifs. For larger networks, we numerically investigate decentralization scenarios, finding that decentralization itself may support power grids in exhibiting a stable state for lower transmission line capacities. Decentralization may thus be beneficial for power grids, regardless of the details of their resulting topology. Regular grids show a specific sharper transition not found for random or small-world grids.
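
    For the simplest two-node network, the second-order oscillator ("swing equation") dynamics of the relative phase reduce to a driven, damped pendulum, which makes the synchronous fixed point concrete. The sketch below uses hypothetical parameter values and a plain explicit integration step, not the paper's setup:

```python
import numpy as np

# Relative-phase dynamics of a two-node generator-consumer pair:
#   phi'' = P - alpha * phi' - K * sin(phi)
# P: net transmitted power, alpha: damping, K: line coupling (K > P so a
# synchronous state exists). All values are illustrative.
P, alpha, K = 1.0, 0.5, 2.0
dt, steps = 0.01, 20000
phi, omega = 0.0, 0.0
for _ in range(steps):
    domega = P - alpha * omega - K * np.sin(phi)
    omega += dt * domega        # semi-implicit Euler step
    phi += dt * omega

# Normal operation: the phase locks at sin(phi*) = P/K and the frequency
# deviation decays to zero.
print(phi, omega, np.arcsin(P / K))
```

Lowering K below P removes the fixed point entirely, which is the pendulum analogue of the power-outage limit cycle discussed in the abstract.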

  13. REVIEW ON GRID INTERFACING OF MULTIMEGAWATT PHOTOVOLTAIC INVERTERS

    OpenAIRE

    Mr. Vilas S. Solanke*; Mr. Naveen Kumar

    2016-01-01

    This paper presents a review of the latest developments in the control of grid-connected photovoltaic energy conversion systems. It also presents existing control algorithms for three-phase and single-phase grid-connected photovoltaic (PV) systems. The paper focuses on one aspect of solar energy, namely grid interfacing of large-scale PV farms. Grid-connected PV systems can provide a number of benefits to electric utilities, such as power loss reduction, improve...

  14. FY 1998 Report on development of large-scale wind power generation systems. Feasibility study on development of new technologies for wind power generation (Study on the development of wind power generation systems for small-scale power grids); 1998 nendo ogata furyoku hatsuden system kaihatsu seika hokokusho. Furyoku hatsuden shingijutsu kaihatsu kanosei chosa (shokibo keito ni okeru furyoku hatsuden system ni kansuru chosa)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    This survey covers the characteristics of small-scale power grids, feasibility studies on the introduction of wind turbines into these grids, and the status of wind turbine deployment on isolated islands and the like in the advanced countries, in order to promote the introduction of wind power generation systems on isolated islands. It is concluded that small-capacity wind power generation systems can possibly be introduced into the intermediate- to large-scale grids of isolated islands, 1,500 kW or larger in capacity, in the Tokyo, Kyushu and Okinawa Electric Power Companies' areas. A scheduled steamer serving isolated islands can carry a truck of up to 10 tons, so the introduction of a small-scale wind turbine is more advantageous in terms of transportation cost. Some foreign countries have sites that have achieved a high percentage of grid connection of wind power units, owing to stable wind conditions and to connecting the units to the main high-voltage grids in manners different from those adopted in Japan. In developing wind turbine bodies, most of the foreign countries surveyed are concentrating their efforts on the development and manufacture of large-size units, paying little attention to the development of small-size wind turbines for isolated islands. For the future, the promising concepts include the adoption of wind turbines that are small in capacity and easy to transport and assemble, and hybrid systems combined with power storage units. (NEDO)

  15. Steering the Smart Grid

    NARCIS (Netherlands)

    Molderink, Albert; Bakker, Vincent; Bosman, M.G.C.; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    Increasing energy prices and the greenhouse effect lead to more awareness of energy efficiency of electricity supply. During the last years, a lot of technologies and optimization methodologies were developed to increase the efficiency, maintain the grid stability and support large scale

  16. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  17. SPATIALLY RESOLVED SPECTROSCOPY OF EUROPA’S LARGE-SCALE COMPOSITIONAL UNITS AT 3–4 μm WITH KECK NIRSPEC

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, P. D.; Brown, M. E.; Trumbo, S. K. [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States); Hand, K. P., E-mail: pfischer@caltech.edu [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States)

    2017-01-01

    We present spatially resolved spectroscopic observations of Europa’s surface at 3–4 μm obtained with the near-infrared spectrograph and adaptive optics system on the Keck II telescope. These are the highest quality spatially resolved reflectance spectra of Europa’s surface at 3–4 μm. The observations spatially resolve Europa’s large-scale compositional units at a resolution of several hundred kilometers. The spectra show distinct features and geographic variations associated with known compositional units; in particular, large-scale leading-hemisphere chaos shows a characteristic longward shift in peak reflectance near 3.7 μm compared to icy regions. These observations complement previous spectra of large-scale chaos, and can aid efforts to identify the endogenous non-ice species.

  18. Electricity network limitations on large-scale deployment of wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Fairbairn, R.J.

    1999-07-01

    This report sought to identify limitations on large-scale deployment of wind energy in the UK. A description of the existing electricity supply system in England, Scotland and Wales is given, and operational aspects of the integrated electricity networks, licence conditions, types of wind turbine generators, and the scope for deployment of wind energy in the UK are addressed. Technical limitations and the technical criteria stipulated by the Distribution and Grid Codes, the effects of system losses, and commercial issues are examined. Potential solutions to technical limitations are proposed, and recommendations are outlined.

  19. Explosive force of primacord grid forms large sheet metal parts

    Science.gov (United States)

    1966-01-01

    Primacord which is woven through fish netting in a grid pattern is used for explosive forming of large sheet metal parts. The explosive force generated by the primacord detonation is uniformly distributed over the entire surface of the sheet metal workpiece.

  20. General Forced Oscillations in a Real Power Grid Integrated with Large Scale Wind Power

    Directory of Open Access Journals (Sweden)

    Ping Ju

    2016-07-01

    Full Text Available According to monitoring by the wide area measurement system, inter-area oscillations occur more and more frequently in a real power grid of China, and they closely resemble forced oscillations. The conventional forced oscillation theory cannot explain the mechanism of these oscillations well, because they vary with random amplitude within a narrow frequency band. To explain their mechanism, the general forced oscillation (GFO) mechanism is considered. A GFO is a power system oscillation excited by random excitations, such as power fluctuations from renewable power generation. Firstly, properties of the oscillations observed in the real power grid are analyzed; using the GFO mechanism, the observed oscillations appear to be GFOs caused by some random excitation. Then the variation of the wind power measured in this power grid is found to be the random excitation which may cause the GFO phenomenon. Finally, simulations are carried out and the power spectral density of the simulated oscillation is compared to that of the observed oscillation; the two are similar to each other. The observed oscillation is thus explained well using the GFO mechanism, and the GFO phenomenon has now been observed for the first time in real power grids.
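
    The GFO picture, a lightly damped electromechanical mode excited by broadband random power fluctuations, can be illustrated with a toy single-mode simulation; the resulting power spectral density peaks in a narrow band around the modal frequency even though the forcing is white. All parameters below are hypothetical:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs, T = 100.0, 2000.0                 # sample rate [Hz], duration [s]
f0, zeta = 1.0, 0.05                  # modal frequency [Hz], damping ratio
w0, dt = 2 * np.pi * f0, 1.0 / fs
n = int(T * fs)
x = np.zeros(n)                       # modal displacement (e.g., angle deviation)
v = 0.0
noise = rng.standard_normal(n)        # broadband random (wind-like) excitation
for i in range(1, n):
    a = noise[i] - 2 * zeta * w0 * v - w0**2 * x[i - 1]
    v += dt * a                       # semi-implicit Euler integration
    x[i] = x[i - 1] + dt * v

f, pxx = welch(x, fs=fs, nperseg=4096)
print(f"PSD peak at {f[np.argmax(pxx)]:.2f} Hz (mode at {f0} Hz)")
```

Comparing such a simulated spectrum against a measured one is, in spirit, the PSD comparison the abstract describes.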

  1. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
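
    The curse-of-dimensionality argument above can be made concrete by counting degrees of freedom: a regular sparse grid keeps only the level combinations with small total level. The counts below use the standard interior-point construction from the sparse-grid literature, not this paper's specific discretization:

```python
from itertools import product

def full_grid_points(n, d):
    """Interior points of the full tensor-product grid of level n in d dims."""
    return (2**n - 1) ** d

def sparse_grid_points(n, d):
    """Interior points of the regular sparse grid of level n in d dims:
    level multi-indices l with |l|_1 <= n + d - 1, each level l_i
    contributing 2^(l_i - 1) hierarchical points per dimension."""
    total = 0
    for levels in product(range(1, n + 1), repeat=d):
        if sum(levels) <= n + d - 1:
            pts = 1
            for l in levels:
                pts *= 2 ** (l - 1)
            total += pts
    return total

for d in (1, 2, 4):
    print(f"level 6, d={d}: full {full_grid_points(6, d):>10d}, "
          f"sparse {sparse_grid_points(6, d):>6d}")
```

In one dimension the two constructions coincide, while already at d = 4 the sparse grid needs thousands of points where the full grid needs millions, which is what makes the grid-based discretization in the abstract scale to higher embedding dimensions.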

  2. FDTD method for laser absorption in metals for large scale problems.

    Science.gov (United States)

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grids. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.

  3. Differences between downscaling with spectral and grid nudging using WRF

    Directory of Open Access Journals (Sweden)

    P. Liu

    2012-04-01

    Full Text Available Dynamical downscaling has been extensively used to study regional climate forced by large-scale global climate models. During the downscaling process, however, the simulation of regional climate models (RCMs) tends to drift away from the driving fields. The need to retain the large-scale features (from the driving fields) while preserving the small-scale features (from the RCMs) has led to the development of "nudging" techniques. Here, we examine the performance of two nudging techniques, grid and spectral nudging, in the downscaling of NCEP/NCAR data with the Weather Research and Forecasting (WRF) Model. The simulations are compared against the North America Regional Reanalysis (NARR) data set at different scales of interest using the concept of similarity. We show that with an appropriate choice of wave numbers, spectral nudging outperforms grid nudging in balancing the performance of the simulation at large and small scales.
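
    The difference between the two techniques can be sketched in one dimension: grid nudging relaxes the full field toward the driving data, while spectral nudging relaxes only the low-wavenumber (large-scale) part, leaving the RCM's fine scales untouched. The implementation below is an idealized illustration, not the WRF nudging code:

```python
import numpy as np

def spectral_nudge(rcm, driving, k_max, g=0.1):
    """One relaxation step of idealized 1D spectral nudging: only wavenumbers
    k <= k_max (the large scales) are relaxed toward the driving field with
    strength g; all smaller scales are left to the regional model."""
    F_rcm = np.fft.rfft(rcm)
    F_drv = np.fft.rfft(driving)
    F_rcm[: k_max + 1] += g * (F_drv[: k_max + 1] - F_rcm[: k_max + 1])
    return np.fft.irfft(F_rcm, n=rcm.size)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driving = np.cos(x)                            # large-scale driving field (k = 1)
rcm = 0.5 * np.cos(x) + 0.3 * np.sin(20 * x)   # drifted large scale + fine detail

for _ in range(300):
    rcm = spectral_nudge(rcm, driving, k_max=3)
# The k = 1 component has converged to the driving field;
# the k = 20 detail is untouched.
```

Grid nudging would correspond to relaxing every Fourier coefficient (equivalently, the field itself at every grid point), which damps the fine-scale detail along with the drift; this is the trade-off behind choosing the cut-off wave numbers in the abstract.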

  4. Integrating Grid Services into the Cray XT4 Environment

    OpenAIRE

    Cholia, Shreyas

    2009-01-01

    The 38640 core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped down compute-node operating system without dynamic library support, a share...

  5. Observing the Cosmic Microwave Background Polarization with Variable-delay Polarization Modulators for the Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Harrington, Kathleen; CLASS Collaboration

    2018-01-01

    The search for inflationary primordial gravitational waves and the optical depth to reionization, both through their imprint on the large angular scale correlations in the polarization of the cosmic microwave background (CMB), has created the need for high sensitivity measurements of polarization across large fractions of the sky at millimeter wavelengths. These measurements are subject to instrumental and atmospheric 1/f noise, which has motivated the development of polarization modulators to facilitate the rejection of these large systematic effects. Variable-delay polarization modulators (VPMs) are used in the Cosmology Large Angular Scale Surveyor (CLASS) telescopes as the first element in the optical chain to rapidly modulate the incoming polarization. VPMs consist of a linearly polarizing wire grid in front of a moveable flat mirror; varying the distance between the grid and the mirror produces a changing phase shift between polarization states parallel and perpendicular to the grid, which modulates Stokes U (linear polarization at 45°) and Stokes V (circular polarization). The reflective and scalable nature of the VPM enables its placement as the first optical element in a reflecting telescope. This simultaneously allows a lock-in style polarization measurement and the separation of sky polarization from any instrumental polarization farther along in the optical chain. The Q-band CLASS VPM was the first VPM to begin observing the CMB full time, in 2016. I will present its design and characterization, and demonstrate how modulating polarization significantly rejects atmospheric and instrumental long-time-scale noise.
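
    The modulation principle can be sketched with an idealized transfer function: the grid-mirror separation d sets a phase delay between the two polarization components, moving detected power between Stokes U and V. This is a normal-incidence toy model with illustrative numbers, not the CLASS instrument's actual (frequency- and angle-dependent) transfer function:

```python
import numpy as np

def vpm_signal(U, V, d, wavelength):
    """Idealized VPM output: the round-trip path difference 2d between the
    reflection off the wire grid and off the mirror gives a phase delay
    delta = (2*pi/lambda) * 2d between the polarization components,
    mixing Stokes U and V in the detected signal."""
    delta = 4.0 * np.pi * d / wavelength
    return U * np.cos(delta) + V * np.sin(delta)

wavelength = 7.5e-3                    # ~40 GHz (Q band), metres; illustrative
U, V = 1.0, 0.2                        # hypothetical incoming Stokes parameters
for d in (0.0, wavelength / 8, wavelength / 4):
    print(f"d = {d:.2e} m -> detected signal {vpm_signal(U, V, d, wavelength):+.3f}")
```

Sweeping d rapidly therefore chops the sky's U and V signals at the modulation frequency, which is what enables the lock-in style rejection of slowly varying atmospheric and instrumental noise described above.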

  6. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  7. The progresses of superconducting technology for power grid last decade in China

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Liye; Gu, Hong Wei [Applied Superconductivity Laboratory, Chinese Academy of Sciences, Beijing (China)

    2015-03-15

    With the increasing development of renewable energy, it is expected that large-scale renewable power will be transported from the west and north areas of China to the east and south areas. For this reason, it will be necessary to develop a wide-area power grid in which renewable energy is the dominant power source, and this power grid will face some critical challenges, such as long-distance large-capacity power transmission, the stability of the wide-area power grid, and the land use problem. Superconducting technology for power (STP) is a possible alternative for the development of China’s future power grid. Over the last decade, STP has been extensively developed in China. In this paper, we present an overview of the R&D of STP in China over the last decade, including: 1) the development of high temperature superconducting (HTS) materials; 2) DC power cables; 3) superconducting power substations; 4) fault current limiters; and 5) superconducting magnetic energy storage (SMES).

  8. The progresses of superconducting technology for power grid last decade in China

    International Nuclear Information System (INIS)

    Xiao, Liye; Gu, Hong Wei

    2015-01-01

    With the increasing development of renewable energy, it is expected that large-scale renewable power will be transported from the west and north areas of China to the east and south areas. For this reason, it will be necessary to develop a wide-area power grid in which renewable energy is the dominant power source, and this power grid will face some critical challenges, such as long-distance large-capacity power transmission, the stability of the wide-area power grid, and the land use problem. Superconducting technology for power (STP) is a possible alternative for the development of China’s future power grid. Over the last decade, STP has been extensively developed in China. In this paper, we present an overview of the R&D of STP in China over the last decade, including: 1) the development of high temperature superconducting (HTS) materials; 2) DC power cables; 3) superconducting power substations; 4) fault current limiters; and 5) superconducting magnetic energy storage (SMES).

  9. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  10. Expanded Large-Scale Forcing Properties Derived from the Multiscale Data Assimilation System and Its Application to Single-Column Models

    Science.gov (United States)

    Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.

    2013-12-01

    We present an approach to derive large-scale forcing that is used to drive single-column models (SCMs) and cloud resolving models (CRMs)/large eddy simulation (LES) for evaluating fast physics parameterizations in climate models. The forcing fields are derived by use of a newly developed multi-scale data assimilation (MS-DA) system. This DA system is developed on top of the NCEP Gridpoint Statistical Interpolation (GSI) System and is implemented in the Weather Research and Forecasting (WRF) model at a cloud resolving resolution of 2 km. This approach has been applied to the generation of large scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracies comparable to the existing continuous forcing product and, overall, a better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multiscale forcing, which are not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing has an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependence on the domain size, which represents the SCM grid size. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid size and cloud regime.

  11. A scenario of vehicle-to-grid implementation and its double-layer optimal charging strategy for minimizing load variance within regional smart grids

    International Nuclear Information System (INIS)

    Jian, Linni; Zhu, Xinyu; Shao, Ziyun; Niu, Shuangxia; Chan, C.C.

    2014-01-01

    Highlights: • A scenario of vehicle-to-grid implementation within regional smart grids is discussed and mathematically formulated. • A double-layer optimal charging strategy for plug-in electric vehicles is proposed. • The proposed double-layer optimal charging algorithm aims to minimize the power grid’s load variance. • The performance of the proposed double-layer optimal charging algorithm is evaluated through a comparative study. - Abstract: As an emerging new electrical load, the impact of plug-in electric vehicles (PEVs) on the power grid has drawn increasing attention worldwide. An optimal scenario is that, by tapping the potential of PEVs as movable energy storage devices, they need not harm the power grid by, for example, triggering extreme surges in demand at rush hours; on the contrary, large-scale penetration of PEVs could benefit the grid by flattening the power load curve and hence increasing the stability, security and operating economy of the grid. This has become a hot topic, known as vehicle-to-grid (V2G) technology, within the framework of the smart grid. In this paper, a scenario of V2G implementation within regional smart grids is discussed. Then, the problem is mathematically formulated. It is essentially an optimization problem, and the objective is to minimize the overall load variance. As the scale of PEVs and charging posts involved increases, the computational complexity becomes very high. Therefore, a double-layer optimal charging (DLOC) strategy is proposed to solve this problem. A comparative study demonstrates that the proposed DLOC algorithm can effectively cope with the high computational complexity arising from the large numbers of PEVs and charging posts involved.
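
    The load-variance-minimization idea in the abstract can be sketched with a simple greedy valley-filling loop (a minimal illustration, not the paper's double-layer algorithm; the `valley_fill` helper, the load profile and all parameters are hypothetical):

```python
import numpy as np

def valley_fill(base_load, ev_energy, max_rate, dt=1.0):
    """Greedy valley-filling sketch: pour PEV charging energy into the
    hours with the lowest running total load, flattening the aggregate
    load curve (the load-variance objective described in the abstract)."""
    load = base_load.astype(float).copy()
    charge = np.zeros_like(load)
    step = ev_energy / 100.0          # charge in small energy increments
    remaining = ev_energy
    while remaining > 1e-9:
        inc = min(step, remaining)
        # pick the lowest-load hour that still has charging-rate headroom
        for h in np.argsort(load):
            if charge[h] + inc / dt <= max_rate:
                charge[h] += inc / dt
                load[h] += inc / dt
                remaining -= inc
                break
        else:
            break                      # no headroom anywhere: stop
    return charge, load

base = np.array([50.0, 42.0, 40.0, 45.0, 60.0, 80.0, 90.0, 70.0])  # MW, hypothetical
charge, total = valley_fill(base, ev_energy=60.0, max_rate=20.0)
print(total.var() < base.var())   # valley filling reduces the load variance
```

    For a convex objective such as load variance, filling the lowest-load hours first is the optimal single-layer strategy; the paper's double layer additionally splits the problem between a grid level and a charging-post level to keep the computation tractable at scale.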

  12. Screening wells by multi-scale grids for multi-stage Markov Chain Monte Carlo simulation

    DEFF Research Database (Denmark)

    Akbari, Hani; Engsig-Karup, Allan Peter

    2018-01-01

    /production wells, aiming at accurate breakthrough capturing as well as above mentioned efficiency goals. However this short time simulation needs fine-scale structure of the geological model around wells and running a fine-scale model is not as cheap as necessary for screening steps. On the other hand applying...... it on a coarse-scale model declines important data around wells and causes inaccurate results, particularly accurate breakthrough capturing which is important for prediction applications. Therefore we propose a multi-scale grid which preserves the fine-scale model around wells (as well as high permeable regions...... and fractures) and coarsens rest of the field and keeps efficiency and accuracy for the screening well stage and coarse-scale simulation, as well. A discrete wavelet transform is used as a powerful tool to generate the desired unstructured multi-scale grid efficiently. Finally an accepted proposal on coarse...
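
    The role of the wavelet transform in deciding which cells must stay fine can be illustrated with a one-level Haar transform on a 1D property field (a toy sketch; the `multiscale_flags` helper, the threshold and the permeability values are invented for illustration):

```python
import numpy as np

def multiscale_flags(perm, thresh):
    """One Haar level: large detail coefficients mark cell pairs with high
    local contrast (wells, fractures, high-permeability streaks); those
    pairs keep the fine grid, the rest are merged into coarse cells."""
    approx = (perm[0::2] + perm[1::2]) / 2.0   # Haar approximation (coarse cell value)
    detail = (perm[0::2] - perm[1::2]) / 2.0   # Haar detail (local contrast)
    keep_fine = np.abs(detail) > thresh
    return approx, keep_fine

perm = np.array([1.0, 1.0, 1.0, 1.0, 50.0, 1.0, 1.0, 1.0])  # hypothetical 1D permeability
approx, keep = multiscale_flags(perm, thresh=5.0)
print(keep)   # only the pair containing the high-permeability streak stays fine
```

    Applying the same test recursively per dimension yields an unstructured multi-scale grid that is fine near wells and high-contrast features and coarse elsewhere, in the spirit of the abstract.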

  13. Properties and uses of storage for enhancing the grid penetration of very large photovoltaic systems

    International Nuclear Information System (INIS)

    Solomon, A.A.; Faiman, D.; Meron, G.

    2010-01-01

    In this third paper, which studies the hourly generation data for the year 2006 from the Israel Electric Corporation, with a view to incorporating very large photovoltaic (PV) power plants, we address the question: What properties should storage have in order to enhance the grid penetration of large PV systems in an efficient and substantial manner? We first impose the constraint that no PV energy losses are permitted other than those due to storage inefficiency. This constraint leads to powerful linkages between the energy capacity and power capacity of storage, and PV system size, and their combined effect on grid penetration. Various strategies are then examined for enhancing grid penetration, based upon this newfound knowledge. Specific strategies examined include PV energy dumping and baseload rescheduling both on a seasonal basis and shorter time periods. We found, inter alia, that at high grid flexibilities (in the range ff=0.8-1), PV grid penetration levels could be possible in the range 60-90% of annual requirements. Moreover, with appropriately designed storage and accurate forecasting, a future grid could be operated at ff=1.
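
    The linkage between storage energy capacity, power capacity and PV grid penetration can be illustrated with a toy hourly dispatch loop (an assumption-laden sketch, not the paper's model; the `dispatch` helper and all numbers are hypothetical):

```python
def dispatch(demand, pv, e_cap, p_cap, eta=0.9):
    """Hourly dispatch sketch: PV serves demand first; surplus charges the
    store subject to its power cap (p_cap) and energy cap (e_cap); deficits
    discharge it. PV is only lost to storage inefficiency, mirroring the
    paper's no-dumping constraint. Returns the PV grid penetration
    (fraction of demand met by PV, directly or via storage)."""
    soc, served = 0.0, 0.0
    for d, s in zip(demand, pv):
        direct = min(d, s)
        served += direct
        surplus, deficit = s - direct, d - direct
        if surplus > 0:
            soc += eta * min(surplus, p_cap, (e_cap - soc) / eta)  # charge
        elif deficit > 0:
            out = min(deficit, p_cap, soc)                         # discharge
            soc -= out
            served += out
    return served / sum(demand)

# hypothetical day: midday PV surplus shifted into the evening by storage
pen = dispatch(demand=[3, 3, 3, 3], pv=[0, 6, 6, 0], e_cap=4.0, p_cap=3.0)
print(pen)
```

    Varying `e_cap` and `p_cap` in such a loop makes the linkage the abstract describes visible: with too little energy capacity the midday surplus is clipped, with too little power capacity the evening deficit cannot be covered.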

  14. Configuration monitoring tool for large-scale distributed computing

    International Nuclear Information System (INIS)

    Wu, Y.; Graham, G.; Lu, X.; Afaq, A.; Kim, B.J.; Fisk, I.

    2004-01-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN will likely use a grid system to achieve much of its offline processing need. Given the heterogeneous and dynamic nature of grid systems, it is desirable to have in place a configuration monitor. The configuration monitoring tool is built using the Globus toolkit and web services. It consists of an information provider for the Globus MDS, a relational database for keeping track of the current and old configurations, and client interfaces to query and administer the configuration system. The Grid Security Infrastructure (GSI), together with EDG Java Security packages, are used for secure authentication and transparent access to the configuration information across the CMS grid. This work has been prototyped and tested using US-CMS grid resources

  15. Configuration monitoring tool for large-scale distributed computing

    CERN Document Server

    Wu, Y; Fisk, I; Graham, G; Kim, B J; Lü, X

    2004-01-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN will likely use a grid system to achieve much of its offline processing need. Given the heterogeneous and dynamic nature of grid systems, it is desirable to have in place a configuration monitor. The configuration monitoring tool is built using the Globus toolkit and web services. It consists of an information provider for the Globus MDS, a relational database for keeping track of the current and old configurations, and client interfaces to query and administer the configuration system. The Grid Security Infrastructure (GSI), together with EDG Java Security packages, are used for secure authentication and transparent access to the configuration information across the CMS grid. This work has been prototyped and tested using US-CMS grid resources.

  16. Scaled lattice fermion fields, stability bounds, and regularity

    Science.gov (United States)

    O'Carroll, Michael; Faria da Veiga, Paulo A.

    2018-02-01

    We consider locally gauge-invariant lattice quantum field theory models with locally scaled Wilson-Fermi fields in d = 1, 2, 3, 4 spacetime dimensions. The use of scaled fermions preserves Osterwalder-Seiler positivity and the spectral content of the models (the decay rates of correlations are unchanged in the infinite lattice). In addition, it also results in less singular, more regular behavior in the continuum limit. Precisely, we treat general fermionic gauge and purely fermionic lattice models in an imaginary-time functional integral formulation. Starting with a hypercubic finite lattice Λ ⊂ (aZ)^d, a ∈ (0, 1], and considering the partition function of non-Abelian and Abelian gauge models (the free fermion case is included) neglecting the pure gauge interactions, we obtain stability bounds uniformly in the lattice spacing a ∈ (0, 1]. These bounds imply, at least in the subsequential sense, the existence of the thermodynamic (Λ ↗ (aZ)^d) and the continuum (a ↘ 0) limits. Specializing to the U(1) gauge group, the known non-intersecting loop expansion for the d = 2 partition function is extended to d = 3 and the thermodynamic limit of the free energy is shown to exist with a bound independent of a ∈ (0, 1]. In the case of scaled free Fermi fields (corresponding to a trivial gauge group with only the identity element), spectral representations are obtained for the partition function, free energy, and correlations. The thermodynamic and continuum limits of the free fermion free energy are shown to exist. The thermodynamic limit of n-point correlations also exists, with bounds independent of the point locations and a ∈ (0, 1], and with no n! dependence. Also, a time-zero Hilbert-Fock space is constructed, as well as time-zero, spatially pointwise scaled fermion creation operators which are shown to be norm bounded uniformly in a ∈ (0, 1]. The use of our scaled fields since the beginning allows us to extract and isolate the singularities of the free

  17. Parallel Implementation of the Multi-Dimensional Spectral Code SPECT3D on large 3D grids.

    Science.gov (United States)

    Golovkin, Igor E.; Macfarlane, Joseph J.; Woodruff, Pamela R.; Pereyra, Nicolas A.

    2006-10-01

    The multi-dimensional collisional-radiative, spectral analysis code SPECT3D can be used to study radiation from complex plasmas. SPECT3D can generate instantaneous and time-gated images and spectra, space-resolved and streaked spectra, which makes it a valuable tool for post-processing hydrodynamics calculations and direct comparison between simulations and experimental data. On large three dimensional grids, transporting radiation along lines of sight (LOS) requires substantial memory and CPU resources. Currently, the parallel option in SPECT3D is based on parallelization over photon frequencies and allows for a nearly linear speed-up for a variety of problems. In addition, we are introducing a new parallel mechanism that will greatly reduce memory requirements. In the new implementation, spatial domain decomposition will be utilized allowing transport along a LOS to be performed only on the mesh cells the LOS crosses. The ability to operate on a fraction of the grid is crucial for post-processing the results of large-scale three-dimensional hydrodynamics simulations. We will present a parallel implementation of the code and provide a scalability study performed on a Linux cluster.

  18. Scalability tests of R-GMA based Grid job monitoring system for CMS Monte Carlo data production

    CERN Document Server

    Bonacorsi, D; Field, L; Fisher, S; Grandi, C; Hobson, P R; Kyberd, P; MacEvoy, B; Nebrensky, J J; Tallini, H; Traylen, S

    2004-01-01

    High Energy Physics experiments such as CMS (Compact Muon Solenoid) at the Large Hadron Collider have unprecedented, large-scale data processing computing requirements, with data accumulating at around 1 Gbyte/s. The Grid distributed computing paradigm has been chosen as the solution to provide the requisite computing power. The demanding nature of CMS software and computing requirements, such as the production of large quantities of Monte Carlo simulated data, makes them an ideal test case for the Grid and a major driver for the development of Grid technologies. One important challenge when using the Grid for large-scale data analysis is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. In this paper we report on the first measurements of R-GMA as part of a monitoring architecture to be used for b...

  19. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, Alexey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  20. Analysis of Information Quality in event triggered Smart Grid Control

    DEFF Research Database (Denmark)

    Kristensen, Thomas le Fevre; Olsen, Rasmus Løvenstein; Rasmussen, Jakob Gulddahl

    2015-01-01

    The integration of renewable energy sources into the power grid requires added control intelligence which imposes new communication requirements onto the future power grid. Since large scale implementation of new communication infrastructure is infeasible, we consider methods of increasing...

  1. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    Science.gov (United States)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate models intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (multi-terabyte order) related to the output of several climate model simulations as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large scale distributed testbed across EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study relates to: (i) multi-model data analysis inter-comparison challenges; (ii) addressed on CMIP5 data; and (iii) which are made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final

  2. Correction: Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4

    DEFF Research Database (Denmark)

    Jensen, Søren Højgaard; Graves, Christopher R.; Mogensen, Mogens Bjerg

    2017-01-01

    Correction for ‘Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4’ by S. H. Jensen et al., Energy Environ. Sci., 2015, 8, 2471–2479.

  3. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to the precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3rd, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analysis/reanalysis all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere.
With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases on cloud properties which could not be fully explained by the uncertainty from the large-scale forcing

  4. AC HTS Transmission Cable for Integration into the Future EHV Grid of the Netherlands

    OpenAIRE

    Zuijderduin, R.; Chevtchenko, O.; Smit, J.J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    2012-01-01

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future grid must be capable of transmitting all the connected power. Power generation will become more decentralized, with, for instance, wind parks connected to the grid. Furthermore, future large-scale production units are expected to be installed near coastal regions. This creates potential grid issues, such as transmitting large amounts of power to consumers from west to east, and grid stability. High tem...

  5. Grids, Clouds and Virtualization

    CERN Document Server

    Cafaro, Massimo

    2011-01-01

    Research into grid computing has been driven by the need to solve large-scale, increasingly complex problems for scientific applications. Yet the applications of grid computing for business and casual users did not begin to emerge until the development of the concept of cloud computing, fueled by advances in virtualization techniques, coupled with the increased availability of ever-greater Internet bandwidth. The appeal of this new paradigm is mainly based on its simplicity, and the affordable price for seamless access to both computational and storage resources. This timely text/reference int

  6. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  7. Increased Productivity for Emerging Grid Applications the Application Support System

    CERN Document Server

    Maier, Andrew; Mendez Lorenzo, Patricia; Moscicki, Jakub; Lamanna, Massimo; Muraru, Adrian

    2008-01-01

    Recently a growing number of diverse applications have been quickly and successfully enabled on the Grid by the CERN Grid application support team. This has allowed the applications to achieve and publish large-scale results in a short time, which would otherwise not be possible. We present the general infrastructure, support procedures and tools that have been developed. We discuss the general patterns observed in supporting new applications and porting them to the EGEE environment. The CERN Grid application support team has been working with the following real-life applications: medical and particle physics simulation (Geant4, Garfield), satellite imaging and geographic information for humanitarian relief operations (UNOSAT), telecommunications (ITU), theoretical physics (Lattice QCD, Feynman-loop evaluation), Bio-informatics (Avian Flu Data Challenge), commercial imaging processing and classification (Imense Ltd.) and physics experiments (ATLAS, LHCb, HARP). Using the EGEE Grid we created a standard infrastruct...

  8. On temperature spectra in grid turbulence

    International Nuclear Information System (INIS)

    Jayesh; Tong, C.; Warhaft, Z.

    1994-01-01

    This paper reports wind tunnel measurements of passive temperature spectra in decaying grid-generated turbulence, both with and without a mean transverse temperature gradient. The measurements cover a turbulence Reynolds number range 60 ≲ Re_λ ≲ …. The remarkably low Reynolds number onset (Re_λ ∼ 70) of Kolmogorov–Obukhov–Corrsin scaling in isotropic grid turbulence is contrasted to the case of scalars in (anisotropic) shear flows, where KOC scaling only appears at very high Reynolds numbers (Re_λ ∼ 10^5). It is also shown that when the temperature fluctuations are inserted very close to the grid in the absence of a gradient (by means of a mandoline), the temperature spectrum behaves in a similar way to the linear gradient case, i.e., a spectrum with a scaling exponent close to -5/3 is observed, a result noted earlier in heated grid experiments. However, when the scalar is inserted farther downstream of the grid (in the fully developed turbulence), the spectrum has a scaling region of -1.3 and its dilation with Re is less well defined than for the other cases. The velocity spectrum is also shown to have a scaling region, of slope -1.3, and its onset occurs at higher Reynolds number than for the case of the scalar experiments that exhibit the KOC scaling.

  9. Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor

    2012-03-01

    We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly equal to that induced by the connected components of the estimated concentration graph, obtained by solving the graphical lasso problem for the same λ. This characterizes a very interesting property of a path of graphical lasso solutions. Furthermore, this simple rule, when used as a wrapper around existing algorithms for the graphical lasso, leads to enormous performance gains. For a range of values of λ, our proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem. We illustrate the graceful scalability of our proposal via synthetic and real-life microarray examples.
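
    The screening rule itself is simple to apply: threshold the off-diagonal entries of the sample covariance at λ and take connected components of the resulting graph (a sketch using NumPy and SciPy; the `screen_components` helper and the synthetic two-block data are illustrative, not from the paper):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def screen_components(S, lam):
    """Exact screening rule: connect i and j iff |S_ij| > lam (off-diagonal).
    The connected components equal those of the graphical lasso estimate at
    the same lam, so each component can be solved as a separate subproblem."""
    A = np.abs(S) > lam
    np.fill_diagonal(A, False)
    return connected_components(csr_matrix(A), directed=False)

# synthetic sample covariance with two independent variable blocks (sizes 3 and 2)
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(200, 1)), rng.normal(size=(200, 1))
X = np.hstack([z1 + 0.1 * rng.normal(size=(200, 3)),
               z2 + 0.1 * rng.normal(size=(200, 2))])
S = np.cov(X.T)
n_comp, labels = screen_components(S, lam=0.5)
print(n_comp, labels)   # the problem splits into the two blocks
```

    Running any off-the-shelf graphical lasso solver per component then reproduces the full solution, which is what makes the rule an effective wrapper for large problems.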

  10. Smart Grids. First results from French demonstrators - Summary

    International Nuclear Information System (INIS)

    Bertholon, Marion; Kerouedan, Anne-Fleur; Regner, Martin

    2016-10-01

    Since 2009, ADEME has played a key role in supporting the structuring of the smart grid sector. The Agency has helped to fund the first large-scale projects through the Investments for the Future Programme (PIA) steered by the General Commissariat for Investment (GCI). This summary tackles four fundamental themes, based on the experience from the 12 smart grid projects that were the most mature at the end of 2015: - promote demand-side management and load shedding; - favour the insertion of renewable energy; - anticipate the evolution of existing grids; - prefigure business models of smart grid solutions.

  11. The AgMIP GRIDded Crop Modeling Initiative (AgGRID) and the Global Gridded Crop Model Intercomparison (GGCMI)

    Science.gov (United States)

    Elliott, Joshua; Muller, Christoff

    2015-01-01

    Climate change is a significant risk for agricultural production. Even under optimistic scenarios for climate mitigation action, present-day agricultural areas are likely to face significant increases in temperatures in the coming decades, in addition to changes in precipitation, cloud cover, and the frequency and duration of extreme heat, drought, and flood events (IPCC, 2013). These factors will affect the agricultural system at the global scale by impacting cultivation regimes, prices, trade, and food security (Nelson et al., 2014a). Global-scale evaluation of crop productivity is a major challenge for climate impact and adaptation assessment. Rigorous global assessments that are able to inform planning and policy will benefit from consistent use of models, input data, and assumptions across regions and time that use mutually agreed protocols designed by the modeling community. To ensure this consistency, large-scale assessments are typically performed on uniform spatial grids, with spatial resolution of typically 10 to 50 km, over specified time-periods. Many distinct crop models and model types have been applied on the global scale to assess productivity and climate impacts, often with very different results (Rosenzweig et al., 2014). These models are based to a large extent on field-scale crop process or ecosystems models and they typically require resolved data on weather, environmental, and farm management conditions that are lacking in many regions (Bondeau et al., 2007; Drewniak et al., 2013; Elliott et al., 2014b; Gueneau et al., 2012; Jones et al., 2003; Liu et al., 2007; Müller and Robertson, 2014; Van den Hoof et al., 2011; Waha et al., 2012; Xiong et al., 2014). Due to data limitations, the requirements of consistency, and the computational and practical limitations of running models on a large scale, a variety of simplifying assumptions must generally be made regarding prevailing management strategies on the grid scale in both the baseline and

  12. Multigrid preconditioned conjugate-gradient method for large-scale wave-front reconstruction.

    Science.gov (United States)

    Gilles, Luc; Vogel, Curtis R; Ellerbroek, Brent L

    2002-09-01

    We introduce a multigrid preconditioned conjugate-gradient (MGCG) iterative scheme for computing open-loop wave-front reconstructors for extreme adaptive optics systems. We present numerical simulations for a 17-m class telescope with n = 48756 sensor measurement grid points within the aperture, which indicate that our MGCG method has a rapid convergence rate for a wide range of subaperture average slope measurement signal-to-noise ratios. The total computational cost is of order n log n. Hence our scheme provides for fast wave-front simulation and control in large-scale adaptive optics systems.
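
    The outer iteration in such a scheme is standard preconditioned conjugate gradients. A minimal sketch follows, with a simple Jacobi (diagonal) preconditioner standing in for the multigrid V-cycle of the paper, and a 1D Poisson matrix standing in for the wave-front reconstruction normal equations (the `pcg` helper, matrix and sizes are all hypothetical):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients. M_inv applies the preconditioner:
    a multigrid V-cycle in the paper's MGCG scheme, a plain Jacobi
    (diagonal) solve in this toy sketch."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# 1D Poisson test matrix (tridiagonal, symmetric positive definite)
n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
M_inv = lambda r: r / np.diag(A)   # Jacobi preconditioner (V-cycle in the real scheme)
x, iters = pcg(A, b, M_inv)
print(iters, np.linalg.norm(A @ x - b))
```

    Replacing the Jacobi solve with a multigrid V-cycle is what gives the paper's scheme its O(n log n) cost and mesh-independent iteration counts; the CG driver itself is unchanged.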

  13. Quantum cosmological origin of large scale structures of the universe

    International Nuclear Information System (INIS)

    Anini, Y.

    1989-07-01

    In this paper, the initial quantum state of matter perturbations about a de Sitter minisuperspace model is found. For a large class of boundary conditions (bcs), including those of Hartle-Hawking and Vilenkin, the resulting quantum state is the de Sitter invariant vacuum. This result is found to depend only on the regularity requirement at the Euclidean origin of spacetime, which is common to all reasonable bcs. The initial values of the density perturbations implied by these quantum fluctuations are found and evaluated at the initial horizon crossing. The perturbations are found to have an almost scale independent spectrum, and an amplitude which depends on the scale at which inflation took place. The amplitude would have the right value if the scale of inflation is H ≤ 10^15 GeV. (author). 9 refs

  14. Transmission Technologies and Operational Characteristic Analysis of Hybrid UHV AC/DC Power Grids in China

    Science.gov (United States)

    Tian, Zhang; Yanfeng, Gong

    2017-05-01

    In order to resolve the mismatch between the demand for and the distribution of primary energy resources, Ultra High Voltage (UHV) power grids should be developed rapidly to serve large energy bases and the connection of large-scale renewable energy. This paper reviews the latest research progress in AC/DC transmission technologies and summarizes the characteristics of AC/DC power grids, concluding that China’s power grids are entering a new period of large-scale hybrid UHV AC/DC operation, in which the characteristic of “strong DC and weak AC” becomes increasingly prominent. Possible problems in the operation of AC/DC power grids are discussed, and the interactions between the AC and DC grids are studied intensively. To address these problems, a preliminary scheme is summarized as follows: strengthening backbone structures, enhancing AC/DC transmission technologies, improving protection measures for grids with clean-energy access, and taking actions to solve voltage and frequency stability problems. These measures help hybrid UHV AC/DC power grids adapt to the operating mode of large power grids, thus guaranteeing the security and stability of the power system.

  15. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer based phenomenological model to simulate the characteristics of river flow velocity in large scale. We use shuttle radar tomography mission based digital elevation model in grid form to define the terrain of catchment area. The model relies on mass-momentum conservation law and modified equation of motion of falling body in inclined plane. We assume inelastic collision occurs at every junction of two river branches to describe the dynamics of merged flow velocity.
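    The two ingredients described above, an inelastic collision at each junction and a modified falling-body equation on an inclined plane, can be sketched as follows. The function names and the friction term are illustrative assumptions, not the paper's notation:

```python
import math

def merged_velocity(m1, v1, m2, v2):
    """Inelastic collision at a river junction: momentum is conserved,
    so the merged flow moves with the mass-weighted mean velocity."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

def incline_velocity(v0, slope_angle, length, g=9.81, friction=0.0):
    """Velocity after a reach of given length on an inclined plane,
    from v^2 = v0^2 + 2*a*L with a = g*(sin(theta) - friction*cos(theta)).
    The friction coefficient is a hypothetical extension."""
    a = g * (math.sin(slope_angle) - friction * math.cos(slope_angle))
    return math.sqrt(max(v0 * v0 + 2.0 * a * length, 0.0))

# Two branches merging: 2 mass units at 3 m/s meet 1 mass unit at 1.5 m/s.
v = merged_velocity(2.0, 3.0, 1.0, 1.5)
```

The mass-weighted mean follows directly from conserving momentum m1*v1 + m2*v2 while the merged mass is m1 + m2.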

  16. Efficient Pseudorecursive Evaluation Schemes for Non-adaptive Sparse Grids

    KAUST Repository

    Buse, Gerrit

    2014-01-01

    In this work we propose novel algorithms for storing and evaluating sparse grid functions, operating on regular (not spatially adaptive), yet potentially dimensionally adaptive grid types. Besides regular sparse grids, our approach includes truncated grids, both with and without boundary grid points. Similar to the implicit data structures proposed in Feuersänger (Dünngitterverfahren für hochdimensionale elliptische partielle Differentialgleichungen. Diploma Thesis, Institut für Numerische Simulation, Universität Bonn, 2005) and Murarasu et al. (Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming. Cambridge University Press, New York, 2011, pp. 25–34) we also define a bijective mapping from the multi-dimensional space of grid points to a contiguous index, such that the grid data can be stored in a simple array without overhead. Our approach is especially well-suited to exploit all levels of current commodity hardware, including cache-levels and vector extensions. Furthermore, this kind of data structure is extremely attractive for today’s real-time applications, as it gives direct access to the hierarchical structure of the grids, while outperforming other common sparse grid structures (hash maps, etc.) which do not map onto modern compute platforms that well. For dimensionality d ≤ 10 we achieve good speedups on a 12 core Intel Westmere-EP NUMA platform compared to the results presented in Murarasu et al. (Proceedings of the International Conference on Computational Science—ICCS 2012. Procedia Computer Science, 2012). As we show, this also holds for the results obtained on Nvidia Fermi GPUs, for which we observe speedups over our own CPU implementation of up to 4.5 when dealing with moderate dimensionality. In high-dimensional settings, in the order of tens to hundreds of dimensions, our sparse grid evaluation kernels on the CPU outperform any other known implementation.
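    The bijective mapping from sparse grid points to a contiguous index can be illustrated by explicit enumeration for a regular sparse grid without boundary points. The cited papers derive closed-form index formulas; this sketch only demonstrates the idea that every hierarchical (level, index) pair gets exactly one array slot:

```python
from itertools import product

def sparse_grid_points(dim, level):
    """Enumerate hierarchical (level, index) pairs of a regular sparse grid
    without boundary points: level vectors l >= 1 with |l|_1 <= level + dim - 1,
    and odd spatial indices i_k in 1..2^{l_k}-1 per direction."""
    points = []
    for levels in product(range(1, level + 1), repeat=dim):
        if sum(levels) > level + dim - 1:
            continue
        index_ranges = [range(1, 2 ** l, 2) for l in levels]
        for idx in product(*index_ranges):
            points.append((levels, idx))
    return points

pts = sparse_grid_points(dim=2, level=3)
# Grid point -> contiguous array slot; injective by construction,
# surjective because slots are assigned consecutively.
contiguous = {p: k for k, p in enumerate(pts)}
```

For d = 2 and level 3 this yields the expected 17 interior points; the grid coordinates themselves are recovered as x_k = i_k * 2^(-l_k).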

  17. The GridShare solution: a smart grid approach to improve service provision on a renewable energy mini-grid in Bhutan

    International Nuclear Information System (INIS)

    Quetchenbach, T G; Harper, M J; Jacobson, A E; Robinson IV, J; Hervin, K K; Chase, N A; Dorji, C

    2013-01-01

    This letter reports on the design and pilot installation of GridShares, devices intended to alleviate brownouts caused by peak power use on isolated, village-scale mini-grids. A team consisting of the authors and partner organizations designed, built and field-tested GridShares in the village of Rukubji, Bhutan. The GridShare takes an innovative approach to reducing brownouts by using a low cost device that communicates the state of the grid to its users and regulates usage before severe brownouts occur. This demand-side solution encourages users to distribute the use of large appliances more evenly throughout the day, allowing power-limited systems to provide reliable, long-term renewable electricity to these communities. In the summer of 2011, GridShares were installed in every household and business connected to the Rukubji micro-hydro mini-grid, which serves approximately 90 households with a 40 kW nominal capacity micro-hydro system. The installation was accompanied by an extensive education program. Following the installation of the GridShares, the occurrence and average length of severe brownouts, which had been caused primarily by the use of electric cooking appliances during meal preparation, decreased by over 92%. Additionally, the majority of residents surveyed stated that now they are more certain that their rice will cook well and that they would recommend installing GridShares in other villages facing similar problems. (letter)

  18. Research on the Method of Urban Waterlogging Flood Routing Based on Hexagonal Grid

    Directory of Open Access Journals (Sweden)

    LAI Guangling

    2016-12-01

    Full Text Available The evolution of urban waterlogging floods was studied in this paper based on hexagonal grid modeling. Using a discrete grid method, an urban geometry model was established on a regular multi-scale discrete grid. By fusing 3D topographic survey data with 2D building vector data, a regular surface network model was formed that accounts for urban features such as terrain and buildings. On this basis, a method of reverse flow deduction was proposed: an inverse computation from the observed state of the flood back to its evolution process. That is, based on the water depth of the flood, the connectivity with the outfall is used to calculate the extent of waterlogging, which then drives the urban waterlogging flood simulation. Tests indicated that this method can deduce the evolution of urban waterlogging scenarios effectively, and the associated research could provide a scientific basis for urban disaster prevention and emergency decision-making.
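    The reverse flow deduction step, delimiting the waterlogged area via its connectivity with the outfall at a given water depth, amounts to a flood fill over the hexagonal grid. A sketch in Python using axial hex coordinates; this representation and the uniform water level are illustrative assumptions, not the paper's multi-scale model:

```python
from collections import deque

# Axial-coordinate neighbours of a hexagonal cell (q, r).
HEX_NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def waterlogged_extent(elevation, outfall, water_level):
    """Collect all hexagonal cells connected to the outfall whose ground
    elevation lies at or below the given water level.
    `elevation` maps (q, r) -> ground height; unknown cells block flow."""
    if elevation.get(outfall, float("inf")) > water_level:
        return set()
    flooded = {outfall}
    queue = deque([outfall])
    while queue:
        q, r = queue.popleft()
        for dq, dr in HEX_NEIGHBOURS:
            cell = (q + dq, r + dr)
            if cell not in flooded and elevation.get(cell, float("inf")) <= water_level:
                flooded.add(cell)
                queue.append(cell)
    return flooded

# Tiny terrain: a low corridor leading away from the outfall, walled elsewhere.
terrain = {(0, 0): 1.0, (1, 0): 1.5, (2, 0): 1.2, (0, 1): 5.0, (1, -1): 6.0}
extent = waterlogged_extent(terrain, (0, 0), water_level=2.0)
```

A breadth-first traversal is used so the extent grows outward from the outfall, mirroring the inverse deduction from flood state to flooded range.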

  19. {sup 10}B multi-grid proportional gas counters for large area thermal neutron detectors

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, K. [ESS, P.O. Box 176, SE-221 00 Lund (Sweden); Bigault, T. [ILL, BP 156, 6, rue Jules Horowitz, 38042 Grenoble Cedex 9 (France); Birch, J. [Linköping University, SE-581, 83 Linköping (Sweden); Buffet, J. C.; Correa, J. [ILL, BP 156, 6, rue Jules Horowitz, 38042 Grenoble Cedex 9 (France); Hall-Wilton, R. [ESS, P.O. Box 176, SE-221 00 Lund (Sweden); Hultman, L. [Linköping University, SE-581, 83 Linköping (Sweden); Höglund, C. [ESS, P.O. Box 176, SE-221 00 Lund (Sweden); Linköping University, SE-581, 83 Linköping (Sweden); Guérard, B., E-mail: guerard@ill.fr [ILL, BP 156, 6, rue Jules Horowitz, 38042 Grenoble Cedex 9 (France); Jensen, J. [Linköping University, SE-581, 83 Linköping (Sweden); Khaplanov, A. [ILL, BP 156, 6, rue Jules Horowitz, 38042 Grenoble Cedex 9 (France); ESS, P.O. Box 176, SE-221 00 Lund (Sweden); Kirstein, O. [Linköping University, SE-581, 83 Linköping (Sweden); Piscitelli, F.; Van Esch, P. [ILL, BP 156, 6, rue Jules Horowitz, 38042 Grenoble Cedex 9 (France); Vettier, C. [ESS, P.O. Box 176, SE-221 00 Lund (Sweden)

    2013-08-21

    {sup 3}He was a popular material in neutron detectors until its availability dropped drastically in 2008. The development of techniques based on alternative convertors is now of high priority for neutron research institutes. Thin films of {sup 10}B or {sup 10}B{sub 4}C have been used in gas proportional counters to detect neutrons, but until now, only for small or medium sensitive areas. We present here the multi-grid design, introduced at the ILL and developed in collaboration with ESS for LAN (large area neutron) detectors. Typically thirty {sup 10}B{sub 4}C films of 1 μm thickness are used to convert neutrons into ionizing particles which are subsequently detected in a proportional gas counter. The principle and the fabrication of the multi-grid are described and some preliminary results obtained with a prototype of 200 cm×8 cm are reported; a detection efficiency of 48% has been measured at 2.5 Å with a monochromatic neutron beam line, showing the good potential of this new technique.

  20. Real-Time Market Concept Architecture for EcoGrid EU—A Prototype for European Smart Grids

    DEFF Research Database (Denmark)

    Ding, Yi; Pineda Morente, Salvador; Nyeng, Preben

    2014-01-01

    Industrialized countries are increasingly committed to move towards a low carbon generating mix by increasing the penetration of renewable generation. Additionally, developments in communication technologies will allow small end-consumers and small-scale distributed energy resources (DER......) to participate in electricity markets. Current electricity markets need to be tailored to incorporate these changes regarding how electricity will be generated and consumed in the future. The EcoGrid EU is a large-scale EU-funded project, which establishes the first prototype of the future European intelligent...... grids. In this project, small-scale DERs and small end-consumers can actively participate in a new real-time electricity market by responding to 5-min real time electricity prices. In this way, the market operator will also obtain additional balancing power to cancel out the production variation...

  1. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32: The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987. Approved for public release. Surviving abstract fragments indicate that the expansion aims, among other goals, to reduce the spread in the LSGT 50% gap value, and that the worst charges in this respect are those with the highest or lowest densities.

  2. A Heuristic Approach to Author Name Disambiguation in Bibliometrics Databases for Large-scale Research Assessments

    NARCIS (Netherlands)

    D'Angelo, C.A.; Giuffrida, C.; Abramo, G.

    2011-01-01

    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because

  3. Scaling of spectra in grid turbulence with a mean cross-stream temperature gradient

    Science.gov (United States)

    Bahri, Carla; Arwatz, Gilad; Mueller, Michael E.; George, William K.; Hultmark, Marcus

    2014-11-01

    Scaling of grid turbulence with a constant mean cross-stream temperature gradient is investigated using a combination of theoretical predictions, DNS, and experimental data. Conditions for self-similarity of the governing equations and the scalar spectrum are investigated, which reveals necessary conditions for self-similarity to exist. These conditions provide a theoretical framework for scaling of the temperature spectrum as well as the temperature flux spectrum. One necessary condition, predicted by the theory, is that the characteristic length scale describing the scalar spectrum must vary as √t for a self-similar solution to exist. In order to investigate this, T-NSTAP sensors, specially designed for temperature measurements at high frequencies, were deployed in a heated passive grid turbulence setup together with conventional cold-wires, and complementary DNS calculations were performed to complete the experimental data. These data are used to compare the behavior of different length scales and validate the theoretical predictions.

  4. Gridded National Inventory of U.S. Methane Emissions

    Science.gov (United States)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; hide

    2016-01-01

    We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
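    The core disaggregation step, allocating a national total for one source type across grid cells in proportion to finer-scale activity data, can be sketched as a proxy-weighted allocation. The single-proxy form below is a deliberate simplification of the inventory's per-source-type procedure, which combines many state, county, and point-source databases:

```python
import numpy as np

def disaggregate(national_total, proxy):
    """Allocate a national emissions total onto a grid in proportion to a
    spatial proxy (e.g. well counts or livestock density per cell).
    Cells with zero proxy activity receive zero emissions; the grid
    sum reproduces the national total by construction."""
    weights = proxy / proxy.sum()
    return national_total * weights

# Hypothetical 2x2 grid of activity data for one source type.
proxy = np.array([[0.0, 2.0],
                  [3.0, 5.0]])
grid = disaggregate(100.0, proxy)
```

Because the weights are normalized to one, mass is conserved exactly; temporal allocation can be handled the same way with monthly weight vectors.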

  5. AGIS: The ATLAS Grid Information System

    OpenAIRE

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm, with a high degree of decentralization and computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present ATLAS Grid Information System (AGIS) designed to integrate configurat...

  6. Development of a multi-grid FDTD code for three-dimensional simulation of large microwave sintering experiments

    Energy Technology Data Exchange (ETDEWEB)

    White, M.J.; Iskander, M.F. [Univ. of Utah, Salt Lake City, UT (United States). Electrical Engineering Dept.; Kimrey, H.D. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The Finite-Difference Time-Domain (FDTD) code available at the University of Utah has been used to simulate sintering of ceramics in single and multimode cavities, and many useful results have been reported in literature. More detailed and accurate results, specifically around and including the ceramic sample, are often desired to help evaluate the adequacy of the heating procedure. In electrically large multimode cavities, however, computer memory requirements limit the number of the mathematical cells, and the desired resolution is impractical to achieve due to limited computer resources. Therefore, an FDTD algorithm which incorporates multiple-grid regions with variable-grid sizes is required to adequately perform the desired simulations. In this paper the authors describe the development of a three-dimensional multi-grid FDTD code to help focus a large number of cells around the desired region. Test geometries were solved using a uniform-grid and the developed multi-grid code to help validate the results from the developed code. Results from these comparisons, as well as the results of comparisons between the developed FDTD code and other available variable-grid codes are presented. In addition, results from the simulation of realistic microwave sintering experiments showed improved resolution in critical sites inside the three-dimensional sintering cavity. With the validation of the FDTD code, simulations were performed for electrically large, multimode, microwave sintering cavities to fully demonstrate the advantages of the developed multi-grid FDTD code.

  7. Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping

    Science.gov (United States)

    Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.

    2017-12-01

    Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be at regional to national levels and cover long time periods. As a result of large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat (16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive combined with other satellite, climate, and weather data is creating never imagined opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field scales.

  8. Symmetry-preserving regularization of wall-bounded turbulent flows

    International Nuclear Information System (INIS)

    Trias, F X; Gorobets, A; Oliva, A; Verstappen, R W C P

    2011-01-01

    The incompressible Navier-Stokes equations constitute an excellent mathematical model of turbulence. Unfortunately, attempts at performing direct simulations are limited to relatively low Reynolds numbers because of the almost numberless small scales produced by the non-linear convective term. Alternatively, a dynamically less complex formulation is proposed here. Namely, regularizations of the Navier-Stokes equations that preserve the symmetry and conservation properties exactly. To do so, both the convective and diffusive terms are altered in the same vein. In this way, the convective production of small scales is effectively restrained whereas the modified diffusive term introduces a hyper-viscosity effect and consequently enhances the destruction of small scales. In practice, the only additional ingredient is a self-adjoint linear filter whose local filter length is determined from the requirement that vortex-stretching must stop at the smallest grid scale. To do so, a new criterion based on the invariants of the local strain tensor is proposed here. Altogether, the proposed method constitutes a parameter-free turbulence model.

  9. Improved visibility computation on massive grid terrains

    NARCIS (Netherlands)

    Fishman, J.; Haverkort, H.J.; Toma, L.; Wolfson, O.; Agrawal, D.; Lu, C.-T.

    2009-01-01

    This paper describes the design and engineering of algorithms for computing visibility maps on massive grid terrains. Given a terrain T, specified by the elevations of points in a regular grid, and given a viewpoint v, the visibility map or viewshed of v is the set of grid points of T that are
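    A viewshed computation of this kind reduces to a line-of-sight test: a grid point is visible if no intermediate terrain rises above the sight line from the viewpoint. A naive sketch with nearest-cell sampling along the line follows; the paper's I/O-efficient algorithms for massive terrains are far more sophisticated, and the slope-comparison criterion here is a standard simplification:

```python
import math

def visible(elev, viewer, target, eye_height=0.0):
    """Line-of-sight test on a grid DEM: `target` (row, col) is visible from
    `viewer` if the vertical angle to every intermediate cell stays strictly
    below the angle to the target. Elevations along the sight line are taken
    from the nearest grid cell."""
    (r0, c0), (r1, c1) = viewer, target
    h0 = elev[r0][c0] + eye_height
    dist = math.hypot(r1 - r0, c1 - c0)
    if dist == 0:
        return True
    target_slope = (elev[r1][c1] - h0) / dist
    steps = int(dist * 2)  # sample roughly twice per cell traversed
    for i in range(1, steps):
        t = i / steps
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        if (r, c) == (r1, c1):
            continue
        d = t * dist
        if (elev[r][c] - h0) / d > target_slope:
            return False  # an intermediate cell blocks the sight line
    return True

# Flat terrain with a single 5-unit peak in the middle.
dem = [[0, 0, 0],
       [0, 5, 0],
       [0, 0, 0]]
```

Repeating this test for every grid point of T against a viewpoint v yields the viewshed; production implementations instead sweep rays or use external-memory layouts to avoid the quadratic per-pair cost on massive terrains.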

  10. The Impact of the Topology on Cascading Failures in a Power Grid Model

    NARCIS (Netherlands)

    Koç, Y.; Warnier, M.; Mieghem, P. van; Kooij, R.E.; Brazier, F.M.T.

    2014-01-01

    Cascading failures are one of the main reasons for large scale blackouts in power transmission grids. Secure electrical power supply requires, together with careful operation, a robust design of the electrical power grid topology. Currently, the impact of the topology on grid robustness is mainly

  11. Large-scale production and study of a synthetic G protein-coupled receptor: Human olfactory receptor 17-4

    Science.gov (United States)

    Cook, Brian L.; Steuerwald, Dirk; Kaiser, Liselotte; Graveland-Bikker, Johanna; Vanberghem, Melanie; Berke, Allison P.; Herlihy, Kara; Pick, Horst; Vogel, Horst; Zhang, Shuguang

    2009-01-01

    Although understanding of the olfactory system has progressed at the level of downstream receptor signaling and the wiring of olfactory neurons, the system remains poorly understood at the molecular level of the receptors and their interaction with and recognition of odorant ligands. The structure and functional mechanisms of these receptors still remain a tantalizing enigma, because numerous previous attempts at the large-scale production of functional olfactory receptors (ORs) have not been successful to date. To investigate the elusive biochemistry and molecular mechanisms of olfaction, we have developed a mammalian expression system for the large-scale production and purification of a functional OR protein in milligram quantities. Here, we report the study of human OR17-4 (hOR17-4) purified from a HEK293S tetracycline-inducible system. Scale-up of production yield was achieved through suspension culture in a bioreactor, which enabled the preparation of >10 mg of monomeric hOR17-4 receptor after immunoaffinity and size exclusion chromatography, with expression yields reaching 3 mg/L of culture medium. Several key post-translational modifications were identified using MS, and CD spectroscopy showed the receptor to be ≈50% α-helix, similar to other recently determined G protein-coupled receptor structures. Detergent-solubilized hOR17-4 specifically bound its known activating odorants lilial and floralozone in vitro, as measured by surface plasmon resonance. The hOR17-4 also recognized specific odorants in heterologous cells as determined by calcium ion mobilization. Our system is feasible for the production of large quantities of OR necessary for structural and functional analyses and research into OR biosensor devices. PMID:19581598

  12. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  13. Maturity grids as tools for change management

    DEFF Research Database (Denmark)

    Maier, Anja; Moultrie, James; Clarkson, P John

    2011-01-01

    A maturity grid is a change management tool. Levels of maturity are assigned against aspects of an area under study, thus creating a grid. Text descriptions at the resulting intersections describe the typical behaviour exhibited by a firm for each area under study and form the basis...... for the assessment scale. It is a flexible assessment technique that is used by practitioners in industry, consultants and researchers in academia for diagnostic, reflective and improvement purposes. A large number of maturity grids have been proposed to assess a range of capabilities including quality management...

  14. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF...... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given....
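    The increasingly refined grid search over the kernel scale and the regularization parameter can be sketched as follows. The log-spaced grid, the halving of the search widths per round, and the toy SNR surface are all illustrative assumptions, not details from the paper:

```python
import numpy as np

def refine_grid_search(snr, center, widths, steps=3, rounds=3):
    """Maximise a model SNR over (kernel scale, regularisation parameter)
    on a log-spaced grid, re-centering and halving the grid widths around
    the best point after each round. `snr` is the user-supplied objective."""
    best = center
    for _ in range(rounds):
        s_exp = np.log10(best[0])
        l_exp = np.log10(best[1])
        grid = [(s, lam)
                for s in np.logspace(s_exp - widths[0], s_exp + widths[0], 2 * steps + 1)
                for lam in np.logspace(l_exp - widths[1], l_exp + widths[1], 2 * steps + 1)]
        best = max(grid, key=lambda p: snr(*p))
        widths = (widths[0] / 2, widths[1] / 2)
    return best

# Toy SNR surface peaked at scale = 1.0, lambda = 0.01 (a stand-in for the
# model SNR of regularized kernel MNF).
toy_snr = lambda s, lam: -(np.log10(s) ** 2 + (np.log10(lam) + 2) ** 2)
s_opt, lam_opt = refine_grid_search(toy_snr, center=(10.0, 1.0), widths=(2.0, 3.0))
```

Three rounds of a 7x7 log grid evaluate only 147 points yet localize the optimum far more finely than a single dense grid of the same budget would.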

  15. Grid Integration of Offshore Wind Farms via VSC-HVDC – Dynamic Stability Study

    DEFF Research Database (Denmark)

    Liu, Hongzhi

    farms could seriously impact the operation and stability of their interconnected power system. To assist in maintaining the power system stability when large disturbances occur in the grid, modern offshore wind farms consisting of variable-speed wind turbines are required to provide ancillary services...... such as voltage and frequency control. The greater distance to shore makes commonly used high voltage AC (HVAC) connection unsuitable economically and technically for large offshore wind farms. Alternatively, voltage source converter (VSC)-based high voltage DC (HVDC) transmission becomes more attractive...... and practical to integrate large-scale offshore wind farms into the onshore power grid, owing to its high capacity, advanced controllability and stabilization potential for AC networks etc. In this dissertation, some of the key technical issues with grid integration of large-scale offshore wind farms via VSC...

  16. Distributed Geant4 simulation in medical and space science applications using DIANE framework and the GRID

    CERN Document Server

    Moscicki, J T; Mantero, A; Pia, M G

    2003-01-01

    Distributed computing is one of the most important trends in IT which has recently gained significance for large-scale scientific applications. Distributed analysis environment (DIANE) is a R&D study, focusing on semi-interactive parallel and remote data analysis and simulation, which has been conducted at CERN. DIANE provides the necessary software infrastructure for parallel scientific applications in the master-worker model. Advanced error recovery policies, automatic book-keeping of distributed jobs and on-line monitoring and control tools are provided. DIANE makes transparent use of a number of different middleware implementations such as load balancing service (LSF, PBS, GRID Resource Broker, Condor) and security service (GSI, Kerberos, openssh). A number of distributed Geant4 simulations have been deployed and tested, ranging from interactive radiotherapy treatment planning using dedicated clusters in hospitals, to globally-distributed simulations of astrophysics experiments using the European data g...

  17. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960s and early 1970s an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  18. Cognitive Radio for Smart Grid with Security Considerations

    Directory of Open Access Journals (Sweden)

    Khaled Shuaib

    2016-04-01

    Full Text Available In this paper, we investigate how Cognitive Radio as a means of communication can be utilized to serve a smart grid deployment end to end, from a home area network to power generation. We show how Cognitive Radio can be mapped to integrate the possible different communication networks within a smart grid large scale deployment. In addition, various applications in smart grid are defined and discussed showing how Cognitive Radio can be used to fulfill their communication requirements. Moreover, information security issues pertained to the use of Cognitive Radio in a smart grid environment at different levels and layers are discussed and mitigation techniques are suggested. Finally, the well-known Role-Based Access Control (RBAC is integrated with the Cognitive Radio part of a smart grid communication network to protect against unauthorized access to customer’s data and to the network at large.

  19. GRID : unlimited computing power on your desktop Conference MT17

    CERN Multimedia

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use, it allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  20. Implementation of large-scale average geostrophic wind shear in WAsP12.1

    DEFF Research Database (Denmark)

    Floors, Rogier Ralph; Troen, Ib; Kelly, Mark C.

    The vertical extrapolation model described in the European Wind Atlas Troen and Petersen (1989) is modified to take into account large-scale average geostrophic wind shear to describe the effect of horizontal temperature gradients on the geostrophic wind. The method is implemented by extracting...... the average geostrophic wind shear from Climate Forecast System Reanalysis (CFSR) data and the values of nearest grid point are automatically used in the WAsP 12.1 user interface to provide better AEP predictions....

  1. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamics and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamics and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  2. 9{sup th} international workshop on large-scale integration of wind power into power systems as well as on transmission networks for offshore wind power plants. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Betancourt, Uta; Ackermann, Thomas (eds.)

    2010-07-01

    Within the 9th International Workshop on large-scale integration of wind power into power systems as well as on transmission networks for offshore wind power plants, held on 18 to 19 October 2010 in Quebec (Canada), lectures and poster papers were presented on the following themes: (1) Keynote session and panel; (2) European grid integration studies; (3) Modeling; (4) Wind forecasting; (5) North American grid integration studies; (6) Voltage stability and control; (7) Grid codes and impact studies; (8) Canadian University research (WESNet); (9) Operation and dispatch; (10) Offshore wind power plants; (11) Frequency control; (12) Methodologies to estimate wind power impacts on power systems, summaries from IEAWIND collaboration; (13) HVDC; (14) Grid codes and system impact studies; (15) Modeling and validation; (16) Regulations, markets and offshore wind energy; (17) Integration issues; (18) Wind turbine control systems; (19) Energy management and IT solutions.

  3. Performance evaluation of 10 MW grid connected solar photovoltaic power plant in India

    OpenAIRE

    B. Shiva Kumar; K. Sudhakar

    2015-01-01

    The growing energy demand in developing nations has triggered the issue of energy security. This has made it essential to utilize the untapped potential of renewable resources. Grid connected PV systems have become the best alternative in renewable energy at large scale. Performance analysis of these grid connected plants could help in designing, operating and maintaining new grid connected systems. A 10 MW photovoltaic grid connected power plant commissioned at Ramagundam is one of the larg...

  4. Large Scale Demand Response of Thermostatic Loads

    DEFF Research Database (Denmark)

    Totu, Luminita Cristiana

    This study is concerned with large populations of residential thermostatic loads (e.g. refrigerators, air conditioning or heat pumps). The purpose is to gain control over the aggregate power consumption in order to provide balancing services for the electrical grid. Without affecting the temperat...... The control architecture is defined by parsimonious communication requirements that also provide a high level of data privacy, and it furthermore guarantees robust and secure local operation. Mathematical models are put forward, and the effectiveness is shown by numerical simulations. A case study of 10000...

  5. Mapping the distribution of the denitrifier community at large scales (Invited)

    Science.gov (United States)

    Philippot, L.; Bru, D.; Ramette, A.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.

    2010-12-01

    Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31 500 km2 region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 740 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scale can help close the artificial gap between the investigation of microbial processes and microbial community ecology, therefore facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.

  6. Development of the Large-Scale Statistical Analysis System of Satellites Observations Data with Grid Datafarm Architecture

    Science.gov (United States)

    Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.

    2006-12-01

    In the Solar-Terrestrial Physics (STP) field, the amount of satellite observation data has been increasing every year. It is necessary to solve the following three problems to achieve large-scale statistical analyses of such amounts of data. (i) More CPU power and larger memory and disk sizes are required, but the total power of personal computers is not enough to analyze such amounts of data; super-computers provide high-performance CPUs and rich memory, but they are usually separated from the Internet or connected only for programming or data file transfer. (ii) Most of the observation data files are managed at distributed data sites over the Internet, so users have to know where the data files are located. (iii) Since no common data format is available in the STP field, users have to prepare a reading program for each dataset themselves. To overcome problems (i) and (ii), we constructed a parallel and distributed data analysis environment based on Gfarm, the reference implementation of the Grid Datafarm architecture. Gfarm shares computational resources and performs parallel distributed processing. In addition, Gfarm provides the Gfarm filesystem, which can be accessed as a virtual directory tree among nodes. The Gfarm environment is composed of three parts: a metadata server to manage distributed file information, filesystem nodes to provide computational resources, and a client that submits jobs to the metadata server and manages data processing schedules. In the present study, both data files and data processes are parallelized on the Gfarm with 6 filesystem nodes; each node has a Pentium V 1 GHz CPU, 256 MB memory and a 40 GB disk. To evaluate the performance of the present Gfarm system, we scanned many data files, each about 300 MB in size, in three processing modes: sequential processing in one node, sequential processing by each node, and parallel processing by each node. As a result, in comparison between the
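
    The three processing modes compared in this record can be emulated on a single machine. This is a hedged sketch using Python threads, with small invented in-memory stand-ins for the ~300 MB data files; it is not the actual Gfarm setup, only an illustration of the sequential-versus-parallel scan comparison.

```python
# Emulate "sequential in one node" vs. "parallel by each node" file scans.
# scan() and the in-memory datasets are invented stand-ins for reading
# and processing one satellite data file.
from concurrent.futures import ThreadPoolExecutor

def scan(data):
    # stand-in for scanning one data file
    return sum(data)

def scan_sequential(datasets):
    return [scan(d) for d in datasets]

def scan_parallel(datasets, workers=6):
    # 6 workers, mirroring the 6 filesystem nodes in the record
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(scan, datasets))   # map preserves input order

datasets = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]
assert scan_sequential(datasets) == scan_parallel(datasets)
```

    Both modes must produce identical results; the comparison in the record is then purely about wall-clock throughput as work is spread over nodes.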

  7. Block Fusion on Dynamically Adaptive Spacetree Grids for Shallow Water Waves

    KAUST Repository

    Weinzierl, Tobias

    2014-09-01

    © 2014 World Scientific Publishing Company. Spacetrees are a popular formalism to describe dynamically adaptive Cartesian grids. Even though they directly yield a mesh, it is often computationally reasonable to embed regular Cartesian blocks into their leaves. This promotes stencils working on homogeneous data chunks. The choice of a proper block size is sensitive. While large block sizes foster loop parallelism and vectorisation, they restrict the adaptivity's granularity and hence increase the memory footprint and lower the numerical accuracy per byte. In the present paper, we therefore use a multiscale spacetree-block coupling admitting blocks on all spacetree nodes. We propose to find sets of blocks on the finest scale throughout the simulation and to replace them by fused big blocks. Such a replacement strategy can pick up hardware characteristics, i.e. which block size yields the highest throughput, while the dynamic adaptivity of the fine grid mesh is not constrained: applications can work with fine granular blocks. We study the fusion with a state-of-the-art shallow water solver on an Intel Sandy Bridge and a Xeon Phi processor, where we anticipate their reaction to selected block optimisation and vectorisation.

  8. ASP - Grid connections of large power generating units; ASP - Anslutning av stoerre produktionsanlaeggningar till elnaetet

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, Aake; Larsson, Richard [Vattenfall Power Consultants, Stockholm (Sweden)

    2006-12-15

    Grid connections of large power generating units normally require more detailed studies compared to small single units. The required R and D level depends on the specific characteristics of the production units and the connecting grid. An inquiry for a grid connection will raise questions for the grid owner regarding transmission capability, losses, fault currents, relay protection, dynamic stability etc. Since only a few larger wind farms have been built, the experience from these types of grid connections is limited, and for that reason it can be difficult to identify issues appropriate for further studies. To ensure that electric power generating units do not have an unacceptable impact on the grid, directions from the Swedish TSO (Svenska Kraftnaet) have been stated. The directions deal, for example, with power generation in specific ranges of voltage level and frequency and the possibility to remain connected to the grid when different faults occur. The requirements and the consequences of these directions are illustrated. There are three main issues that should be considered: influence on the power flow from generating units regarding voltage level, currents, losses etc.; the fault current levels resulting from the different types of electric systems in generating units, which have to be studied; and the requirement that generating units remain connected to the grid in different modes of operation and faults, which has to be verified. Load flow and dynamic studies normally demand computer models. Comprehensive models, for instance of wind farms, can be difficult to design and normally large computer capacity is required. Therefore simplified methods to perform relevant studies are described. How to model an electric power generating unit regarding fault currents and dynamic stability is described. An inquiry for a grid connection normally brings about a discussion concerning administration. To make it

  9. Integration of HTS Cables in the Future Grid of the Netherlands

    OpenAIRE

    Zuijderduin, R.; Chevchenko, O.; Smit, J.J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    2012-01-01

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future transmission grid will obtain electrical power generated by decentralized renewable sources, together with large scale generation units located at the coastal region. In this way electrical power has to be distributed and transmitted over longer distances from generation to end user. Potential grid issues like: amount of distributed power, grid stability and electrical loss dissipation merit particu...

  10. Insights into Tikhonov regularization: application to trace gas column retrieval and the efficient calculation of total column averaging kernels

    Directory of Open Access Journals (Sweden)

    T. Borsdorff

    2014-02-01

    Full Text Available Insights are given into Tikhonov regularization and its application to the retrieval of vertical column densities of atmospheric trace gases from remote sensing measurements. The study builds upon the equivalence of the least-squares profile-scaling approach and Tikhonov regularization method of the first kind with an infinite regularization strength. Here, the vertical profile is expressed relative to a reference profile. On the basis of this, we propose a new algorithm as an extension of the least-squares profile scaling which permits the calculation of total column averaging kernels on arbitrary vertical grids using an analytic expression. Moreover, we discuss the effective null space of the retrieval, which comprises those parts of a vertical trace gas distribution which cannot be inferred from the measurements. Numerically the algorithm can be implemented in a robust and efficient manner. In particular for operational data processing with challenging demands on processing time, the proposed inversion method in combination with highly efficient forward models is an asset. For demonstration purposes, we apply the algorithm to CO column retrieval from simulated measurements in the 2.3 μm spectral region and to O3 column retrieval from the UV. These represent ideal measurements of a series of spaceborne spectrometers such as SCIAMACHY, TROPOMI, GOME, and GOME-2. For both spectral ranges, we consider clear-sky and cloudy scenes where clouds are modelled as an elevated Lambertian surface. Here, the smoothing error for the clear-sky and cloudy atmosphere is significant and reaches several percent, depending on the reference profile which is used for scaling. This underlines the importance of the column averaging kernel for a proper interpretation of retrieved column densities. Furthermore, we show that the smoothing due to regularization can be underestimated by calculating the column averaging kernel on a too coarse vertical grid. For both
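
    The least-squares profile-scaling retrieval and its analytic total column averaging kernel, as described in this record, can be sketched numerically. The Jacobian K, the reference profile x_ref and the grid sizes below are invented toy inputs, not values from the paper.

```python
# Profile scaling: the state is one scale factor c applied to a reference
# profile x_ref, fitted to the measurement y in least squares. The total
# column averaging kernel A then has a closed analytic form.
import numpy as np

rng = np.random.default_rng(0)
n_spec, n_lay = 50, 12
K = rng.uniform(0.5, 1.5, (n_spec, n_lay))   # toy Jacobian (spectra x layers)
x_ref = np.linspace(2.0, 0.5, n_lay)         # toy reference profile

def retrieve_column(y):
    """Fit y ~ c * (K @ x_ref) in least squares; return the column c*sum(x_ref)."""
    m = K @ x_ref
    c = (m @ y) / (m @ m)
    return c * x_ref.sum()

# Analytic total column averaging kernel: A_j = d(column)/d(x_true_j)
m = K @ x_ref
A = x_ref.sum() * (m @ K) / (m @ m)

# When the true profile is proportional to x_ref, the retrieval is exact,
# and applying A to the true profile reproduces the retrieved column.
x_true = 1.7 * x_ref
assert np.isclose(retrieve_column(K @ x_true), x_true.sum())
assert np.isclose(A @ x_true, retrieve_column(K @ x_true))
```

    For true profiles not proportional to x_ref, the mismatch between A @ x_true and the true column is the smoothing error the record discusses, which is why the averaging kernel matters for interpreting retrieved columns.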

  11. Clutter-free Visualization of Large Point Symbols at Multiple Scales by Offset Quadtrees

    Directory of Open Access Journals (Sweden)

    ZHANG Xiang

    2016-08-01

    Full Text Available To address the cartographic problems in map mash-up applications in the Web 2.0 context, this paper studies a clutter-free technique for visualizing large symbols on Web maps. Basically, a quadtree is used to select one symbol in each grid cell at each zoom level. To resolve symbol overlaps between neighboring quad-grids, multiple offsets are applied to the quadtree and a voting strategy is used to compute the significance level of symbols for their selection at multiple scales. The method is able to resolve spatial conflicts without explicit conflict detection, thus enabling highly efficient processing. Also, the resulting map forms a visual hierarchy of semantic importance. We discuss issues such as relative importance, symbol-to-grid size ratio, and effective offset schemes, and propose two extensions to make better use of the free space available on the map. Experiments were carried out to validate the technique, which demonstrates its robustness and efficiency (a non-optimal implementation leads to sub-second processing for datasets of 10^5 magnitude).
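
    The offset-and-vote selection scheme in this record can be sketched as follows; the cell size, the half-cell offsets and the importance-based tie-breaker are illustrative assumptions, not details from the paper.

```python
# One symbol is kept per grid cell; running the selection under several
# grid offsets and counting how often each symbol survives yields its
# significance level. Symbols are (x, y, importance) tuples.
from collections import defaultdict

def select_per_cell(symbols, cell, dx=0.0, dy=0.0):
    """Keep the most important symbol in each (offset) grid cell."""
    best = {}
    for x, y, importance in symbols:
        key = (int((x + dx) // cell), int((y + dy) // cell))
        if key not in best or importance > best[key][2]:
            best[key] = (x, y, importance)
    return set(best.values())

def vote(symbols, cell, offsets):
    votes = defaultdict(int)
    for dx, dy in offsets:
        for s in select_per_cell(symbols, cell, dx, dy):
            votes[s] += 1
    return votes   # higher vote count = higher significance level

symbols = [(1, 1, 5), (2, 2, 9), (8, 8, 3)]
offsets = [(0, 0), (2, 0), (0, 2), (2, 2)]   # half-cell shifts for cell=4
v = vote(symbols, cell=4, offsets=offsets)
# (2,2,9) and the isolated (8,8,3) survive under every offset; (1,1,5)
# only survives when an offset separates it from (2,2,9).
assert v[(2, 2, 9)] == 4 and v[(8, 8, 3)] == 4 and v[(1, 1, 5)] == 3
```

    Note there is no pairwise overlap test anywhere: conflicts are resolved implicitly by the grid, which is what makes the approach fast.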

  12. Simulation of the large-scale offshore-wind farms including HVDC-grid connections using the simulation tool VIAvento

    Energy Technology Data Exchange (ETDEWEB)

    Bartelt, R.; Heising, C.; Ni, B. [Avasition GmbH, Dortmund (Germany); Zadeh, M. Koochack; Lebioda, T.J.; Jung, J. [TenneT Offshore GmbH, Bayreuth (Germany)

    2012-07-01

    Within the framework of a research project the stability of the offshore grid especially in terms of sub-harmonic stability for the likely future extension stage of the offshore grids i.e. having parallel connection of two or more HVDC links and for certain operating scenarios e.g. overload scenario will be investigated. For this purpose, a comprehensive scenario-based assessment in time domain is unavoidable. Within this paper, the simulation tool VIAvento is briefly presented which allows for these comprehensive time-domain simulations taking the special characteristics of power-electronic assets into account. The core maxims of VIAvento are presented. Afterwards, the capability of VIAvento is demonstrated with simulation results of two wind farms linked via a HVDC grid connection system (160 converters and two HVDC stations in modular multilevel converter topology). (orig.)

  13. Research on wind power grid-connected operation and dispatching strategies of Liaoning power grid

    Science.gov (United States)

    Han, Qiu; Qu, Zhi; Zhou, Zhi; He, Xiaoyang; Li, Tie; Jin, Xiaoming; Li, Jinze; Ling, Zhaowei

    2018-02-01

    As a kind of clean energy, wind power has developed rapidly in recent years. Liaoning Province has abundant wind resources and its total installed capacity of wind power is at the forefront. With large-scale wind power grid-connected operation, the contradiction between wind power utilization and peak load regulation of the power grid has become more prominent. To address this, starting with the power structure and installed capacity of the Liaoning power grid, the distribution and the space-time output characteristics of wind farms, the prediction accuracy, the curtailment and the off-grid situation of wind power are analyzed. Based on a deep analysis of the seasonal characteristics of the power network load, the composition and distribution of the main load are presented. Aiming at the problem of balancing wind power acceptance and power grid adjustment, scheduling strategies are given, including unit maintenance scheduling, spinning reserve and energy storage equipment settings, based on analysis of the operation characteristics and response times of thermal and hydroelectric units, which can meet the demand of wind power acceptance and provide a solution to improve the level of power grid dispatching.

  14. Security on the US Fusion Grid

    Energy Technology Data Exchange (ETDEWEB)

    Burruss, Justin R.; Fredian, Tom W.; Thompson, Mary R.

    2005-06-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This work has led to the development of the US Fusion Grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large US fusion research facilities and with users both in the US and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER.

  15. Security on the US Fusion Grid

    International Nuclear Information System (INIS)

    Burruss, Justin R.; Fredian, Tom W.; Thompson, Mary R.

    2005-01-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This work has led to the development of the US Fusion Grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large US fusion research facilities and with users both in the US and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER

  16. Security on the US fusion grid

    International Nuclear Information System (INIS)

    Burruss, J.R.; Fredian, T.W.; Thompson, M.R.

    2006-01-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This has led to the development of the U.S. fusion grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large U.S. fusion research facilities and with users both in the U.S. and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER

  17. Optimal variable-grid finite-difference modeling for porous media

    International Nuclear Information System (INIS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-01-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite-difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs. (paper)
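
    The Taylor-expansion step mentioned in this record (solving for FD coefficients from accuracy conditions) can be sketched for a uniform staggered first-derivative stencil. This is only the standard coefficient derivation; the variable-grid, optimal-dispersion and poroelastic aspects of the paper are not reproduced here.

```python
# Derive staggered-grid first-derivative FD coefficients of order 2M by
# matching Taylor expansions: the stencil is
#   f'(x) ~ (1/h) * sum_m c_m [f(x+(m-1/2)h) - f(x-(m-1/2)h)],
# which leads to the linear conditions
#   2 * sum_m c_m (m-1/2)^(2k-1) / (2k-1)! = delta_{k,1},  k = 1..M.
import math
import numpy as np

def staggered_fd_coeffs(M):
    A = np.array([[2 * (m - 0.5) ** (2 * k - 1) / math.factorial(2 * k - 1)
                   for m in range(1, M + 1)]
                  for k in range(1, M + 1)])
    b = np.zeros(M)
    b[0] = 1.0                       # only the f' term survives
    return np.linalg.solve(A, b)

# Sanity checks against the classic stencils:
assert np.allclose(staggered_fd_coeffs(1), [1.0])          # 2nd order
assert np.allclose(staggered_fd_coeffs(2), [9/8, -1/24])   # 4th order
```

    In a variable-grid scheme of the kind the record describes, such coefficient sets would be computed per region (fine grid in slow or small-scale zones, coarse grid elsewhere) and coupled at the grid-spacing transitions.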

  18. Rapid and large-scale synthesis of Co3O4 octahedron particles with very high catalytic activity, good supercapacitance and unique magnetic property

    CSIR Research Space (South Africa)

    Chowdhury, M

    2015-12-01

    Full Text Available Scarcity of rapid and large-scale synthesis routes for functional materials hinders progress from laboratory scale to commercial applications. In this study, we report a rapid and large-scale synthesis of micron-sized Co3O4 octahedron particles (1.3 µm...

  19. The power of the offshore (super-) grid in advancing marine regionalization

    NARCIS (Netherlands)

    Jay, S.A.; Toonen, H.M.

    2015-01-01

    Large scale and transnational electricity grids facilitate balancing capacity across the areas that they serve and increase potential for energy trading. Offshore grids and the more ambitious notion of supergrids are beginning to play a significant part, especially in Europe, in the realization

  20. Parallel processing and non-uniform grids in global air quality modeling

    NARCIS (Netherlands)

    Berkvens, P.J.F.; Bochev, Mikhail A.

    2002-01-01

    A large-scale global air quality model, running efficiently on a single vector processor, is enhanced to make more realistic and more long-term simulations feasible. Two strategies are combined: non-uniform grids and parallel processing. The communication through the hierarchy of non-uniform grids

  1. Integration of HTS Cables in the Future Grid of the Netherlands

    NARCIS (Netherlands)

    Zuijderduin, R.; Chevchenko, O.; Smit, J.J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    2012-01-01

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future transmission grid will obtain electrical power generated by decentralized renewable sources, together with large scale generation units located at the coastal region. In this way electrical power has to be

  2. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    Energy Technology Data Exchange (ETDEWEB)

    Lenormand, R.; Thiele, M.R. [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to the permeability heterogeneity and viscosity fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, it is derived using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternate method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the "MHD" equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as functions of D, H and M. The main result is that this approach leads to time-dependent pseudo relative permeabilities. Finally, the calculated pseudo relative permeabilities are compared to the values derived by history matching using fine-grid numerical simulations.

  3. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Full Text Available Abstract Background Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for forming communities that share tacit knowledge. Conclusion Extending the concept of the grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  4. An insulating grid spacer for large-area MICROMEGAS chambers

    International Nuclear Information System (INIS)

    Bernard, D.; Delagrange, H.; D'Enterria, D.G.; Guay, M.L.M. Le; Martinez, G.; Mora, M.J.; Pichot, P.; Roy, D.; Schutz, Y.; Gandi, A.; Oliveira, R. de

    2002-01-01

    We present a novel design for large-area gaseous detectors based on the MICROMEGAS technology. This technology incorporates an insulating grid, sandwiched between the micro-mesh and the anode-pad plane, which provides a uniform 200 μm amplification gap. The uniformity of the amplification gap thickness has been verified. The gain performances of the detector are presented and compared to the values obtained with detectors using cylindrical micro spacers. The new design presents several technical and financial advantages

  5. Spectrally-consistent regularization modeling of turbulent natural convection flows

    International Nuclear Information System (INIS)

    Trias, F Xavier; Gorobets, Andrey; Oliva, Assensi; Verstappen, Roel

    2012-01-01

    The incompressible Navier-Stokes equations constitute an excellent mathematical model of turbulence. Unfortunately, attempts at performing direct simulations are limited to relatively low Reynolds numbers because of the vast number of small scales produced by the non-linear convective term. Alternatively, a dynamically less complex formulation is proposed here. Namely, regularizations of the Navier-Stokes equations that preserve the symmetry and conservation properties exactly. To do so, both convective and diffusive terms are altered in the same vein. In this way, the convective production of small scales is effectively restrained whereas the modified diffusive term introduces a hyperviscosity effect and consequently enhances the destruction of small scales. In practice, the only additional ingredient is a self-adjoint linear filter whose local filter length is determined from the requirement that vortex-stretching must stop at the smallest grid scale. In the present work, the performance of the above-mentioned recent improvements is assessed through application to turbulent natural convection flows by means of comparison with DNS reference data.

  6. AC HTS Transmission Cable for Integration into the Future EHV Grid of the Netherlands

    NARCIS (Netherlands)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J.J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    2012-01-01

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future grid must be capable of transmitting all the connected power. Power generation will become more decentralized, with, for instance, wind parks connected to the grid. Furthermore, future large scale production units

  7. The application of liquid air energy storage for large scale long duration solutions to grid balancing

    Science.gov (United States)

    Brett, Gareth; Barnett, Matthew

    2014-12-01

    Liquid Air Energy Storage (LAES) provides large scale, long duration energy storage at the point of demand in the 5 MW/20 MWh to 100 MW/1,000 MWh range. LAES combines mature components from the industrial gas and electricity industries, assembled in a novel process, and is one of the few storage technologies that can be delivered at large scale with no geographical constraints. The system uses no exotic materials or scarce resources, and all major components have a proven lifetime of 25+ years. The system can also integrate low-grade waste heat to increase power output. Founded in 2005, Highview Power Storage is a UK-based developer of LAES. The company has taken the concept from academic analysis, through laboratory testing, and in 2011 commissioned the world's first fully integrated system at pilot-plant scale (300 kW/2.5 MWh), hosted at SSE's (Scottish & Southern Energy) 80 MW biomass plant in Greater London and partly funded by a Department of Energy and Climate Change (DECC) grant. Highview is now working with commercial customers to deploy multi-MW commercial reference plants in the UK and abroad.

  8. Coordinated learning of grid cell and place cell spatial and temporal properties: multiple scales, attention and oscillations.

    Science.gov (United States)

    Grossberg, Stephen; Pilly, Praveen K

    2014-02-05

    A neural model proposes how entorhinal grid cells and hippocampal place cells may develop as spatial categories in a hierarchy of self-organizing maps (SOMs). The model responds to realistic rat navigational trajectories by learning both grid cells with hexagonal grid firing fields of multiple spatial scales, and place cells with one or more firing fields, that match neurophysiological data about their development in juvenile rats. Both grid and place cells can develop by detecting, learning and remembering the most frequent and energetic co-occurrences of their inputs. The model's parsimonious properties include: similar ring attractor mechanisms process linear and angular path integration inputs that drive map learning; the same SOM mechanisms can learn grid cell and place cell receptive fields; and the learning of the dorsoventral organization of multiple spatial scale modules through medial entorhinal cortex to hippocampus (HC) may use mechanisms homologous to those for temporal learning through lateral entorhinal cortex to HC ('neural relativity'). The model clarifies how top-down HC-to-entorhinal attentional mechanisms may stabilize map learning, simulates how hippocampal inactivation may disrupt grid cells, and explains data about theta, beta and gamma oscillations. The article also compares the three main types of grid cell models in the light of recent data.
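
The learning principle summarized above, spatial categories formed by a self-organizing map that detects frequent co-occurrences of its inputs, can be illustrated with a generic Kohonen SOM. The sketch below is not the authors' multi-map grid/place-cell model; it is only the standard best-matching-unit update rule with a decaying learning rate and neighbourhood, applied to 2-D inputs.

```python
import math, random

def train_som(data, n_units=8, epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D self-organizing map on 2-D input vectors.

    For each input, the best-matching unit (BMU) is the unit whose weight
    vector is closest in Euclidean distance; the BMU and its neighbours on
    the 1-D map are pulled toward the input, with a Gaussian neighbourhood
    kernel and learning rate that both shrink over epochs.
    Returns the list of weight vectors, one per map unit."""
    rng = random.Random(seed)
    w = [[rng.random(), rng.random()] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                 # decaying learning rate
        sigma = max(0.5, sigma0 * (1.0 - epoch / epochs)) # shrinking neighbourhood
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: (w[i][0] - x[0]) ** 2 + (w[i][1] - x[1]) ** 2)
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                w[i][0] += lr * h * (x[0] - w[i][0])
                w[i][1] += lr * h * (x[1] - w[i][1])
    return w
```

After training on inputs that cluster in a few regions, individual units come to act as categories for the most frequent input co-occurrences, which is the SOM property the model builds on.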

  9. Diverse assemblies of the (4,4) grid layers exemplified in Zn(II)/Co(II) coordination polymers with dual linear ligands

    International Nuclear Information System (INIS)

    Liu, Guang-Zhen; Li, Xiao-Dong; Xin, Ling-Yun; Li, Xiao-Ling; Wang, Li-Ya

    2013-01-01

    Diverse (4,4) grid layers are exemplified in five two-dimensional coordination polymers with dual µ₂-bridged ligands, namely, ([Zn(cbaa)(bpp)]·H₂O)ₙ (1), [Zn₂(cbaa)₂(bpy)]ₙ (2), [Co₂(cbaa)₂(bpp)₂]ₙ (3), [Co(cbaa)(bpp)]ₙ (4), and [Co(bdaa)(bpp)(H₂O)₂]ₙ (5) (H₂cbaa = 4-carboxybenzeneacetic acid, bpp = 1,3-di(4-pyridyl)propane, bpy = 4,4′-bipyridyl, and H₂bdaa = 1,4-benzenediacrylic acid). For 1, two (4,4) grid layers with the [ZnN₂O₂] tetrahedron as the node are held together by lattice water, forming a H-bonded bilayer. The individual (4,4) grid layer in 2 is based on the (Zn₂(OCO)₄) paddlewheel unit as the node. Two (4,4) grid layers with the (Co₂O(OCO)₂) dimer as the node are covalently interconnected by organic ligands, affording a thick bilayer of 3 with a new framework topology. The different entanglements between two coincident (4,4) grid layers with the [CoN₂O₄] octahedron as the node lead to two 2D→2D interpenetrated structures for 4 and 5. Furthermore, the fluorescent properties of 1 and 2 as well as the magnetic properties of 3 are investigated. - Graphical abstract: Diverse assemblies of the (4,4) grid layers with different network nodes form five coordination polymers that are well characterized by IR, TGA, elemental analysis, and fluorescent and magnetic measurements. - Highlights: • Diverse assemblies of the (4,4) grid layers with different structural units as the nodes. • A new topology type with the uninodal 6-connected net of (4¹².5².6) is found. • Intense fluorescence emissions with a rare blue-shift of 55 nm compared to the free carboxylate ligand

  10. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    Science.gov (United States)

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
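
The sufficient condition above is expressed through the Laplacian eigenvalues of the nominal interconnection. A minimal sketch, assuming a ring of n nodes in which each node connects to its k nearest neighbours on each side (a circulant graph, so the eigenvalues have a closed form), computes those eigenvalues and the classical λ₂/λ_max eigen-ratio from the deterministic synchronization literature. The ratio is only a convenient proxy here, not the paper's stochastic synchronization margin.

```python
import math

def ring_laplacian_eigs(n, k):
    """Laplacian eigenvalues of the circulant graph C_n(1..k): a ring of n
    nodes, each linked to its k nearest neighbours on each side. For a
    circulant graph the eigenvalues are available in closed form:
        lambda_m = 2k - 2 * sum_{j=1..k} cos(2*pi*m*j / n),  m = 0..n-1."""
    eigs = []
    for m in range(n):
        lam = 2 * k - 2 * sum(math.cos(2 * math.pi * m * j / n)
                              for j in range(1, k + 1))
        eigs.append(lam)
    return sorted(eigs)

def eig_ratio(n, k):
    """lambda_2 / lambda_max: a classical proxy for how easily a diffusively
    coupled network synchronizes (larger is better)."""
    eigs = ring_laplacian_eigs(n, k)
    return eigs[1] / eigs[-1]
```

For a fixed ring size, increasing the number of neighbours raises this deterministic eigen-ratio; the paper's interior optimum in k arises only once link uncertainty enters the margin.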

  11. Scaling up HIV viral load - lessons from the large-scale implementation of HIV early infant diagnosis and CD4 testing.

    Science.gov (United States)

    Peter, Trevor; Zeh, Clement; Katz, Zachary; Elbireer, Ali; Alemayehu, Bereket; Vojnov, Lara; Costa, Alex; Doi, Naoko; Jani, Ilesh

    2017-11-01

    The scale-up of effective HIV viral load (VL) testing is an urgent public health priority. Implementation of testing is supported by the availability of accurate, nucleic acid based laboratory and point-of-care (POC) VL technologies and strong WHO guidance recommending routine testing to identify treatment failure. However, test implementation faces challenges related to the developing health systems in many low-resource countries. The purpose of this commentary is to review the challenges and solutions from the large-scale implementation of other diagnostic tests, namely nucleic-acid based early infant HIV diagnosis (EID) and CD4 testing, and to identify key lessons to inform the scale-up of VL. Experience with EID and CD4 testing provides many key lessons to inform VL implementation and may enable more effective and rapid scale-up. The primary lessons from earlier implementation efforts are to strengthen linkage to clinical care after testing and to improve the efficiency of testing. Opportunities to improve linkage include data systems to support the follow-up of patients through the cascade of care and test delivery, rapid sample referral networks, and POC tests. Opportunities to increase testing efficiency include improvements to procurement and supply chain practices; well-connected, tiered laboratory networks with rational deployment of test capacity across different levels of health services; routine resource mapping and mobilization to ensure adequate resources for testing programs; and improved operational and quality management of testing services. If applied to VL testing programs, these approaches could help improve the impact of VL on ART failure management and patient outcomes, reduce overall costs, help ensure sustainable access to reduced pricing for test commodities, and strengthen supportive health systems through efficient and more rigorous quality assurance. These lessons draw from traditional laboratory practices as well as fields

  12. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.

    2008-01-01

    The European Internet Accessibility project (EIAO) has developed an Observatory for performing large scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget...... of web pages have been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded...... challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements....

  13. Upscaling of Large-Scale Transport in Spatially Heterogeneous Porous Media Using Wavelet Transformation

    Science.gov (United States)

    Moslehi, M.; de Barros, F.; Ebrahimi, F.; Sahimi, M.

    2015-12-01

    Modeling flow and solute transport in large-scale heterogeneous porous media involves substantial computational burdens. A common approach to alleviating this complexity is to utilize upscaling methods. These generate upscaled models with less complexity while attempting to preserve hydrogeological properties comparable to the original fine-scale model. We use Wavelet Transformations (WT) of the spatial distribution of the aquifer's properties to upscale the hydrogeological models and, consequently, the transport processes. In particular, we apply the technique to a porous formation with broadly distributed and correlated transmissivity to verify the performance of the WT. First, transmissivity fields are coarsened using WT in such a way that the high-transmissivity zones, in which the more important information is embedded, mostly remain the same, while the low-transmissivity zones are averaged out, since they contain less information about the hydrogeological formation. Next, flow and non-reactive transport are simulated in both the fine-scale and upscaled models to predict both the concentration breakthrough curves at a control location and the large-scale spreading of the plume around its centroid. The results reveal that the WT of the fields generates non-uniform grids with, on average, 2.1% of the number of grid blocks in the original fine-scale models, which leads to a significant reduction in computational costs. We show that the upscaled model obtained through the WT reconstructs the concentration breakthrough curves and the spreading of the plume at different times accurately. Furthermore, the impacts of the Hurst coefficient, the size of the flow domain, and orders-of-magnitude differences in transmissivity values on the results have been investigated. It is observed that as the heterogeneity and the size of the domain increase, better agreement between the results of the fine-scale and upscaled models is achieved.
Having this framework at hand aids
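
The coarsening idea described above, keep fine detail where transmissivity is high and average it out where it is low, can be sketched with a simple block-based rule. The code below is an illustrative stand-in for the paper's wavelet transformation, not the authors' method: the `threshold` parameter and the fixed 2x2 blocking are assumptions made for the sketch.

```python
def adaptive_coarsen(field, threshold):
    """One level of wavelet-style adaptive coarsening of a 2-D field
    (list of lists with even dimensions).

    Each 2x2 block whose maximum value is below `threshold` is replaced by
    its mean (one coarse cell, low-information zone averaged out); blocks
    containing any high value keep their four fine cells.

    Returns (blocks, n_cells): `blocks` maps the (row, col) index of each
    2x2 block to either a float (coarsened) or the original 2x2 values,
    and `n_cells` counts the cells in the resulting non-uniform grid."""
    blocks, n_cells = {}, 0
    for i in range(0, len(field), 2):
        for j in range(0, len(field[0]), 2):
            vals = [field[i][j], field[i][j + 1],
                    field[i + 1][j], field[i + 1][j + 1]]
            if max(vals) < threshold:
                blocks[(i // 2, j // 2)] = sum(vals) / 4.0   # averaged out
                n_cells += 1
            else:
                blocks[(i // 2, j // 2)] = [vals[:2], vals[2:]]  # kept fine
                n_cells += 4
    return blocks, n_cells
```

On a field with one high-transmissivity corner, only that corner retains its fine cells, so the cell count drops well below the original grid size, mirroring the reduction the abstract reports.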

  14. Air-chemistry "turbulence": power-law scaling and statistical regularity

    Directory of Open Access Journals (Sweden)

    H.-m. Hsu

    2011-08-01

    With the intent to gain further knowledge of the spectral structures and statistical regularities of surface atmospheric chemistry, the chemical gases (NO, NO2, NOx, CO, SO2, and O3) and aerosol (PM10) measured at 74 air quality monitoring stations over the island of Taiwan are analyzed for the year 2004 at hourly resolution. They represent a range of surface air quality with a mixed combination of geographic settings, including urban/rural, coastal/inland, plain/hill, and industrial/agricultural locations. In addition to the well-known semi-diurnal and diurnal oscillations, weekly and intermediate (20~30 day) peaks are also identified with the continuous wavelet transform (CWT). The spectra indicate power-law scaling regions for the frequencies higher than the diurnal and those lower than the diurnal, with average exponents of −5/3 and −1, respectively. These dual exponents are corroborated by those from detrended fluctuation analysis in the corresponding time-lag regions. The exponents are mostly independent of the averages and standard deviations of the time series measured at the various geographic settings, i.e., of the spatial inhomogeneities; in other words, they possess dominant universal structures. After the spectral coefficients from the CWT decomposition are grouped according to spectral band and inverted separately, the PDFs of the reconstructed time series for the high-frequency band consistently demonstrate an interesting statistical regularity: −3 power-law scaling in the heavy tails. Such spectral peaks, dual-exponent structures, and power-law scaling in heavy tails are important structural information, but their relations to turbulence and mesoscale variability require further investigation. This could lead to a better understanding of the processes controlling air quality.
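
The detrended fluctuation analysis used to corroborate the spectral exponents can be sketched in a few lines. This is a standard first-order DFA (cumulative mean-removed profile, piecewise linear detrending in non-overlapping windows, log-log slope), not the authors' exact pipeline or window choices.

```python
import math, random

def dfa(x, scales):
    """First-order detrended fluctuation analysis.
    Returns (scale, F(scale)) pairs; the slope of log F vs log scale
    estimates the scaling exponent alpha."""
    mean = sum(x) / len(x)
    y, s = [], 0.0
    for v in x:                      # integrated, mean-removed profile
        s += v - mean
        y.append(s)
    out = []
    for n in scales:
        f2, nwin = 0.0, 0
        for start in range(0, len(y) - n + 1, n):   # non-overlapping windows
            seg = y[start:start + n]
            t = list(range(n))
            tm, sm = (n - 1) / 2.0, sum(seg) / n
            beta = (sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
                    / sum((ti - tm) ** 2 for ti in t))
            a0 = sm - beta * tm      # least-squares linear trend of the window
            f2 += sum((si - (a0 + beta * ti)) ** 2 for ti, si in zip(t, seg)) / n
            nwin += 1
        out.append((n, math.sqrt(f2 / nwin)))
    return out

def dfa_exponent(x, scales):
    """Scaling exponent alpha from a log-log fit of F(scale)."""
    pts = dfa(x, scales)
    lx = [math.log(n) for n, _ in pts]
    ly = [math.log(F) for _, F in pts]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))
```

White noise yields alpha near 0.5 and its running sum (a random walk) near 1.5, which is the sanity check usually run before applying DFA to measured series such as the hourly pollutant records above.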

  15. Synthesis of ordered large-scale ZnO nanopore arrays

    International Nuclear Information System (INIS)

    Ding, G.Q.; Shen, W.Z.; Zheng, M.J.; Fan, D.H.

    2006-01-01

    An effective approach is demonstrated for growing ordered large-scale ZnO nanopore arrays through radio-frequency magnetron sputtering deposition on porous alumina membranes (PAMs). The realization of highly ordered hexagonal ZnO nanopore arrays benefits from the unique properties of ZnO (hexagonal structure, polar surfaces, and preferable growth directions) and PAMs (controllable hexagonal nanopores and localized negative charges). Further evidence has been shown through the effects of nanorod size and thermal treatment of PAMs on the yielded morphology of ZnO nanopore arrays. This approach opens the possibility of creating regular semiconducting nanopore arrays for application in filters, sensors, and templates.

  16. Numerical aspects of drift kinetic turbulence: Ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    KAUST Repository

    Samtaney, Ravi

    2012-01-01

    of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism

  17. GridWise Standards Mapping Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bosquet, Mia L.

    2004-04-01

    "GridWise" is a concept of how advanced communications, information, and control technologies can transform the nation's energy system--across the spectrum from large-scale central generation to common consumer appliances and equipment--into a collaborative network, rich in the exchange of decision-making information and in market-based opportunities (Widergren and Bosquet 2003), bringing the electric transmission and distribution system fully into the information and telecommunication age. This report summarizes a broad review of standards efforts related to GridWise--those that could ultimately contribute significantly to advancement toward the GridWise vision, and those that represent today's technological basis upon which this vision must build.

  18. State-of-the-art of large scale biogas plants

    International Nuclear Information System (INIS)

    Prisum, J.M.; Noergaard, P.

    1992-01-01

    A survey of the technological state of large scale biogas plants in Europe treating manure is given. 83 plants are in operation at present. Of these, 16 are centralised digestion plants. Transport costs at centralised digestion plants amount to between 25 and 40 percent of the total operational costs. Various types of transport equipment are used. Most large scale digesters are CSTRs, but serial, contact, 2-step, and plug-flow digesters are also found. Construction materials are mostly steel and concrete. Mesophilic digestion is most common (56%), thermophilic digestion is used in 17% of the plants, and combined mesophilic and thermophilic digestion is used in 28% of the centralised plants. Mixing of the digester content is performed with gas injection, propellers, and gas-liquid displacement. Heating is carried out using external or internal heat exchangers. Heat recovery is used only in Denmark. Gas purification equipment is commonplace, but not often needed. Several plants use separation of the digested manure, often as part of a post-treatment/-purification process or for the production of 'compost'. Screens, sieve belt separators, centrifuges, and filter presses are employed. The use of biogas varies considerably. In some cases, combined heat and power stations supply the grid and district heating systems. Other plants use only the electricity or only the heat. (au)

  19. Financial Derivatives Market for Grid Computing

    CERN Document Server

    Aubert, David; Lindset, Snorre; Huuse, Henning

    2007-01-01

    This Master's thesis studies the feasibility and properties of a financial derivatives market for Grid computing, a service for sharing computing resources over a network such as the Internet. For the European Organization for Nuclear Research (CERN) to perform research with the world's largest and most complex machine, the Large Hadron Collider (LHC), Grid computing was developed to handle the information created. In accordance with the mandate of the CERN Technology Transfer (TT) group, this thesis is part of CERN's dissemination of Grid technology. The thesis gives a brief overview of the use of Grid technology and where it is heading. IT trend analysts and large-scale IT vendors see this technology as key to transforming the world of IT. They predict that in a matter of years, IT will be bought as a service instead of as a good. Commoditization of IT, delivered as a service, is a paradigm shift that will have a broad impact on all parts of the IT market, as well as on society as a whole. Political, e...

  20. Operation strategy for a lab-scale grid-connected photovoltaic generation system integrated with battery energy storage

    International Nuclear Information System (INIS)

    Jou, Hurng-Liahng; Chang, Yi-Hao; Wu, Jinn-Chang; Wu, Kuen-Der

    2015-01-01

    Highlights: • The operation strategy for a grid-connected PV generation system integrated with battery energy storage is proposed. • The PV system is composed of an inverter and two DC–DC converters. • The negative impact of grid-connected PV generation systems on the grid can be alleviated by integrating a battery. • The operation of the developed system can be divided into nine modes. - Abstract: The operation strategy for a lab-scale grid-connected photovoltaic generation system integrated with battery energy storage is proposed in this paper. The photovoltaic generation system is composed of a full-bridge inverter, a DC–DC boost converter, an isolated bidirectional DC–DC converter, a solar cell array, and a battery set. Since the battery set acts as an energy buffer to adjust the power generation of the solar cell array, the negative impact on power quality caused by the intermittent and unstable output power of a solar cell array is alleviated, and the penetration rate of the grid-connected photovoltaic generation system is increased. A lab-scale prototype is developed to verify the performance of the system. The experimental results show that it achieves the expected performance.
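
The battery's role as an energy buffer between the PV array and the grid can be illustrated with a toy dispatch rule. This sketch is not the paper's nine-mode strategy; the export setpoint, SOC limits, and battery power limit are hypothetical parameters chosen for illustration.

```python
def dispatch(pv_kw, setpoint_kw, soc, soc_min=0.2, soc_max=0.9, batt_kw=3.0):
    """Toy dispatch rule for a grid-tied PV + battery system.

    The battery absorbs the difference between PV output and a desired
    grid export setpoint, within state-of-charge (SOC) and power limits.
    Returns (battery_kw, grid_kw): positive battery_kw means charging,
    positive grid_kw means export to the grid."""
    surplus = pv_kw - setpoint_kw
    if surplus > 0 and soc < soc_max:       # excess PV: charge the battery
        batt = min(surplus, batt_kw)
    elif surplus < 0 and soc > soc_min:     # shortfall: discharge the battery
        batt = max(surplus, -batt_kw)
    else:                                   # battery at a limit: pass through
        batt = 0.0
    return batt, pv_kw - batt
```

When the battery is within its SOC window the grid sees the smooth setpoint rather than the fluctuating PV output, which is the smoothing effect the abstract attributes to the energy buffer.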

  1. Large scale filaments associated with Milky Way spiral arms

    Science.gov (United States)

    Wang, Ke; Testi, Leonardo; Ginsburg, Adam; Walmsley, Malcolm; Molinari, Sergio; Schisano, Eugenio

    2015-08-01

    The ubiquity of filamentary structure at various scales throughout the Galaxy has triggered a renewed interest in their formation, evolution, and role in star formation. The largest filaments can reach Galactic scale as part of the spiral arm structure. However, such large scale filaments are hard to identify systematically due to limitations of the identification methodology (i.e., as extinction features). We present a new approach to directly search for the largest, coldest, and densest filaments in the Galaxy, making use of sensitive Herschel Hi-GAL data complemented by spectral line cubes. We present a sample of the 9 most prominent Herschel filaments from a pilot search field. These filaments measure 37-99 pc long and 0.6-3.0 pc wide with masses of (0.5-8.3)×10⁴ Msun, and beam-averaged (28", or 0.4-0.7 pc) peak H₂ column densities of (1.7-9.3)×10²² cm⁻². The bulk of the filaments are relatively cold (17-21 K), while some local clumps have dust temperatures up to 25-47 K due to local star formation activity. Compared with a spiral arm model incorporating the latest parallax measurements, we find that 7/9 of the filaments reside within arms, but most are close to arm edges. These filaments are comparable in length to the Galactic scale height and are therefore not simply part of a grander turbulent cascade. These giant filaments, which often contain regularly spaced pc-scale clumps, are much larger than the filaments found in the Herschel Gould's Belt Survey, and they form the upper end of the filamentary hierarchy. Fully operational, ALMA and NOEMA will be able to resolve and characterize similar filaments in nearby spiral galaxies, allowing us to compare star formation in the uniform context of spiral arms.

  2. China's large-scale power shortages of 2004 and 2011 after the electricity market reforms of 2002: Explanations and differences

    International Nuclear Information System (INIS)

    Ming, Zeng; Song, Xue; Lingyun, Li; Yuejin, Wang; Yang, Wei; Ying, Li

    2013-01-01

    Since the electricity market reforms of 2002, two large-scale power shortages, one occurring in 2004 and one in 2011, exerted a tremendous impact on the economic development of China and gave rise to a fierce discussion of electricity system reforms. In this paper, the background and the scale of the two power shortages are first described. Reasons for the two large-scale power shortages are then analyzed from the perspectives of power generation, power consumption, and the coordination of power sources with grid network construction investments. Characteristics of the two large-scale power shortages are summarized by comparatively analyzing the performance and the formation of the reasons behind them. Finally, some effective measures that take into account the current status of electricity market reforms in China are suggested. This paper concludes that to eliminate power shortages in China, both supply and demand should be considered, accompanied by supervisory policies and incentive mechanisms. - Highlights: • Reasons for these two large-scale power shortages are analyzed. • Characteristics of these two large-scale power shortages are summarized. • Some effective measures to eliminate power shortages are suggested.

  3. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    Science.gov (United States)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue in advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage of dealing with a wide range of boundary and initial conditions, in contrast to field experimentation. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. The results show numerical stability issues under particular conditions, reveal the complex, non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues.
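
The point-scale model named above, Green-Ampt, reduces under ponded conditions to an implicit equation for cumulative infiltration, F(t) = K t + ψΔθ ln(1 + F/(ψΔθ)), which a fixed-point iteration solves readily (the map is a contraction since its derivative is below 1). The parameter values below (K, ψ, Δθ) are illustrative only, not values from the study.

```python
import math

def green_ampt_F(t, K=1.0, psi=11.0, dtheta=0.3, tol=1e-8, max_iter=200):
    """Cumulative infiltration F [cm] at time t [h] from the Green-Ampt
    equation F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)), solved by
    fixed-point iteration. K: saturated hydraulic conductivity [cm/h],
    psi: wetting-front suction head [cm], dtheta: moisture deficit [-]."""
    pd = psi * dtheta
    F = K * t if K * t > 0 else 1e-6     # initial guess
    for _ in range(max_iter):
        F_new = K * t + pd * math.log(1.0 + F / pd)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

def green_ampt_rate(t, K=1.0, psi=11.0, dtheta=0.3):
    """Infiltration capacity f = K * (1 + psi*dtheta / F), which decays
    toward K as the wetting front advances."""
    F = green_ampt_F(t, K, psi, dtheta)
    return K * (1.0 + psi * dtheta / F)
```

A grid-cell-scale storage model would then be calibrated so that its outflow matches the average of many such point-scale solutions drawn from the assumed parameter distributions, which is the inverse-simulation linkage the abstract describes.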

  4. Technology of electron beam welding for Zr-4 alloy spacer grid

    International Nuclear Information System (INIS)

    Pei Qiusheng; Wu Xueyi; Yang Qishun

    1989-10-01

    The welding technology for Zr-4 alloy spacer grids using vacuum electron beam welding was studied. Through a series of welding technology experiments, metallographic examinations of the seam structure, and defect-detection tests by X-ray defectoscopy, a welding technology that meets the requirements was selected. The experimental results indicate that welding Zr-4 alloy spacer grids by vacuum electron beam welding is feasible.

  5. An insulating grid spacer for large-area MICROMEGAS chambers

    CERN Document Server

    Bernard, D; D'Enterria, D G; Le Guay, M; Martínez, G; Mora, M J; Pichot, P; Roy, D; Schutz, Y; Gandi, A; De Oliveira, R

    2002-01-01

    We present an original design for large-area gaseous detectors based on the MICROMEGAS technology. This technology incorporates an insulating grid, sandwiched between the micro-mesh and the anode-pad plane, which provides a uniform 200 μm amplification gap. The uniformity of the amplification gap thickness has been verified under several experimental conditions. The gain performances of the detector are presented and compared to the values obtained with detectors using cylindrical micro spacers. The new design presents several technical and financial advantages.

  6. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation for approaching European hydrology with respect to observed patterns on large scales and with regard to the ability of models to capture them. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean, and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is a prerequisite for reliable interpretation of simulation results. Model evaluations may also help to detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems.
The ability of a multi model ensemble of nine large-scale

  7. How entorhinal grid cells may learn multiple spatial scales from a dorsoventral gradient of cell response rates in a self-organizing map.

    Directory of Open Access Journals (Sweden)

    Stephen Grossberg

    Place cells in the hippocampus of higher mammals are critical for spatial navigation. Recent modeling clarifies how this may be achieved by the way grid cells in the medial entorhinal cortex (MEC) provide inputs to place cells. Grid cells exhibit hexagonal grid firing patterns across space at multiple spatial scales along the MEC dorsoventral axis. Signals from grid cells of multiple scales combine adaptively to activate place cells that represent much larger spaces than grid cells. But how do grid cells learn to fire at multiple positions that form a hexagonal grid, and with spatial scales that increase along the dorsoventral axis? In vitro recordings of medial entorhinal layer II stellate cells have revealed subthreshold membrane potential oscillations (MPOs) whose temporal periods, and the time constants of excitatory postsynaptic potentials (EPSPs), both increase along this axis. Slower (faster) subthreshold MPOs and slower (faster) EPSPs correlate with larger (smaller) grid spacings and field widths. A self-organizing map neural model explains how the anatomical gradient of grid spatial scales can be learned by cells that respond more slowly along the gradient to their inputs from stripe cells of multiple scales, which perform linear velocity path integration. The model cells also exhibit MPO frequencies that covary with their response rates. The gradient in intrinsic rhythmicity is thus not compelling evidence for oscillatory interference as a mechanism of grid cell firing. A response rate gradient combined with input stripe cells that have normalized receptive fields can reproduce all known spatial and temporal properties of grid cells along the MEC dorsoventral axis. This spatial gradient mechanism is homologous to a gradient mechanism for temporal learning in the lateral entorhinal cortex and its hippocampal projections.
Spatial and temporal representations may hereby arise from homologous mechanisms, thereby embodying a mechanistic "neural relativity" that
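
The self-organizing map mechanism invoked in this record can be illustrated at toy scale. The sketch below is a minimal, generic 1-D Kohonen-style SOM in Python, not Grossberg's grid-cell model: units compete for each input, and the winner and its immediate neighbours move toward it, so the map learns to tile the input range. All sizes, learning rates and the neighbourhood radius are illustrative assumptions.

```python
import random

random.seed(0)

# Toy 1-D self-organizing map: 10 units learn to tile inputs drawn from [0, 1).
n_units = 10
weights = [random.random() for _ in range(n_units)]
inputs = [random.random() for _ in range(2000)]

for lr in (0.5, 0.1, 0.02):                  # coarse-to-fine learning rates
    for x in inputs:
        # Competition: the best-matching unit wins.
        winner = min(range(n_units), key=lambda j: abs(weights[j] - x))
        # Cooperation: move the winner and its immediate neighbours toward x.
        for j in range(max(0, winner - 1), min(n_units, winner + 2)):
            weights[j] += lr * (x - weights[j])

weights.sort()
```

With the coarse-to-fine schedule the sorted weights end up roughly evenly spread over [0, 1), a 1-D analogue of the topographic maps the record describes.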

  8. ImmunoGrid, an integrative environment for large-scale simulation of the immune system for vaccine discovery, design and optimization

    DEFF Research Database (Denmark)

    Pappalardo, F.; Halling-Brown, M. D.; Rapin, Nicolas

    2009-01-01

    conceptual models of the immune system, models of antigen processing and presentation, system-level models of the immune system, Grid computing, and database technology to facilitate discovery, formulation and optimization of vaccines. ImmunoGrid modules share common conceptual models and ontologies......Vaccine research is a combinatorial science requiring computational analysis of vaccine components, formulations and optimization. We have developed a framework that combines computational tools for the study of immune function and vaccine development. This framework, named ImmunoGrid combines...

  9. Toward a Grid Workflow Formal Composition

    International Nuclear Information System (INIS)

    Hlaoui, Y. B.; BenAyed, L. J.

    2007-01-01

    This paper presents a new approach for the composition of grid workflow models. The approach proposes an abstract syntax for UML Activity Diagrams (UML-AD) and a formal foundation for grid workflow composition in the form of a workflow algebra based on UML-AD. This composition fulfils the need for collaborative model development, particularly the specification and the reduction of the complexity of grid workflow model verification. This complexity has arisen with the increase in scale of grid workflow applications such as science and e-business applications, since large amounts of computational resources are required and multiple parties may be involved in the development process and in the use of grid workflows. Furthermore, the proposed algebra allows the definition of workflow views, which are useful for limiting access to predefined users in order to ensure the security of grid workflow applications. (Author)

  10. ImmunoGrid: towards agent-based simulations of the human immune system at a natural scale

    DEFF Research Database (Denmark)

    Halling-Brown, M.; Pappalardo, F.; Rapin, Nicolas

    2010-01-01

    , such as the European Virtual Physiological Human initiative. Finally, we ask a key question: How long will it take us to resolve these challenges and when can we expect to have fully functional models that will deliver health-care benefits in the form of personalized care solutions and improved disease prevention?......The ultimate aim of the EU-funded ImmunoGrid project is to develop a natural-scale model of the human immune system-that is, one that reflects both the diversity and the relative proportions of the molecules and cells that comprise it-together with the grid infrastructure necessary to apply...... this model to specific applications in the field of immunology. These objectives present the ImmunoGrid Consortium with formidable challenges in terms of complexity of the immune system, our partial understanding about how the immune system works, the lack of reliable data and the scale of computational...

  11. The Grid

    CERN Document Server

    Klotz, Wolf-Dieter

    2005-01-01

    Grid technology is widely emerging. Grid computing, most simply stated, is distributed computing taken to the next evolutionary level. The goal is to create the illusion of a simple, robust, yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources. This talk will give a short history of how, out of lessons learned from the Internet, the vision of Grids was born. Then the extensible anatomy of a Grid architecture will be discussed. The talk will end by presenting a selection of major Grid projects in Europe and the US and, if time permits, a short on-line demonstration.

  12. Large scale solvothermal synthesis and a strategy to obtain stable Langmuir–Blodgett film of CoFe2O4 nanoparticles

    International Nuclear Information System (INIS)

    Thampi, Arya; Babu, Keerthi; Verma, Seema

    2013-01-01

    Highlights: • Large scale, monodisperse CoFe2O4 nanoparticles by solvothermal route. • LB technique to obtain stable film of CoFe2O4 nanoparticles over a large area. • Hydrophobicity of substrate was enhanced utilizing LB films of cadmium arachidate. • P–A isotherm and AFM cross sectional height profile analysis confirms stability. • Large scale organization of nanoparticles for surface pressure higher than 15 mN/m. -- Abstract: Nearly monodisperse oleic acid coated cobalt ferrite nanoparticles were synthesized in large scale by a simple solvothermal method utilizing N-methyl-2-pyrrolidone (NMP) as a high boiling solvent. The magnetic oxide was further investigated by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), high resolution transmission electron microscopy (HRTEM) and vibrating sample magnetometry (VSM). The Langmuir–Blodgett (LB) technique is discussed to obtain a 2D assembly of oleic acid coated CoFe2O4 nanoparticles over a large area. We describe a method to obtain stable, condensed three layers of cadmium arachidate on a piranha-treated glass substrate. The hydrophobic surface thus obtained was subsequently used for forming a stable monolayer of oleic acid stabilized cobalt ferrite nanoparticles at the air–water interface. The stability of the LB films at the air–water interface was studied by pressure–area isotherm curves and atomic force microscopy (AFM) cross sectional height profile analysis. 2D organization of the magnetic nanoparticles at different surface pressures was studied by TEM. Preparation of large area LB films of CoFe2O4 nanoparticles is reported for surface pressures above 15 mN/m.

  13. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  14. Large scale synthesis of α-Si3N4 nanowires through a kinetically favored chemical vapour deposition process

    Science.gov (United States)

    Liu, Haitao; Huang, Zhaohui; Zhang, Xiaoguang; Fang, Minghao; Liu, Yan-gai; Wu, Xiaowen; Min, Xin

    2018-01-01

    Understanding the kinetic barrier and driving force for crystal nucleation and growth is decisive for the synthesis of nanowires with controllable yield and morphology. In this research, we developed an effective reaction system to synthesize very large scale α-Si3N4 nanowires (hundreds of milligrams) and carried out a comparative study to characterize the kinetic influence of gas precursor supersaturation and liquid metal catalyst. The phase composition, morphology, microstructure and photoluminescence properties of the as-synthesized products were characterized by X-ray diffraction, Fourier-transform infrared spectroscopy, field emission scanning electron microscopy, transmission electron microscopy and room temperature photoluminescence measurement. The yield of the products relates not only to the reaction temperature (thermodynamic condition) but also to the distribution of gas precursors (kinetic condition). As revealed in this research, by controlling the gas diffusion process, the yield of the nanowire products could be greatly improved. The experimental results indicate that supersaturation is the dominant factor in the as-designed system rather than the catalyst. With excellent non-flammability and high thermal stability, the large scale α-Si3N4 products would have potential applications in improving the strength of high-temperature ceramic composites. The photoluminescence spectrum of the α-Si3N4 shows a blue shift which could be valuable for future applications in blue-green emitting devices. There is no doubt that the large scale products are the basis of these applications.

  15. A large scale test of the gaming-enhancement hypothesis

    Directory of Open Access Journals (Sweden)

    Andrew K. Przybylski

    2016-11-01

    Full Text Available A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people’s gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses were compared. Results provided no substantive evidence supporting the idea that having preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.

  16. A large scale test of the gaming-enhancement hypothesis.

    Science.gov (United States)

    Przybylski, Andrew K; Wang, John C

    2016-01-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses were compared. Results provided no substantive evidence supporting the idea that having preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
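
The Bayesian model comparison this record describes can be approximated with the BIC-based Bayes factor (Wagenmakers, 2007): BF01 ≈ exp((BIC1 - BIC0)/2) for a null (intercept-only) model versus one adding a gaming predictor. The sketch below runs the method on synthetic data in which the null is true by construction; it is an illustration of the technique, not a re-analysis of the study, and every number is made up.

```python
import math
import random

random.seed(1)

# Synthetic data: reasoning score unrelated to hours of play (null is true).
n = 500
hours = [random.uniform(0, 20) for _ in range(n)]
score = [random.gauss(100, 15) for _ in range(n)]

def rss_intercept_only(y):
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y)

def rss_simple_regression(x, y):
    # Ordinary least squares for y = b0 + b1 * x, returning the residual SS.
    k = len(x)
    mx, my = sum(x) / k, sum(y) / k
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))

def bic(rss, k, n_obs):
    # Gaussian-likelihood BIC up to a constant shared by both models.
    return n_obs * math.log(rss / n_obs) + k * math.log(n_obs)

bic_null = bic(rss_intercept_only(score), 1, n)
bic_alt = bic(rss_simple_regression(hours, score), 2, n)
# Approximate Bayes factor in favour of the null hypothesis.
bf01 = math.exp((bic_alt - bic_null) / 2)
```

Because the predictor is pure noise here, the ln(n) complexity penalty usually dominates and BF01 comes out well above 1, i.e. evidence for the null, mirroring the pattern the record reports.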

  17. Large Scale Integration of Renewable Power Sources into the Vietnamese Power System

    Science.gov (United States)

    Kies, Alexander; Schyska, Bruno; Thanh Viet, Dinh; von Bremen, Lueder; Heinemann, Detlev; Schramm, Stefan

    2017-04-01

    The Vietnamese power system is expected to expand considerably in upcoming decades. Installed power capacities are projected to grow from 39 GW in 2015 to 129.5 GW by 2030. Installed wind power capacities are expected to grow to 6 GW (0.8 GW in 2015) and solar power capacities to 12 GW (0.85 GW in 2015). This goes hand in hand with an increase of the renewable penetration in the power mix from 1.3% from wind and photovoltaics (PV) in 2015 to 5.4% by 2030. The overall potential for wind power in Vietnam is estimated to be around 24 GW. Moreover, the up-scaling of renewable energy sources was formulated as one of the prioritized targets of the Vietnamese government in the National Power Development Plan VII. In this work, we investigate the transition of the Vietnamese power system towards high shares of renewables. For this purpose, we jointly optimise the expansion of renewable generation facilities for wind and PV, and the transmission grid, within renewable build-up pathways until 2030 and beyond. To simulate the Vietnamese power system and its generation from renewable sources, we use highly spatially and temporally resolved historical weather and load data and the open source modelling toolbox Python for Power System Analysis (PyPSA). We show that the highest potential of renewable generation for wind and PV is observed in southern Vietnam and discuss the resulting need for transmission grid extensions depending on the optimal pathway. Furthermore, we show that the smoothing effect of wind power has considerable beneficial effects and that the Vietnamese hydro power potential can be used efficiently to provide balancing opportunities. This work is part of the R&D Project "Analysis of the Large Scale Integration of Renewable Power into the Future Vietnamese Power System" (GIZ, 2016-2018).

  18. Analyzing Sustainable Energy Opportunities for a Small Scale Off-Grid Facility: A Case Study at Experimental Lakes Area (ELA), Ontario

    Science.gov (United States)

    Duggirala, Bhanu

    This thesis explored opportunities to reduce energy demand and assess renewable energy feasibility at an off-grid science "community" called the Experimental Lakes Area (ELA) in Ontario. Being off-grid, ELA is completely dependent on diesel and propane fuel supply for all its electrical and heating needs, which makes ELA vulnerable to fluctuating fuel prices. As a result, ELA emits a large amount of greenhouse gases (GHG) for its size. Energy efficiency and renewable energy technologies can reduce energy consumption and consequently energy cost, as well as GHG. Energy efficiency was very important to ELA due to the elevated fuel costs at this remote location. Minor upgrades to lighting, equipment and building envelope were able to reduce energy costs and reduce load. Efficient energy saving measures were recommended that save on operating and maintenance costs, namely changing to LED lights, replacing old equipment like refrigerators and downsizing of ice makers. This resulted in a 4.8% load reduction and subsequently reduced the initial capital cost by $27,000 for biomass, by $49,500 for wind power and by $136,500 for solar power. Many alternative energies show promise as potential energy sources to reduce the diesel and propane consumption at ELA, including wind energy, solar heating and biomass. A biomass-based CHP system using the existing diesel generators as back-up has the shortest payback period of the technologies modeled: 4.1 years at $0.80 per liter of diesel; as the diesel price approaches $2.00 per liter, the payback period drops to 0.9 years, at 50% of the present generation cost. Biomass has been successfully tried and tested in many off-grid communities, particularly in small-scale off-grid settings in North America and internationally. Also, the site-specific solar and wind data show that ELA has potential to harvest renewable resources and produce heat and power at competitive

  19. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    Science.gov (United States)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
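
The flavour of genetic-algorithm task mapping can be conveyed by a toy example. The sketch below is a generic GA that assigns independent tasks to heterogeneous nodes to minimize makespan; it is not the paper's hierarchical, distributed three-phase algorithm, and the instance sizes, node speeds and GA parameters are all invented.

```python
import random

random.seed(42)

# Toy instance: map 30 tasks (work units) onto 4 heterogeneous nodes (speeds).
work = [random.randint(1, 20) for _ in range(30)]
speed = [1.0, 1.5, 2.0, 3.0]

def makespan(assign):
    # Finish time of the most loaded node; chromosome = node index per task.
    load = [0.0] * len(speed)
    for task, node in enumerate(assign):
        load[node] += work[task] / speed[node]
    return max(load)

def evolve(pop_size=60, generations=200, mut_rate=0.1):
    pop = [[random.randrange(len(speed)) for _ in work] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(work))
            child = a[:cut] + b[cut:]             # one-point crossover
            for i in range(len(child)):
                if random.random() < mut_rate:    # random-reset mutation
                    child[i] = random.randrange(len(speed))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
```

The evolved mapping should approach the load-balance lower bound sum(work)/sum(speed) and clearly beat the naive "everything on the fastest node" schedule.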

  20. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    Mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw material powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long-length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) Manufacturing of the large-scale mother tube, with dimensions of 32 mm OD, 21 mm ID, and 2 m length, was successfully carried out using a large-scale hollow capsule. This mother tube has a high degree of dimensional accuracy. (2) The chemical composition and microstructure of the manufactured mother tube are similar to those of the existing mother tube manufactured with a small-scale can, and no remarkable difference between the bottom and top ends of the manufactured mother tube has been observed. (3) The long-length cladding has been successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, the process of manufacturing mother tubes using large-scale hollow capsules is promising. (author)

  1. The CrossGrid project

    International Nuclear Information System (INIS)

    Kunze, M.

    2003-01-01

    There are many large-scale problems that require new approaches to computing, such as earth observation, environmental management, biomedicine, industrial and scientific modeling. The CrossGrid project addresses realistic problems in medicine, environmental protection, flood prediction, and physics analysis and is oriented towards specific end-users: Medical doctors, who could obtain new tools to help them to obtain correct diagnoses and to guide them during operations; industries, that could be advised on the best timing for some critical operations involving risk of pollution; flood crisis teams, that could predict the risk of a flood on the basis of historical records and actual hydrological and meteorological data; physicists, who could optimize the analysis of massive volumes of data distributed across countries and continents. Corresponding applications will be based on Grid technology and could be complex and difficult to use: the CrossGrid project aims at developing several tools that will make the Grid more friendly for average users. Portals for specific applications will be designed, that should allow for easy connection to the Grid, create a customized work environment, and provide users with all necessary information to get their job done

  2. Simulating multi-scale oceanic processes around Taiwan on unstructured grids

    Science.gov (United States)

    Yu, Hao-Cheng; Zhang, Yinglong J.; Yu, Jason C. S.; Terng, C.; Sun, Weiling; Ye, Fei; Wang, Harry V.; Wang, Zhengui; Huang, Hai

    2017-11-01

    We validate a 3D unstructured-grid (UG) model for simulating multi-scale processes as occurred in Northwestern Pacific around Taiwan using recently developed new techniques (Zhang et al., Ocean Modeling, 102, 64-81, 2016) that require no bathymetry smoothing even for this region with prevalent steep bottom slopes and many islands. The focus is on short-term forecast for several months instead of long-term variability. Compared with satellite products, the errors for the simulated Sea-surface Height (SSH) and Sea-surface Temperature (SST) are similar to a reference data-assimilated global model. In the nearshore region, comparison with 34 tide gauges located around Taiwan indicates an average RMSE of 13 cm for the tidal elevation. The average RMSE for SST at 6 coastal buoys is 1.2 °C. The mean transport and eddy kinetic energy compare reasonably with previously published values and the reference model used to provide boundary and initial conditions. The model suggests ∼2-day interruption of Kuroshio east of Taiwan during a typhoon period. The effect of tidal mixing is shown to be significant nearshore. The multi-scale model is easily extendable to target regions of interest due to its UG framework and a flexible vertical gridding system, which is shown to be superior to terrain-following coordinates.

  3. Diverse assemblies of the (4,4) grid layers exemplified in Zn(II)/Co(II) coordination polymers with dual linear ligands

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Guang-Zhen; Li, Xiao-Dong; Xin, Ling-Yun; Li, Xiao-Ling [College of Chemistry and Chemical Engineering, Luoyang Normal University, Luoyang, Henan 471022 (China); Wang, Li-Ya, E-mail: wlya@lynu.edu.cn [College of Chemistry and Chemical Engineering, Luoyang Normal University, Luoyang, Henan 471022 (China); College of Chemistry and Pharmacy Engineering, Nanyang Normal University, Nanyang, Henan 473061 (China)

    2013-07-15

    Diverse (4,4) grid layers are exemplified in five two-dimensional coordination polymers with dual µ{sub 2}-bridged ligands, namely, ([Zn(cbaa)(bpp)]·H{sub 2}O){sub n} (1), [Zn{sub 2}(cbaa){sub 2}(bpy)]{sub n} (2), [Co{sub 2}(cbaa){sub 2}(bpp){sub 2}]{sub n} (3), [Co(cbaa)(bpp)]{sub n} (4), and [Co(bdaa)(bpp)(H{sub 2}O){sub 2}]{sub n} (5) (H{sub 2}cbaa=4-carboxybenzeneacetic acid, bpp=1,3-di(4-pyridyl)propane, bpy=4,4′-bipyridyl, and H{sub 2}bdaa=1,4-benzenediacrylic acid). For 1, two (4,4) grid layers with [ZnN{sub 2}O{sub 2}] tetrahedron as the node are held together by lattice water forming a H-bonding bilayer. Individual (4,4) grid layer in 2 is based on (Zn{sub 2}(OCO){sub 4}) paddlewheel unit as the node. Two (4,4) grid layers with (Co{sub 2}O(OCO){sub 2}) dimer as the node are covalently interconnected by organic ligands affording a thick bilayer of 3 with new framework topology. The different entanglements between two coincident (4,4) grid layers with [CoN{sub 2}O{sub 4}] octahedron as the node leads to two 2D→2D interpenetrated structures for 4 and 5. Furthermore, fluorescent properties of 1 and 2 as well as magnetic properties of 3 are investigated. - Graphical abstract: Diverse assemblies of the (4,4) grid layers with different network nodes forms five coordination polymers that are well characterized by IR, TGA, element analysis, fluorescent and magnetic measurement. - Highlights: • Diverse assemblies of the (4,4) grid layers with different structural units as the nodes. • A new topology type with the uninodal 6-connected net of (4{sup 12}.5{sup 2}.6) is found. • Intense fluorescence emissions with a rare blue-shift of 55 nm compared to free carboxylate ligand.

  4. Hybrid method based on embedded coupled simulation of vortex particles in grid based solution

    Science.gov (United States)

    Kornev, Nikolai

    2017-09-01

    The paper presents a novel hybrid approach developed to improve the resolution of concentrated vortices in computational fluid mechanics. The method is based on a combination of a grid-based method and the grid-free computational vortex method (CVM). The large-scale flow structures are simulated on the grid, whereas the concentrated structures are modeled using CVM. Through this combination the advantages of both methods are strengthened and the disadvantages diminished. The procedure for separating small concentrated vortices from the large-scale ones is based on the LES filtering idea. The flow dynamics is governed by two coupled transport equations taking two-way interaction between large and fine structures into account. The fine structures are mapped back to the grid if their size grows due to diffusion. Algorithmic aspects of the hybrid method are discussed. Advantages of the new approach are illustrated on simple two-dimensional canonical flows containing concentrated vortices.

  5. Power grid operation risk management: V2G deployment for sustainable development

    Science.gov (United States)

    Haddadian, Ghazale J.

    The production, transmission, and delivery of cost-efficient energy to supply ever-increasing peak loads, along with a quest for developing a low-carbon economy, require significant evolutions in power grid operations. Lower prices of vast natural gas resources in the United States, the Fukushima nuclear disaster, higher and more intense energy consumption in China and India, issues related to energy security, and recent Middle East conflicts have urged decision makers throughout the world to look into other means of generating electricity locally. As the world looks to combat climate change, a shift from carbon-based fuels to non-carbon-based fuels is inevitable. However, the variability of distributed generation assets in the electricity grid has introduced major reliability challenges for power grid operators. While spearheading sustainable and reliable power grid operations, this dissertation develops a multi-stakeholder approach to power grid operation design, aiming to address the economic, security, and environmental challenges of constrained electricity generation. It investigates the role of Electric Vehicle (EV) fleet integration, as distributed and mobile storage assets supporting high penetrations of renewable energy sources, in the power grid. The vehicle-to-grid (V2G) concept is considered to demonstrate the bidirectional role of EV fleets, both as providers and consumers of energy, in securing sustainable power grid operation. The proposed optimization modeling applies Mixed-Integer Linear Programming (MILP) to large-scale systems to solve the hourly security-constrained unit commitment (SCUC), an optimal scheduling concept in the economic operation of electric power systems. A Monte Carlo scenario-based approach is utilized to evaluate different scenarios concerning the uncertainties in the operation of the power grid system.
Further, in order to expedite the real-time solution of the proposed approach for large-scale power systems
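
At toy scale the unit-commitment idea behind SCUC can be shown by brute force. The sketch below enumerates the on/off combinations per hour and dispatches committed units in merit order; it deliberately ignores inter-temporal constraints (ramping, minimum up/down times) and network security, which is precisely what makes real SCUC a large MILP rather than this enumeration. All unit and demand data are invented.

```python
from itertools import product

# Toy unit commitment: choose which generators run each hour to meet demand
# at minimum cost. Real SCUC solves this as a MILP at vastly larger scale.
units = [  # (capacity MW, variable cost $/MWh, fixed running cost $/h)
    (100, 20.0, 500.0),
    (80, 30.0, 300.0),
    (50, 50.0, 100.0),
]
demand = [120, 180, 90]  # MW for each hour of a 3-hour horizon

def hour_cost(on_set, load):
    """Cost of serving `load` with the committed units, or None if infeasible."""
    cap = sum(units[i][0] for i in on_set)
    if cap < load:
        return None
    # Dispatch the cheapest committed units first (merit order).
    cost, remaining = 0.0, load
    for i in sorted(on_set, key=lambda i: units[i][1]):
        gen = min(units[i][0], remaining)
        cost += gen * units[i][1] + units[i][2]   # energy + fixed running cost
        remaining -= gen
    return cost

total = 0.0
schedule = []
for load in demand:
    best = None
    for mask in product([0, 1], repeat=len(units)):   # all on/off combinations
        on = [i for i, b in enumerate(mask) if b]
        c = hour_cost(on, load)
        if c is not None and (best is None or c < best[0]):
            best = (c, on)
    total += best[0]
    schedule.append(best[1])
```

For these numbers the cheapest plan commits units 0 and 1 in the first two hours and only unit 0 in the last, for a total cost of $10,900.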

  6. PET regularization by envelope guided conjugate gradients

    International Nuclear Information System (INIS)

    Kaufman, L.; Neumaier, A.

    1996-01-01

    The authors propose a new way to iteratively solve large-scale ill-posed problems, in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows the regularization parameter to be adjusted adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations.
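
The Tikhonov/CG machinery behind this record can be sketched at toy scale. The code below solves the regularized normal equations (AᵀA + λI)x = Aᵀb with a hand-rolled conjugate gradient for a ladder of λ values and collects the (residual norm, solution norm) pairs that trace out the L-curve. It is a plain illustration of the regularization trade-off, not the authors' envelope-guided adaptive scheme; the Hilbert-like matrix and noise level are assumptions.

```python
import math
import random

random.seed(3)

# Small ill-conditioned least-squares problem A x ≈ b (Hilbert-like matrix).
n = 8
A = [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]
x_true = [1.0] * n
b = [sum(A[i][j] * x_true[j] for j in range(n)) + random.gauss(0, 1e-4)
     for i in range(n)]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def cg(apply_op, rhs, iters=200, tol=1e-12):
    # Conjugate gradient for a symmetric positive-definite operator.
    x = [0.0] * len(rhs)
    r = rhs[:]
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = apply_op(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

At = transpose(A)
Atb = matvec(At, b)

# L-curve points: (lambda, residual norm, solution norm) for a lambda ladder.
points = []
for lam in (1e-1, 1e-2, 1e-3, 1e-4):
    def normal_op(v, lam=lam):
        return [a + lam * vi for a, vi in zip(matvec(At, matvec(A, v)), v)]
    x = cg(normal_op, Atb)
    res = math.sqrt(sum((bi - axi) ** 2 for bi, axi in zip(b, matvec(A, x))))
    points.append((lam, res, math.sqrt(sum(v * v for v in x))))
```

As λ shrinks, the residual norm falls while the solution norm grows; the corner of that curve is the balance point the L-curve criterion (and the record's adaptive scheme) looks for.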

  7. Optimal Capacity Allocation of Large-Scale Wind-PV-Battery Units

    Directory of Open Access Journals (Sweden)

    Kehe Wu

    2014-01-01

    Full Text Available An optimal capacity allocation for large-scale wind-photovoltaic (PV)-battery units was proposed. First, an output power model was established according to meteorological conditions. Then, a wind-PV-battery unit was connected to the power grid as a power-generation unit with a rated capacity under a fixed coordinated operation strategy. Second, the utilization rate of renewable energy sources and maximum wind-PV complementation were considered, and the objective function of full life cycle net present cost (NPC) was calculated through a hybrid iteration/adaptive hybrid genetic algorithm (HIAGA). The optimal capacity ratio among wind generator, PV array, and battery device was also calculated simultaneously. A simulation was conducted based on the wind-PV-battery unit in Zhangbei, China. Results showed that a wind-PV-battery unit could effectively minimize the NPC of power-generation units under stable grid-connected operation. Finally, a sensitivity analysis of the wind-PV-battery unit demonstrated that the optimization result was closely related to potential wind-solar resources and government support. Regions with rich wind resources and a reasonable government energy policy could improve the economic efficiency of their power-generation units.
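
Net present cost, the objective in this record, discounts each year's net cash flow back to present value and adds the capital outlay. The sketch below compares two hypothetical capacity mixes; every figure (capital, O&M, fuel savings, discount rate, lifetime) is an invented placeholder, not data from the study, and the real optimization searches over such mixes with a genetic algorithm rather than comparing two by hand.

```python
# Toy full-life-cycle net present cost (NPC) for a wind-PV-battery unit.
def npc(capital, annual_om, annual_fuel_savings, lifetime_years, rate):
    """Capital plus discounted yearly net cost over the project lifetime."""
    total = capital
    for year in range(1, lifetime_years + 1):
        net_yearly = annual_om - annual_fuel_savings   # negative = net benefit
        total += net_yearly / (1.0 + rate) ** year
    return total

# Compare two hypothetical capacity mixes (all dollar figures assumed).
mix_a = npc(capital=2_000_000, annual_om=60_000,
            annual_fuel_savings=250_000, lifetime_years=20, rate=0.06)
mix_b = npc(capital=1_500_000, annual_om=90_000,
            annual_fuel_savings=180_000, lifetime_years=20, rate=0.06)
```

Here the pricier mix wins on lifetime NPC because its larger avoided-fuel stream outweighs the extra capital once discounted, the kind of trade-off the HIAGA search automates.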

  8. Hydrogen-Bromine Flow Battery: Hydrogen Bromine Flow Batteries for Grid Scale Energy Storage

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-10-01

    GRIDS Project: LBNL is designing a flow battery for grid storage that relies on a hydrogen-bromine chemistry which could be more efficient, last longer and cost less than today’s lead-acid batteries. Flow batteries are fundamentally different from traditional lead-acid batteries because the chemical reactants that provide their energy are stored in external tanks instead of inside the battery. A flow battery can provide more energy because all that is required to increase its storage capacity is to increase the size of the external tanks. The hydrogen-bromine reactants used by LBNL in its flow battery are inexpensive, long lasting, and provide power quickly. The cost of the design could be well below $100 per kilowatt hour, which would rival conventional grid-scale battery technologies.
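
The record's central point, that a flow battery's energy capacity scales with its tank volume while power is set by the stack, is simple electrochemical arithmetic. The sketch below uses the Faraday constant plus assumed illustrative values (cell voltage, bromine concentration), not LBNL design figures.

```python
# Back-of-envelope: energy stored in a flow battery scales with tank volume.
F = 96485.0           # Faraday constant, C/mol
cell_voltage = 1.1    # V, rough H2/Br2 cell voltage (assumed)
conc = 2.0            # mol/L of Br2 in the electrolyte tank (assumed)
electrons = 2         # Br2 + 2 e- -> 2 Br-

def tank_energy_kwh(liters):
    """Ideal (100% utilization) energy content of a Br2 tank, in kWh."""
    joules = liters * conc * electrons * F * cell_voltage
    return joules / 3.6e6   # J -> kWh

# Doubling the tank doubles storage capacity; the stack (power) is unchanged.
```

With these assumptions a 1000 L tank holds on the order of 120 kWh, and capacity grows strictly linearly with volume, which is why enlarging the external tanks is enough to increase a flow battery's storage.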

  9. Recent development of the Multi-Grid detector for large area neutron scattering instruments

    International Nuclear Information System (INIS)

    Guerard, Bruno

    2015-01-01

    Most of the neutron scattering facilities are committed to a continuous program of modernization of their instruments, requiring large-area and high-performance thermal neutron detectors. Besides scintillator detectors, 3He detectors, such as linear PSDs (Position Sensitive Detectors) and MWPCs (Multi-Wire Proportional Chambers), are the most common techniques nowadays. Time-of-flight instruments use 3He PSDs mounted side by side to cover tens of m². As a result of the so-called '3He shortage crisis', the volume of 3He needed to build one of these instruments is no longer accessible. The development of alternative techniques requiring no 3He has been given high priority to secure the future of neutron scattering instrumentation. This is particularly important in the context where the future ESS (European Spallation Source) will start its operation in 2019-2020. Improved scintillators represent one of the alternative techniques. Another one is the Multi-Grid, introduced at the ILL in 2009. A Multi-Grid detector is composed of several independent modules of typically 0.8 m x 3 m sensitive area, mounted side by side in air or in a vacuum TOF chamber. One module is composed of segmented boron-lined proportional counters mounted in a gas vessel; the counters, of square section, are assembled with aluminium grids electrically insulated and stacked together. This design provides two advantages: first, magnetron sputtering techniques can be used to coat B4C films on planar substrates, and second, the neutron position along the anode wires can be measured by reading out the grid signals individually with fast shaping amplifiers followed by comparators. Unlike charge-division localisation in linear PSDs, the individual readout of the grids allows operating the Multi-Grid at a low amplification gain; hence this detector is tolerant of mechanical defects and its production is accessible to laboratories equipped with standard equipment. Prototypes of

  10. Recent development of the Multi-Grid detector for large area neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Guerard, Bruno [ILL-ESS-LiU collaboration, CRISP project, Institut Laue Langevin - ILL, Grenoble (France)

    2015-07-01

    Most of the Neutron Scattering facilities are committed to a continuous program of modernization of their instruments, requiring large-area and high-performance thermal neutron detectors. Besides scintillator detectors, {sup 3}He detectors, like linear PSDs (Position Sensitive Detectors) and MWPCs (Multi-Wire Proportional Chambers), are the most common techniques nowadays. Time Of Flight instruments use {sup 3}He PSDs mounted side by side to cover tens of m{sup 2}. As a result of the so-called '{sup 3}He shortage crisis', the volume of {sup 3}He which is needed to build one of these instruments is not accessible anymore. The development of alternative techniques requiring no {sup 3}He has been given high priority to secure the future of neutron scattering instrumentation. This is particularly important in the context where the future ESS (European Spallation Source) will start its operation in 2019-2020. Improved scintillators represent one of the alternative techniques. Another one is the Multi-Grid, introduced at the ILL in 2009. A Multi-Grid detector is composed of several independent modules of typically 0.8 m x 3 m sensitive area, mounted side by side in air or in a vacuum TOF chamber. One module is composed of segmented boron-lined proportional counters mounted in a gas vessel; the counters, of square section, are assembled with Aluminium grids electrically insulated and stacked together. This design provides two advantages: first, magnetron sputtering techniques can be used to coat B{sub 4}C films on planar substrates, and second, the neutron position along the anode wires can be measured by reading out the grid signals individually with fast shaping amplifiers followed by comparators. Unlike charge-division localisation in linear PSDs, the individual readout of the grids allows operating the Multi-Grid at a low amplification gain, hence this detector is tolerant to mechanical defects and its production is accessible to laboratories equipped with standard

  11. Data security on the national fusion grid

    Energy Technology Data Exchange (ETDEWEB)

    Burruss, Justine R.; Fredian, Tom W.; Thompson, Mary R.

    2005-06-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This work has led to the development of the US Fusion Grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large US fusion research facilities and with users both in the US and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER.

  12. Data security on the national fusion grid

    International Nuclear Information System (INIS)

    Burruss, Justine R.; Fredian, Tom W.; Thompson, Mary R.

    2005-01-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This work has led to the development of the US Fusion Grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large US fusion research facilities and with users both in the US and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER

  13. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci
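The bundle-of-named-arrays model described above can be sketched as follows. This is a toy reading, not SciReduce's actual API: the function names (`map_granule`, `reduce_bundles`) and the variable layout are assumptions, and the real system farms the map phase out across Cloud nodes rather than running it serially.

```python
import numpy as np
from functools import reduce

# Hypothetical sketch of the SciReduce data model: map and reduce exchange
# bundles of named numeric arrays (dicts of numpy arrays) rather than
# simple key/value tuples.

def map_granule(bundle):
    """Per-granule map step: partial sums for a running mean of one variable."""
    temp = bundle["temperature"]            # (observations, levels) swath array
    return {"sum": temp.sum(axis=0), "count": np.array([temp.shape[0]])}

def reduce_bundles(a, b):
    """Combine two partial results; associative, so it parallelizes."""
    return {"sum": a["sum"] + b["sum"], "count": a["count"] + b["count"]}

# Three toy "granules"; a real system would read them from netCDF4/HDF5.
granules = [{"temperature": np.full((10, 4), t)} for t in (270.0, 280.0, 290.0)]

partials = [map_granule(g) for g in granules]   # farmed out to nodes in SciReduce
total = reduce(reduce_bundles, partials)
mean_temp = total["sum"] / total["count"]       # per-level mean over all granules
```

Because the reduce step is associative, partial results can be combined in any order, which is what lets the framework merge bundles as they arrive from different nodes.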

  14. Large-Scale Environment Properties of Narrow-Line Seyfert 1 Galaxies at z < 0.4

    Energy Technology Data Exchange (ETDEWEB)

    Järvelä, Emilia [Metsähovi Radio Observatory, Aalto University, Espoo (Finland); Department of Electronics and Nanoengineering, Aalto University, Espoo (Finland); Lähteenmäki, A. [Metsähovi Radio Observatory, Aalto University, Espoo (Finland); Department of Electronics and Nanoengineering, Aalto University, Espoo (Finland); Tartu Observatory, Tõravere (Estonia); Lietzen, H., E-mail: emilia.jarvela@aalto.fi [Tartu Observatory, Tõravere (Estonia)

    2017-11-30

    The large-scale environment is believed to affect the evolution and intrinsic properties of galaxies. It offers a new perspective on narrow-line Seyfert 1 (NLS1) galaxies, which have not been extensively studied in this context before. We study a large and diverse sample of 960 NLS1 galaxies using a luminosity-density field constructed from the Sloan Digital Sky Survey. We investigate how the large-scale environment is connected to the properties of NLS1 galaxies, especially their radio loudness. Furthermore, we compare the large-scale environment properties of NLS1 galaxies with other active galactic nuclei (AGN) classes, for example, other jetted AGN and broad-line Seyfert 1 (BLS1) galaxies, to shed light on their possible relations. In general, NLS1 galaxies reside in less dense large-scale environments than any of our comparison samples, thus supporting their young age. The average luminosity-density and the distribution among different luminosity-density regions of NLS1 sources are significantly different compared to BLS1 galaxies. This contradicts the simple orientation-based unification of NLS1 and BLS1 galaxies, and weakens the hypothesis that BLS1 galaxies are the parent population of NLS1 galaxies. The large-scale environment density also has an impact on the intrinsic properties of NLS1 galaxies; the radio loudness increases with increasing luminosity-density. However, our results suggest that the NLS1 population is indeed heterogeneous, and that a considerable fraction of them are misclassified. We support the suggested description that the traditional classification based on radio loudness should be replaced with a division into jetted and non-jetted sources.

  15. Development of a large-scale general purpose two-phase flow analysis code

    International Nuclear Information System (INIS)

    Terasaka, Haruo; Shimizu, Sensuke

    2001-01-01

    A general purpose three-dimensional two-phase flow analysis code has been developed for solving large-scale problems in industrial fields. The code uses a two-fluid model to describe the conservation equations for two-phase flow in order to be applicable to various phenomena. Complicated geometrical conditions are modeled by FAVOR method in structured grid systems, and the discretization equations are solved by a modified SIMPLEST scheme. To reduce computing time a matrix solver for the pressure correction equation is parallelized with OpenMP. Results of numerical examples show that the accurate solutions can be obtained efficiently and stably. (author)
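The pressure-correction step of SIMPLE-family schemes (of which the modified SIMPLEST used above is a variant) amounts to solving a Poisson equation on the structured grid. A minimal serial sketch, with a plain Jacobi sweep standing in for the paper's OpenMP-parallel matrix solver; the grid, source term, and all names here are illustrative assumptions:

```python
import numpy as np

def pressure_correction(div, dx, iters=500):
    """Jacobi sweeps for the pressure-correction Poisson equation
    lap(p') = div on a uniform structured grid, with p' = 0 on the
    boundary.  Each sweep updates all interior cells independently,
    which is why this kernel is the natural target for OpenMP-style
    parallelization in a production solver."""
    p = np.zeros_like(div)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                + p[1:-1, 2:] + p[1:-1, :-2]
                                - dx * dx * div[1:-1, 1:-1])
    return p

# A point source of divergence in the middle of a 32x32-cell grid
div = np.zeros((33, 33))
div[16, 16] = 1.0
p = pressure_correction(div, dx=1.0)
```

In a real SIMPLE-type loop this correction would then be used to update the cell-face velocities so that the continuity equation is satisfied before the next outer iteration.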

  16. AGIS: The ATLAS Grid Information System

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  17. 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing

    CERN Document Server

    Barolli, Leonard; Amato, Flora

    2017-01-01

    P2P, Grid, Cloud and Internet computing technologies have quickly established themselves as breakthrough paradigms for solving complex problems by enabling the aggregation and sharing of an increasing variety of distributed computational resources at large scale. The aim of this volume is to provide the latest research findings, innovative research results, methods and development techniques, from both theoretical and practical perspectives, related to P2P, Grid, Cloud and Internet computing, as well as to reveal synergies among such large-scale computing paradigms. This proceedings volume presents the results of the 11th International Conference on P2P, Parallel, Grid, Cloud And Internet Computing (3PGCIC-2016), held November 5-7, 2016, at Soonchunhyang University, Asan, Korea.

  18. McRunjob: A High Energy Physics Workflow Planner for Grid Production Processing

    OpenAIRE

    Graham, G E; Evans, D; Bertram, I

    2003-01-01

    McRunjob is a powerful grid workflow manager used to manage the generation of large numbers of production processing jobs in High Energy Physics. In use at both the DZero and CMS experiments, McRunjob has been used to manage large Monte Carlo production processing since 1999 and is being extended to uses in regular production processing for analysis and reconstruction. Described at CHEP 2001, McRunjob converts core metadata into jobs submittable in a variety of environments. The powerful core...

  19. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  20. The Neutron Science TeraGrid Gateway, a TeraGrid Science Gateway to Support the Spallation Neutron Source

    International Nuclear Information System (INIS)

    Cobb, John W.; Geist, Al; Kohl, James Arthur; Miller, Stephen D; Peterson, Peter F.; Pike, Gregory; Reuter, Michael A; Swain, William; Vazhkudai, Sudharshan S.; Vijayakumar, Nithya N.

    2006-01-01

    The National Science Foundation's (NSF's) Extensible Terascale Facility (ETF), or TeraGrid (1), is entering its operational phase. One ETF science gateway effort is the Neutron Science TeraGrid Gateway (NSTG). The Oak Ridge National Laboratory (ORNL) resource provider effort (ORNL-RP), during construction and now in operations, is bridging a large-scale experimental community and the TeraGrid as a large-scale national cyberinfrastructure. Of particular emphasis is collaboration with the Spallation Neutron Source (SNS) at ORNL. The U.S. Department of Energy's (DOE's) SNS (2) at ORNL will be commissioned in spring of 2006 as the world's brightest source of neutrons. Neutron science users can run experiments; generate datasets; perform data reduction, analysis, and visualization of results; collaborate with remote users; and archive long-term data in repositories with curation services. The ORNL-RP and the SNS data analysis group have spent 18 months developing and exploring user requirements, including the creation of prototypical services such as facility portal, data, and application execution services. We describe results from these efforts and discuss implications for science gateway creation. Finally, we show incorporation into implementation planning for the NSTG and SNS architectures. The plan is for a primarily portal-based user interaction supported by a service-oriented architecture for functional implementation

  1. Clustering, randomness, and regularity in cloud fields. 4: Stratocumulus cloud fields

    Science.gov (United States)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-01-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (more than 900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.

  2. Clustering, randomness, and regularity in cloud fields. 4. Stratocumulus cloud fields

    Science.gov (United States)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-07-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (>900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.
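The paper's size-compensated point-to-cloud statistics are more elaborate, but the flavor of such nearest-neighbour randomness tests can be sketched as follows. The index, sample sizes, and thresholds below are illustrative assumptions, not the authors' statistic: values well below 1 indicate clustering, values near 1 randomness, and values above 1 regularity.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_distances(points):
    """Nearest-neighbour distance for each point (brute force, small n)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def clustering_index(points, trials=200):
    """Mean NN distance relative to matched uniform (Poisson-like) fields
    of the same size: << 1 for clustered fields, ~1 for random ones."""
    n = len(points)
    obs = nn_distances(points).mean()
    ref = np.mean([nn_distances(rng.random((n, 2))).mean() for _ in range(trials)])
    return obs / ref

clustered = rng.normal(0.5, 0.03, size=(50, 2))   # tight clump of "clouds"
random_pts = rng.random((50, 2))                  # random field
```

Comparing against simulated random fields of the same size, rather than an analytic formula, is what "size compensation" buys: edge effects and finite-sample bias affect the observed and reference fields identically.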

  3. General Forced Oscillations in a Real Power Grid Integrated with Large Scale Wind Power

    OpenAIRE

    Ping Ju; Yongfei Liu; Feng Wu; Fei Dai; Yiping Yu

    2016-01-01

    According to the monitoring of the wide area measurement system, inter-area oscillations happen more and more frequently in a real power grid of China, which are close to the forced oscillation. Applying the conventional forced oscillation theory, the mechanism of these oscillations cannot be explained well, because the oscillations vary with random amplitude and a narrow frequency band. To explain the mechanism of such oscillations, the general forced oscillation (GFO) mechanism is taken int...

  4. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    International Nuclear Information System (INIS)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K.

    2013-01-01

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods, called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme
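The cell-by-cell exploration idea can be sketched as a best-first search over the parameter grid. This is a toy reading of the algorithm, not the Snake code: the function names, the stopping rule expressed in log-likelihood units below the running peak, and the Gaussian test likelihood are all assumptions.

```python
import heapq
import numpy as np

def snake_explore(loglike, start, step, threshold=10.0):
    """Visit grid cells in order of decreasing likelihood, expanding
    neighbours as we go, and stop once the best remaining frontier cell
    is `threshold` below the peak in log-likelihood.  Cells with
    negligible likelihood are never evaluated beyond the frontier."""
    start = tuple(start)
    visited = {}
    heap = [(-loglike(np.array(start) * step), start)]  # max-heap via negation
    peak = -heap[0][0]
    while heap:
        neg_ll, cell = heapq.heappop(heap)
        ll = -neg_ll
        if cell in visited:
            continue
        peak = max(peak, ll)
        if ll < peak - threshold:
            break                       # everything left is negligible
        visited[cell] = ll
        for dim in range(len(cell)):    # push the 2*N_par grid neighbours
            for d in (-1, 1):
                nb = list(cell)
                nb[dim] += d
                nb = tuple(nb)
                if nb not in visited:
                    heapq.heappush(heap, (-loglike(np.array(nb) * step), nb))
    return visited                      # cell -> log-likelihood

# Toy 2-D Gaussian log-likelihood on a grid of spacing 0.5
ll = lambda x: -0.5 * float(x @ x)
cells = snake_explore(ll, (0, 0), step=0.5, threshold=8.0)
```

For a unimodal likelihood the best-first order guarantees that every cell within the threshold of the peak is mapped before the search stops, while the exponentially many negligible cells of the full grid are never touched.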

  5. Coarse grid simulation of bed expansion characteristics of industrial-scale gas–solid bubbling fluidized beds

    NARCIS (Netherlands)

    Wang, J.; van der Hoef, Martin Anton; Kuipers, J.A.M.

    2010-01-01

    Two-fluid modeling of the hydrodynamics of industrial-scale gas-fluidized beds remains a long-standing challenge for both engineers and scientists. In this study, we suggest a simple method to modify currently available drag correlations to allow for the effect of unresolved sub-grid scale

  6. Large Signal Stabilization of Hybrid AC/DC Micro-Grids Using Nonlinear Robust Controller

    Directory of Open Access Journals (Sweden)

    Reza Pejmanfar

    2017-12-01

    Full Text Available This paper presents a robust nonlinear integrated controller to improve the stability of hybrid AC/DC micro-grids in islanding mode. The proposed controller comprises two independent controllers, each responsible for one part of the system. The first controller improves the stability of the input DC/DC converter: the DC bus voltage is fully stabilized so that when a large disturbance occurs, the voltage remains constant without any significant transient. The necessity of DC bus regulation, which has not been considered in previous studies, is evident, as it not only improves the voltage stability of the micro-grid but also protects consumers directly connected to the DC bus against voltage variations. Frequency stability of the micro-grid is provided by the second proposed controller, which is applied to the output DC/AC converter of the micro-grid. An adaptive method is used to make the proposed controllers robust. The duty cycles of the converter switches are adjusted so that the voltage and frequency of the micro-grid reach the desired values in the minimum possible time under transient disturbances and uncertainty in the loads as well as in the micro-source characteristics.

  7. Vehicle-to-Grid Automatic Load Sharing with Driver Preference in Micro-Grids

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yubo; Nazaripouya, Hamidreza; Chu, Chi-Cheng; Gadh, Rajit; Pota, Hemanshu R.

    2014-10-15

    Integration of Electrical Vehicles (EVs) with the power grid not only brings new challenges for load management, but also opportunities for distributed storage and generation. This paper comprehensively models and analyzes distributed Vehicle-to-Grid (V2G) for automatic load sharing with driver preference. In a micro-grid with limited communications, V2G EVs need to decide load sharing based on their own power and voltage profiles. A droop-based controller taking driver preference into account is proposed in this paper to address the distributed control of EVs. Simulations are designed for three fundamental V2G automatic load sharing scenarios that include all system dynamics of such applications. Simulation results demonstrate that active power sharing is achieved proportionally among V2G EVs with consideration of driver preference. In addition, the results also verify the system stability and reactive power sharing analysis in the system modelling, which sheds light on large-scale V2G automatic load sharing in more complicated cases.
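A toy illustration of the kind of droop law with preference scaling described above. The specific gains, ratings, and the linear preference factor are assumptions for the sketch, not the paper's controller: each EV measures only its local bus voltage and injects active power in proportion to the sag, so no communication is needed and sharing falls out proportionally.

```python
def droop_power(v_meas, v_nom=1.0, p_max=10.0, droop_gain=20.0, preference=1.0):
    """Hypothetical V2G droop law: inject active power proportional to the
    local voltage sag, scaled by a driver-preference factor in [0, 1]
    (0 = never discharge, 1 = fully available), clipped to the charger
    rating.  All parameter values are illustrative."""
    p = droop_gain * (v_nom - v_meas) * p_max * preference
    return max(-p_max, min(p_max, p))   # respect the charger rating

# Three EVs at the same bus see the same 2% sag and share it in
# proportion to their drivers' preference factors.
sag_v = 0.98
shares = [droop_power(sag_v, preference=w) for w in (1.0, 0.5, 0.25)]
```

Because every unit responds to the same locally measured voltage, the power contributions end up in the ratio of the preference-weighted droop gains, which is the "automatic" part of automatic load sharing.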

  8. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    Full Text Available We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singularity problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is estimated using an automatic coupling algorithm. It can handle arbitrary water depth and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.
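One possible reading of the depth-based simplification step can be sketched as follows. The threshold, the 2x2 coarsening, and all names are assumptions; the abstract does not specify the paper's actual algorithm. The idea is simply that cells shallower than some depth keep full resolution for the shallow-water solver, while deep open-water cells are coarsened before being handed to the deep-water model.

```python
import numpy as np

def simplify(depth, threshold=50.0):
    """Hypothetical depth-based grid simplification: return a mask of
    cells kept at full resolution (shallow, near the coastline) and a
    2x2-averaged coarse copy used for the deep-water region."""
    fine_mask = depth < threshold
    h, w = depth.shape
    coarse = (depth[:h - h % 2, :w - w % 2]
              .reshape(h // 2, 2, w // 2, 2)
              .mean(axis=(1, 3)))
    return fine_mask, coarse

# Toy bathymetry: shore on the left, deep water on the right
depth = np.tile(np.linspace(0, 200, 8), (8, 1))
fine_mask, coarse = simplify(depth)
```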

  9. SMES-UPS for large-scaled SC magnet system of LHD

    International Nuclear Information System (INIS)

    Yamada, Shuichi; Mito, T.; Chikaraishi, H.; Nishimura, A.; Kojima, H.; Nakanishi, Y.; Uede, T.; Satow, T.; Motojima, O.

    2003-01-01

    The LHD is an SC experimental fusion device of the heliotron type. Eight sets of helium compressors with a total electric power of 3.5 MW are installed in the cryogenic system. Analytical studies of the SMES-UPS for the compressors under deep voltage sag are reported in this paper. The amplitude and frequency of the voltage decrease gradually due to the regenerating effect of the induction motors. The SMES-UPS system proposed in this report has the following functions: (1) variable frequency control, (2) regulation by ACR and AVR, and (3) rapid isolation of the loads from the grid line and synchronous reconnection to it. We have demonstrated that SMES is useful for the large-scale cryogenic system of the experimental fusion device

  10. UV caps, IR modification of gravity, and recovery of 4D gravity in regularized braneworlds

    International Nuclear Information System (INIS)

    Kobayashi, Tsutomu

    2008-01-01

    In the context of six-dimensional conical braneworlds we consider a simple and explicit model that incorporates long-distance modification of gravity and regularization of codimension-2 singularities. To resolve the conical singularities we replace the codimension-2 branes with ringlike codimension-1 branes, filling in the interiors with regular caps. The six-dimensional Planck scale in the cap is assumed to be much greater than the bulk Planck scale, which gives rise to the effect analogous to brane-induced gravity. Weak gravity on the regularized brane is studied in the case of a sharp conical bulk. We show by a linear analysis that gravity at short distances is effectively described by the four-dimensional Brans-Dicke theory, while the higher dimensional nature of gravity emerges at long distances. The linear analysis breaks down at some intermediate scale, below which four-dimensional Einstein gravity is shown to be recovered thanks to the second-order effects of the brane bending.

  11. Genus Ranges of 4-Regular Rigid Vertex Graphs.

    Science.gov (United States)

    Buck, Dorothy; Dolzhenko, Egor; Jonoska, Nataša; Saito, Masahico; Valencia, Karin

    2015-01-01

    A rigid vertex of a graph is one that has a prescribed cyclic order of its incident edges. We study orientable genus ranges of 4-regular rigid vertex graphs. The (orientable) genus range is the set of genus values over all orientable surfaces into which a graph is embedded cellularly, where the embeddings of rigid vertex graphs are required to preserve the prescribed cyclic order of incident edges at every vertex. The genus ranges of 4-regular rigid vertex graphs are sets of consecutive integers, and we address two questions: which intervals of integers appear as genus ranges of such graphs, and what types of graphs realize a given genus range. For graphs with 2n vertices (n > 1), we prove that all intervals [a, b] for all a … appear as genus ranges. For graphs with 2n - 1 vertices (n ≥ 1), we prove that all intervals [a, b] for all a … appear as genus ranges. We also provide constructions of graphs that realize these ranges.
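The genus values being ranged over can be computed from a rotation system by tracing face boundaries and applying the Euler formula V - E + F = 2 - 2g. The sketch below is my own construction for illustration, not code from the paper; for rigid vertex graphs the cyclic order at each vertex is prescribed (up to reflection), and the genus range is obtained by varying over the embeddings allowed under that constraint.

```python
def genus(rotation):
    """Genus of the closed orientable surface determined by a rotation
    system.  `rotation` maps each vertex to the cyclic order of its darts;
    a dart is (edge_id, side) with side in {0, 1} naming the two ends of
    the edge.  Assumes a connected graph."""
    succ = {}                               # dart -> next dart around its vertex
    for darts in rotation.values():
        for i, d in enumerate(darts):
            succ[d] = darts[(i + 1) % len(darts)]
    other = lambda d: (d[0], 1 - d[1])      # jump to the other end of the edge
    V, E = len(rotation), len(succ) // 2
    faces, seen = 0, set()
    for d0 in succ:                          # each orbit of succ∘other is a face
        if d0 in seen:
            continue
        faces += 1
        d = d0
        while d not in seen:
            seen.add(d)
            d = succ[other(d)]
    return (2 - V + E - faces) // 2         # Euler: V - E + F = 2 - 2g

# One 4-valent vertex carrying two loops (edges 0 and 1): interleaving the
# loop ends forces the torus, nesting them allows the plane.
torus = {0: ((0, 0), (1, 0), (0, 1), (1, 1))}   # interleaved -> genus 1
plane = {0: ((0, 0), (0, 1), (1, 0), (1, 1))}   # nested      -> genus 0
```

The two rotation systems above differ only in the cyclic order at the single vertex, which is exactly the degree of freedom that rigid vertex graphs fix, and illustrates why the prescribed order constrains which genera are attainable.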

  12. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi-service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.

  13. Occurrence and countermeasures of urban power grid accident

    Science.gov (United States)

    Wei, Wang; Tao, Zhang

    2018-03-01

    With the advance of technology, the development of network communication, and the extensive use of power grids, people can learn about power grid accidents around the world through the network in a timely manner. Power grid accidents occur frequently; large-scale power system blackouts and casualty accidents caused by electric shock are also fairly commonplace. These accidents have seriously endangered the property and personal safety of the country and its people, and the development of society and the economy is severely affected by them. Through research on several typical cases of power grid accidents at home and abroad in recent years, and taking these accident cases as the research object, this paper analyzes the three major factors that currently cause power grid accidents. At the same time, combining the various factors and impacts caused by power grid accidents, the paper puts forward corresponding solutions and suggestions to prevent the occurrence of accidents and lower their impact.

  14. MCRUNJOB: A High energy physics workflow planner for grid production processing

    International Nuclear Information System (INIS)

    Graham, Gregory E.

    2004-01-01

    McRunjob is a powerful grid workflow manager used to manage the generation of large numbers of production processing jobs in High Energy Physics. In use at both the DZero and CMS experiments, McRunjob has been used to manage large Monte Carlo production processing since 1999 and is being extended to uses in regular production processing for analysis and reconstruction. Described at CHEP 2001, McRunjob converts core metadata into jobs submittable in a variety of environments. The powerful core metadata description language includes methods for converting the metadata into persistent forms, job descriptions, multi-step workflows, and data provenance information. The language features allow for structure in the metadata by including full expressions, namespaces, functional dependencies, site specific parameters in a grid environment, and ontological definitions. It also has simple control structures for parallelization of large jobs. McRunjob features a modular design which allows for easy expansion to new job description languages or new application level tasks

  15. Managing Dynamic User Communities in a Grid of Autonomous Resources

    CERN Document Server

    Alfieri, R; Gianoli, A; Spataro, F; Ciaschini, Vincenzo; dell'Agnello, L; Bonnassieux, F; Broadfoot, P; Lowe, G; Cornwall, L; Jensen, J; Kelsey, D; Frohner, A; Groep, DL; Som de Cerff, W; Steenbakkers, M; Venekamp, G; Kouril, D; McNab, A; Mulmo, O; Silander, M; Hahkala, J; Lhorentey, K

    2003-01-01

    One of the fundamental concepts in Grid computing is the creation of Virtual Organizations (VOs): a set of resource consumers and providers that join forces to solve a common problem. Typical examples of Virtual Organizations include collaborations formed around the Large Hadron Collider (LHC) experiments. To date, Grid computing has been applied on a relatively small scale, linking dozens of users to a dozen resources, and management of these VOs was a largely manual operation. With the advent of large collaborations, linking more than 10000 users with 1000 sites in 150 countries, a comprehensive, automated management system is required. It should be simple enough not to deter users, while at the same time ensuring local site autonomy. The VO Management Service (VOMS), developed by the EU DataGrid and DataTAG projects[1, 2], is a secured system for managing authorization for users and resources in virtual organizations. It extends the existing Grid Security Infrastructure[3] architecture with embedded VO ...

  16. Socioeconomic assessment of smart grids. Summary

    International Nuclear Information System (INIS)

    2015-07-01

    In September of 2013, the President of France identified smart grids as an important part of the country's industrial strategy, given the opportunities and advantages they can offer French industry, and asked the Chairman of the RTE Management Board to prepare a road-map outlining ways to support and accelerate smart grid development. This road-map, prepared in cooperation with stakeholders from the power and smart grids industries, identifies ten actions that can be taken in priority to consolidate the smart grids sector and help French firms play a leading role in the segment. These priorities were presented to the President of France on 7 May 2014. Action items 5 and 6 of the road-map on smart grid development relate, respectively, to the quantification of the value of smart grid functions from an economic, environmental and social (impact on employment) standpoint and to the large-scale deployment of some of the functions. Two tasks were set out in the 'Smart Grids' plan for action item 5: - Create a methodological framework that, for all advanced functions, allows the quantification of benefits and costs from an economic, environmental and social (effect on jobs) standpoint; - Quantify, based on this methodological framework, the potential benefits of a set of smart grid functions considered sufficiently mature to be deployed on a large scale in the near future. Having a methodology that can be applied in the same manner to all solutions, taking into account their impacts on the environment and employment in France, will considerably add to and complement the information drawn from demonstration projects. It will notably enable comparisons of benefits provided by smart grid functions and thus help give rise to a French smart grids industry that is competitive. At first, the smart grids industry was organised around demonstration projects testing different advanced functions within specific geographic areas. These projects covered a wide enough

  17. Socioeconomic assessment of smart grids - Summary

    International Nuclear Information System (INIS)

    Janssen, Tanguy

    2015-07-01

    In September of 2013, the President of France identified smart grids as an important part of the country's industrial strategy, given the opportunities and advantages they can offer French industry, and asked the Chairman of the RTE Management Board to prepare a road-map outlining ways to support and accelerate smart grid development. This road-map, prepared in cooperation with stakeholders from the power and smart grids industries, identifies ten actions that can be taken in priority to consolidate the smart grids sector and help French firms play a leading role in the segment. These priorities were presented to the President of France on 7 May 2014. Action items 5 and 6 of the road-map on smart grid development relate, respectively, to the quantification of the value of smart grid functions from an economic, environmental and social (impact on employment) standpoint and to the large-scale deployment of some of the functions. Two tasks were set out in the 'Smart Grids' plan for action item 5: - Create a methodological framework that, for all advanced functions, allows the quantification of benefits and costs from an economic, environmental and social (effect on jobs) standpoint; - Quantify, based on this methodological framework, the potential benefits of a set of smart grid functions considered sufficiently mature to be deployed on a large scale in the near future. Having a methodology that can be applied in the same manner to all solutions, taking into account their impacts on the environment and employment in France, will considerably add to and complement the information drawn from demonstration projects. It will notably enable comparisons of benefits provided by smart grid functions and thus help give rise to a French smart grids industry that is competitive. At first, the smart grids industry was organised around demonstration projects testing different advanced functions within specific geographic areas. These projects covered a wide enough

  18. Effect of von Karman Vortex Shedding on Regular and Open-slit V-gutter Stabilized Turbulent Premixed Flames

    Science.gov (United States)

    2012-04-01

    Both flame lengths shrink and large-scale disruptions occur downstream, with vortex shedding carrying reaction zones. [...] the flame structure changes dramatically for both the regular and the open-slit V-gutter [...] reduces the flame length. Qualitatively, however, the open-slit V-gutter appears to be more sensitive than the regular V-gutter. Both flames remain [...]

  19. High Performance Hydrogen/Bromine Redox Flow Battery for Grid-Scale Energy Storage

    Energy Technology Data Exchange (ETDEWEB)

    Cho, KT; Ridgway, P; Weber, AZ; Haussener, S; Battaglia, V; Srinivasan, V

    2012-01-01

    The electrochemical behavior of a promising hydrogen/bromine redox flow battery is investigated for grid-scale energy-storage application with some of the best redox-flow-battery performance results to date, including a peak power of 1.4 W/cm^2 and a 91% voltaic efficiency at 0.4 W/cm^2 constant-power operation. The kinetics of bromine on various materials is discussed, with both rotating-disk-electrode and cell studies demonstrating that a carbon porous electrode for the bromine reaction can deliver platinum-comparable performance as long as sufficient surface area is realized. The effect of flow-cell designs and operating temperature is examined, and ohmic and mass-transfer losses are decreased by utilizing a flow-through electrode design and increasing cell temperature. Charge/discharge and discharge-rate tests also reveal that this system has highly reversible behavior and good rate capability. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.018211jes] All rights reserved.
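    The voltaic-efficiency figure quoted above follows directly from the ratio of discharge to charge voltage at matched current density. A minimal sketch, with illustrative voltages (only the 91% figure comes from the abstract):

```python
# Voltaic efficiency is the ratio of mean discharge voltage to mean charge
# voltage at the same operating point. The 1.00 V / 1.10 V pair below is
# an illustrative assumption, not a value from the paper.

def voltaic_efficiency(v_discharge: float, v_charge: float) -> float:
    """Voltaic efficiency = discharge voltage / charge voltage."""
    return v_discharge / v_charge

eta = voltaic_efficiency(1.00, 1.10)
print(f"voltaic efficiency: {eta:.1%}")  # ~90.9%, near the reported 91%
```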

  20. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Background: Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings: We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions: ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.

  1. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization to the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.
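    The closed-form subproblems mentioned above can be illustrated with a simple alternating scheme. This is a sketch of the general idea, not the paper's exact algorithm: it assumes the objective ||X - UV||_F^2 + λ tr(V L V^T) with a graph Laplacian L, whose V-update is a Sylvester equation:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Sketch: alternate closed-form updates for
#   min_{U,V} ||X - U V||_F^2 + lam * tr(V L V^T),
# where L is a graph Laplacian encoding the data manifold.
# V-update: (U^T U) V + lam * V L = U^T X  (a Sylvester equation).
# U-update: ordinary least squares. Chain-graph L is an illustrative choice.

rng = np.random.default_rng(0)
n, m, r, lam = 20, 30, 3, 0.1
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank-r data

L = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)  # chain-graph Laplacian

U = rng.standard_normal((n, r))
for _ in range(20):
    V = solve_sylvester(U.T @ U, lam * L, U.T @ X)  # closed-form V-step
    U = np.linalg.solve(V @ V.T, V @ X.T).T         # least-squares U-step

err = np.linalg.norm(X - U @ V) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

    Each step solves its subproblem exactly, so the regularized objective decreases monotonically.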

  2. Circadian analysis of large human populations: inferences from the power grid.

    Science.gov (United States)

    Stowie, Adam C; Amicarelli, Mario J; Crosier, Caitlin J; Mymko, Ryan; Glass, J David

    2015-03-01

    Few, if any, studies have focused on the daily rhythmic nature of modern industrialized populations. The present study utilized real-time load data from the U.S. Pacific Northwest electrical power grid as a reflection of human operative household activity. This approach involved actigraphic analyses of continuously streaming internet data (provided in 5 min bins) from a human subject pool of approximately 43 million primarily residential users. Rhythm analyses reveal striking seasonal and intra-week differences in human activity patterns, largely devoid of manufacturing and automated load interference. Length of the diurnal activity period (alpha) is longer during the spring than the summer (16.64 h versus 15.98 h, respectively; p [...]) [...] job-related or other weekday morning arousal cues, substantiating a preference or need to sleep longer on weekends. Finally, a shift in onset time can be seen during the transition to Daylight Saving Time, but not the transition back to Standard Time. The use of grid power load as a means for human actimetry assessment thus offers new insights into the collective diurnal activity patterns of large human populations.
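    A simple proxy for the diurnal activity period (alpha) can be computed from 5-minute load bins by counting how long the load stays above its daily midline. This is a hedged sketch on synthetic data, not the authors' actual rhythm-analysis pipeline:

```python
import numpy as np

# Sketch: estimate alpha from load binned at 5-minute resolution by
# measuring how many hours per day the load exceeds its daily mean.
# The synthetic load below stands in for real residential grid data.

bins_per_day = 24 * 12                      # 5-min bins
t = np.arange(bins_per_day) / 12.0          # time of day in hours
# Synthetic load: low overnight, elevated for a ~15-16 h active period
load = 1.0 + 0.5 * (np.sin(2 * np.pi * (t - 6.0) / 24.0) > -0.45)

def alpha_hours(load: np.ndarray, bins_per_hour: int = 12) -> float:
    """Hours per day the load exceeds its daily mean (a simple alpha proxy)."""
    return float(np.sum(load > load.mean()) / bins_per_hour)

print(f"alpha ≈ {alpha_hours(load):.2f} h")
```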

  3. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    Energy Technology Data Exchange (ETDEWEB)

    Baldick, Ross; Webber, Michael; King, Carey; Garrison, Jared; Cohen, Stuart; Lee, Duehee

    2012-12-21

    This study's objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate up to and over 20% wind generation by 2030. Our specific interests are to look at the factors that will affect the implementation of both high level of wind power penetration (> 20% generation) and installation of large scale storage.

  4. Demagnifying electron projection with grid masks

    International Nuclear Information System (INIS)

    Politycki, A.; Meyer, A.

    1978-01-01

    Tightly toleranced micro- and submicrostructures with smooth edges were realized by using transmission masks with an improved supporting grid (width of traverses 0.8 μm). Local edge shift due to the proximity effect is kept at a minimum. Supporting grids with still narrower traverses (0.5 μm) were prepared by generating the grid pattern by electron beam writing. Masks of this kind allow projection at a demagnification ratio of 1:4, resulting in large image fields. (orig.) [de

  5. Scaling Up Renewable Energy Generation: Aligning Targets and Incentives with Grid Integration Considerations, Greening The Grid

    Energy Technology Data Exchange (ETDEWEB)

    Katz, Jessica; Cochran, Jaquelin

    2015-05-27

    Greening the Grid provides technical assistance to energy system planners, regulators, and grid operators to overcome challenges associated with integrating variable renewable energy into the grid. This document, part of a Greening the Grid toolkit, provides power system planners with tips to help secure and sustain investment in new renewable energy generation by aligning renewable energy policy targets and incentives with grid integration considerations.

  6. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

    The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  7. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.

  8. Soft X-ray Emission from Large-Scale Galactic Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S.; O'Dea, C.; Veilleux, S.

    1998-01-01

    Kiloparsec-scale soft X-ray nebulae extend along the galaxy minor axes in several Seyfert galaxies, including NGC 2992, NGC 4388 and NGC 5506. In these three galaxies, the extended X-ray emission observed in ROSAT HRI images has 0.2-2.4 keV X-ray luminosities of 0.4-3.5 x 10^40 erg s^-1. The X-ray nebulae are roughly co-spatial with the large-scale radio emission, suggesting that both are produced by large-scale galactic outflows. Assuming pressure balance between the radio and X-ray plasmas, the X-ray filling factor is >~ 10^4 times as large as the radio plasma filling factor, suggesting that large-scale outflows in Seyfert galaxies are predominantly winds of thermal X-ray emitting gas. We favor an interpretation in which large-scale outflows originate as AGN-driven jets that entrain and heat gas on kpc scales as they make their way out of the galaxy. AGN- and starburst-driven winds are also possible explanations if the winds are oriented along the rotation axis of the galaxy disk. Since large-scale outflows are present in at least 50 percent of Seyfert galaxies, the soft X-ray emission from the outflowing gas may, in many cases, explain the "soft excess" X-ray feature observed below 2 keV in X-ray spectra of many Seyfert 2 galaxies.

  9. Decision tree ensembles for online operation of large smart grids

    International Nuclear Information System (INIS)

    Steer, Kent C.B.; Wirth, Andrew; Halgamuge, Saman K.

    2012-01-01

    Highlights: ► We present a new technique for the online control of large smart grids. ► We use a Decision Tree Ensemble in a Receding Horizon Controller. ► Decision Trees can approximate online optimisation approaches. ► Decision Trees can make adjustments to their output in real time. ► The new technique outperforms heuristic online optimisation approaches. - Abstract: Smart grids utilise omnidirectional data transfer to operate a network of energy resources. Associated technologies present operators with greater control over system elements and more detailed information on the system state. While these features may improve the theoretical optimal operating performance, determining the optimal operating strategy becomes more difficult. In this paper, we show how a decision tree ensemble or ‘forest’ can produce a near-optimal control strategy in real time. The approach substitutes the decision forest for the simulation–optimisation sub-routine commonly employed in receding horizon controllers. The method is demonstrated on a small and a large network, and compared to controllers employing particle swarm optimisation and evolutionary strategies. For the smaller network the proposed method performs comparably in terms of total energy usage, but delivers a greater demand deficit. On the larger network the proposed method is superior with respect to all measures. We conclude that the method is useful when the time required to evaluate possible strategies via simulation is high.
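    The substitution described above, a tree ensemble standing in for the simulation-optimisation sub-routine of a receding-horizon controller, can be sketched as follows. The "optimal" policy here is a toy function playing the role of the expensive optimiser; all names and numbers are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch: train a decision-tree ensemble offline to imitate the output of a
# slow simulation-optimisation routine, so the online controller can query
# the forest in real time instead of re-running the optimiser.

rng = np.random.default_rng(1)

def slow_optimiser(state: np.ndarray) -> np.ndarray:
    """Stand-in for a simulation-optimisation sub-routine (e.g. PSO)."""
    demand, storage = state[:, 0], state[:, 1]
    return np.clip(demand - 0.5 * storage, 0.0, None)  # toy dispatch rule

# Offline: sample system states and record the optimiser's decisions
states = rng.uniform(0.0, 1.0, size=(2000, 2))
actions = slow_optimiser(states)

forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(states, actions)

# Online: the receding-horizon controller queries the forest directly
test_states = rng.uniform(0.0, 1.0, size=(200, 2))
mae = np.abs(forest.predict(test_states) - slow_optimiser(test_states)).mean()
print(f"mean absolute imitation error: {mae:.4f}")
```

    A forest query costs microseconds, versus seconds or minutes for a simulation-based optimiser, which is what makes real-time adjustment possible.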

  10. Regularized Discriminant Analysis: A Large Dimensional Study

    KAUST Repository

    Yang, Xiaoke

    2018-04-28

    In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals some mathematical relations between the misclassification error and the class statistics, but also can be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on the synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the true statistics are unknown. We benchmark the performance of our proposed consistent estimator against a classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).
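    A minimal RDA classifier in the Friedman style can be sketched as follows, assuming the common two-parameter regularization (lambda blends per-class and pooled covariances; gamma shrinks toward a scaled identity). This is an illustration of the classifier family, not the thesis's asymptotic analysis:

```python
import numpy as np

# Sketch of Friedman-style RDA on a synthetic two-class Gaussian mixture.
# lam=1 gives RLDA-like (shared covariance) behaviour, lam=0 RQDA-like.

rng = np.random.default_rng(2)
p, n = 5, 500
mu = {0: np.zeros(p), 1: 0.8 * np.ones(p)}
X = np.vstack([rng.multivariate_normal(mu[k], np.eye(p), n) for k in (0, 1)])
y = np.repeat([0, 1], n)

def fit_rda(X, y, lam=0.5, gamma=0.1):
    pooled = np.cov(X, rowvar=False)
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        Sk = (1 - lam) * np.cov(Xk, rowvar=False) + lam * pooled
        Sk = (1 - gamma) * Sk + gamma * (np.trace(Sk) / p) * np.eye(p)
        params[k] = (Xk.mean(axis=0), np.linalg.inv(Sk), np.linalg.slogdet(Sk)[1])
    return params

def predict(params, X):
    scores = []
    for m, Sinv, logdet in params.values():
        d = X - m
        # Gaussian discriminant: -1/2 log|S| - 1/2 (x-m)^T S^{-1} (x-m)
        scores.append(-0.5 * logdet - 0.5 * np.einsum("ij,jk,ik->i", d, Sinv, d))
    return np.argmax(np.stack(scores, axis=1), axis=1)

params = fit_rda(X, y)
acc = (predict(params, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

    Sweeping lam and gamma over a grid and picking the pair that minimizes estimated error is the parameter-selection problem the thesis solves analytically.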

  11. Active self-testing noise measurement sensors for large-scale environmental sensor networks.

    Science.gov (United States)

    Domínguez, Federico; Cuong, Nguyen The; Reinoso, Felipe; Touhafi, Abdellah; Steenhaut, Kris

    2013-12-13

    Large-scale noise pollution sensor networks consist of hundreds of spatially distributed microphones that measure environmental noise. These networks provide historical and real-time environmental data to citizens and decision makers and are therefore a key technology to steer environmental policy. However, the high cost of certified environmental microphone sensors render large-scale environmental networks prohibitively expensive. Several environmental network projects have started using off-the-shelf low-cost microphone sensors to reduce their costs, but these sensors have higher failure rates and produce lower quality data. To offset this disadvantage, we developed a low-cost noise sensor that actively checks its condition and indirectly the integrity of the data it produces. The main design concept is to embed a 13 mm speaker in the noise sensor casing and, by regularly scheduling a frequency sweep, estimate the evolution of the microphone's frequency response over time. This paper presents our noise sensor's hardware and software design together with the results of a test deployment in a large-scale environmental network in Belgium. Our middle-range-value sensor (around €50) effectively detected all experienced malfunctions, in laboratory tests and outdoor deployments, with a few false positives. Future improvements could further lower the cost of our sensor below €10.
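    The self-test concept above, sweep the embedded speaker, estimate the microphone's frequency response, compare against a stored baseline, can be sketched as below. The microphone is simulated and the 3 dB threshold is an illustrative assumption, not the paper's calibration procedure:

```python
import numpy as np
from scipy.signal import chirp

# Sketch: play a frequency sweep through the embedded 13 mm speaker, record
# it with the microphone under test, and compare the band-averaged spectral
# magnitude against a healthy baseline to flag degradation.

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
sweep = chirp(t, f0=100, f1=6000, t1=1.0)           # test stimulus

def band_response(signal: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Mean spectral magnitude in n_bands frequency bands (simple proxy)."""
    mag = np.abs(np.fft.rfft(signal))
    return np.array([b.mean() for b in np.array_split(mag, n_bands)])

baseline = band_response(sweep)                      # healthy reference
degraded = band_response(0.2 * sweep)                # e.g. clogged membrane

def is_faulty(resp, baseline, tol_db=3.0):
    """Flag the sensor if any band has dropped more than tol_db below baseline."""
    drop_db = 20 * np.log10(resp / baseline)
    return bool(np.any(drop_db < -tol_db))

print(is_faulty(baseline, baseline), is_faulty(degraded, baseline))  # False True
```

    Scheduling such a sweep regularly lets the network track the evolution of each microphone's response over time, as the deployment in Belgium did.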

  12. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved scale simulations with the volume fractions (cf, af mix). In unresolved scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i(u_i - u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during interfacial deceleration mixing with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved scale simulations and for unresolved scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.
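    The two-step conversion for unresolved-scale simulations (cf → pf by turbulent stirring, pf → af by diffusion) can be sketched with first-order rate equations. The rates and the forward-Euler update are illustrative assumptions, not the paper's closures:

```python
# Sketch: track the unmixed (cf), heterogeneously mixed sub-grid (pf), and
# atomically mixed (af) volume fractions. Transfers are first-order with
# illustrative rates k_turb (cf -> pf) and k_diff (pf -> af); the discrete
# transfers conserve the total volume fraction exactly.

k_turb, k_diff, dt = 2.0, 0.5, 0.01
cf, pf, af = 1.0, 0.0, 0.0           # start fully unmixed

for _ in range(1000):                 # simple forward-Euler time stepping
    d_cp = k_turb * cf * dt           # cf -> pf (turbulent stirring)
    d_pa = k_diff * pf * dt           # pf -> af (diffusion)
    cf -= d_cp
    pf += d_cp - d_pa
    af += d_pa

total = cf + pf + af
print(f"cf={cf:.3f} pf={pf:.3f} af={af:.3f} total={total:.3f}")
```

    Only af enters the reactivity, so the lag introduced by the intermediate pf stage is what distinguishes this model from a single-step diffusive conversion.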

  13. Carbon footprint reductions via grid energy storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Hale, Trevor S. [Naval Facilities Engineering Service Center, 1100 23rd Avenue, Port Hueneme, CA 93043 (United States); Department of Management, Marketing, and Business Administration, University of Houston - Downtown, Houston, Texas (United States); Weeks, Kelly [Department of Maritime Administration, Texas A&M University at Galveston, Galveston, TX 77553 (United States); Tucker, Coleman [Department of Management, Marketing, and Business Administration, University of Houston - Downtown, Houston, Texas 77002 (United States)

    2011-07-01

    This effort presents a framework for reducing carbon emissions through the use of large-scale grid-energy-storage (GES) systems. The specific questions under investigation herein are as follows: Is it economically sound to invest in a GES system and is the system at least carbon footprint neutral? This research will show the answer to both questions is in the affirmative. Scilicet, when utilized judiciously, grid energy storage systems can be both net present value positive as well as be total carbon footprint negative. The significant contribution herein is a necessary and sufficient condition for achieving carbon footprint reductions via grid energy storage systems.
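    The paper's two questions can be illustrated with a toy calculation: an NPV check on the investment, and a simple carbon-neutrality condition (charging emissions, off-peak intensity divided by round-trip efficiency, must not exceed the emissions of the peak generation displaced). All numbers are hypothetical, not from the paper:

```python
# Sketch of the two tests, with illustrative (hypothetical) figures.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

capex = -1_000_000.0                       # illustrative installed cost
annual_revenue = 180_000.0                 # illustrative arbitrage margin
flows = [capex] + [annual_revenue] * 10    # 10-year life
print(f"NPV at 8%: {npv(flows, 0.08):,.0f}")

def carbon_neutral(i_offpeak, i_peak, efficiency):
    """True if stored off-peak energy displaces at least its own emissions
    (t CO2/MWh charged / round-trip efficiency <= t CO2/MWh displaced)."""
    return i_offpeak / efficiency <= i_peak

# e.g. 0.3 t/MWh wind-heavy off-peak mix, 0.65 t/MWh peaking plant, 75% RTE
print(carbon_neutral(0.3, 0.65, 0.75))  # True
```

    The inequality inside `carbon_neutral` is one simple way to express the kind of necessary-and-sufficient condition the paper derives; storing carbon-intensive off-peak energy with a lossy system can easily fail it.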

  14. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    Science.gov (United States)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally been shown to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics.
The TRG algorithm is driven by the
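    The storage-distribution step, bin cells by topographic index, give each class a capacity scaled by one parameter, and let the saturated classes produce fast runoff, can be sketched as below. The class construction, the capacity rule, and all parameter values are illustrative assumptions, not the TRG algorithm's exact formulation:

```python
import numpy as np

# Sketch: distribute storage capacity over topographic-index classes within
# a large-scale grid cell. High-index (wet) classes get less capacity, so
# they saturate first; cells in saturated classes produce fast runoff.

rng = np.random.default_rng(3)
topo_index = rng.gamma(shape=2.0, scale=2.0, size=10_000)    # per-cell index

n_classes, scale_param = 10, 40.0                            # mm (one parameter)
edges = np.quantile(topo_index, np.linspace(0, 1, n_classes + 1))
class_of = np.clip(np.searchsorted(edges, topo_index) - 1, 0, n_classes - 1)

# Capacity per class: proportional to how far the class centre sits below
# the top of the index range, scaled by the single parameter.
centers = 0.5 * (edges[:-1] + edges[1:])
capacity = scale_param * (edges[-1] - centers) / (edges[-1] - edges[0])

storage = 15.0                                               # mean storage, mm
saturated = capacity[class_of] < storage
print(f"saturated (fast-runoff) fraction: {saturated.mean():.2f}")
```

    Because the capacities come from the numerically derived index distribution, different basin parts automatically get different storage-distribution curves.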

  15. Local Fitting of the Kohn-Sham Density in a Gaussian and Plane Waves Scheme for Large-Scale Density Functional Theory Simulations.

    Science.gov (United States)

    Golze, Dorothea; Iannuzzi, Marcella; Hutter, Jürg

    2017-05-09

    A local resolution-of-the-identity (LRI) approach is introduced in combination with the Gaussian and plane waves (GPW) scheme to enable large-scale Kohn-Sham density functional theory calculations. In GPW, the computational bottleneck is typically the description of the total charge density on real-space grids. Introducing the LRI approximation, the linear scaling of the GPW approach with respect to system size is retained, while the prefactor for the grid operations is reduced. The density fitting is an O(N) scaling process implemented by approximating the atomic pair densities by an expansion in one-center fit functions. The computational cost for the grid-based operations becomes negligible in LRIGPW. The self-consistent field iteration is up to 30 times faster for periodic systems dependent on the symmetry of the simulation cell and on the density of grid points. However, due to the overhead introduced by the local density fitting, single point calculations and complete molecular dynamics steps, including the calculation of the forces, are effectively accelerated by up to a factor of ∼10. The accuracy of LRIGPW is assessed for different systems and properties, showing that total energies, reaction energies, intramolecular and intermolecular structure parameters are well reproduced. LRIGPW yields also high quality results for extended condensed phase systems such as liquid water, ice XV, and molecular crystals.

  16. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs in a configuration approaching the final size. Large-scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems has been emulated. The authors present a brief overview of the online system structure, its components and the large-scale integration tests and their results

  17. A Development of Lightweight Grid Interface

    International Nuclear Information System (INIS)

    Iwai, G; Kawai, Y; Sasaki, T; Watase, Y

    2011-01-01

    In order to support rapid development of Grid/Cloud aware applications, we have developed an API to abstract the distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications to access distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLI) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionalities. These CLIs of the UGAPI offer typical functionalities required by end users for job management and file access to the different distributed computing infrastructures as well as local computing resources. We have also built a web interface for the particle therapy simulation and demonstrated a large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.

  18. Wingridder - an interactive grid generator for TOUGH2

    International Nuclear Information System (INIS)

    Pan, Lehua

    2003-01-01

    The TOUGH (Transport Of Unsaturated Groundwater and Heat) family of codes has great flexibility in handling the variety of grid information required to describe a complex subsurface system. However, designing and generating such a grid can be a tedious and error-prone process, especially when the number of cells and connections is very large. WinGridder, a user-friendly, efficient, and effective grid-generation package, has been developed for designing, generating, and visualizing (at various spatial scales) numerical grids used in reservoir simulations and groundwater modeling studies. It can save mesh files for the TOUGH family of codes, and can also output additional grid information for various purposes in either graphic or plain-text format. It has a user-friendly graphical user interface, along with easy-to-use interactive design and plotting tools. Many important features, such as inclined faults and offsets, layering structure, local refinements, and embedded engineering structures, can be represented in the grid.
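To see why hand-building such grids is error-prone, consider the bookkeeping for even a small regular grid: every cell needs a unique name and every interior face a connection record. The sketch below is a generic illustration of that bookkeeping, not WinGridder's algorithm; a real TOUGH2 mesh (ELEME/CONNE blocks) additionally carries cell volumes, nodal distances, and interface areas.

```python
def build_grid(nx, ny, dx=1.0, dy=1.0):
    """Enumerate cells and face connections for an nx-by-ny areal grid.
    Returns cell names and (cell_a, cell_b, spacing) connection tuples."""
    cells = [f"C{i:02d}{j:02d}" for i in range(nx) for j in range(ny)]
    conns = []
    for i in range(nx):
        for j in range(ny):
            if i + 1 < nx:   # east-west neighbour
                conns.append((f"C{i:02d}{j:02d}", f"C{i+1:02d}{j:02d}", dx))
            if j + 1 < ny:   # north-south neighbour
                conns.append((f"C{i:02d}{j:02d}", f"C{i:02d}{j+1:02d}", dy))
    return cells, conns

cells, conns = build_grid(3, 3)
print(len(cells), len(conns))   # 9 cells, 12 internal connections
```

Even this trivial case yields 12 connection records for 9 cells; the counts grow as roughly 2·nx·ny, which is why automated generation and consistency checking matter at field scale.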

  19. McRunjob: A High Energy Physics Workflow Planner for Grid Production Processing

    CERN Document Server

    Graham, G E; Bertram, I; Graham, Gregory E.; Evans, Dave; Bertram, Iain

    2003-01-01

    McRunjob is a powerful grid workflow manager used to manage the generation of large numbers of production processing jobs in High Energy Physics. In use at both the DZero and CMS experiments, McRunjob has been used to manage large Monte Carlo production processing since 1999 and is being extended for use in regular production processing for analysis and reconstruction. Described at CHEP 2001, McRunjob converts core metadata into jobs submittable in a variety of environments. The powerful core metadata description language includes methods for converting the metadata into persistent forms, job descriptions, multi-step workflows, and data provenance information. The language features allow for structure in the metadata by including full expressions, namespaces, functional dependencies, site specific parameters in a grid environment, and ontological definitions. It also has simple control structures for parallelization of large jobs. McRunjob features a modular design which allows for easy expansion to new job d...

  20. Autonomous Energy Grids: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Kroposki, Benjamin D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bernstein, Andrey [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-04

    With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention just a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid as well as take part in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.

  1. Building Grid applications using Web Services

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    There has been a lot of discussion within the Grid community about the use of Web Services technologies in building large-scale, loosely-coupled, cross-organisation applications. In this talk we are going to explore the principles that govern Service-Oriented Architectures and the promise of Web Services technologies for integrating applications that span administrative domains. We are going to see how existing Web Services specifications and practices could provide the necessary infrastructure for implementing Grid applications. Biography Dr. Savas Parastatidis is a Principal Research Associate at the School of Computing Science, University of Newcastle upon Tyne, UK. Savas is one of the authors of the "Grid Application Framework based on Web Services Specifications and Practices" document that was influential in the convergence between Grid and Web Services and the move away from OGSI (more information can be found at http://www.neresc.ac.uk/ws-gaf). He has done research on runtime support for distributed-m...

  2. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo (Norway)

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N{sub par}. One of the main goals of the present paper is to determine how large N{sub par} can be, while still maintaining reasonable computational efficiency; we find that N{sub par} = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
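The cell-by-cell strategy described above can be sketched as a best-first search: keep a frontier of grid cells ordered by likelihood, always map the best frontier cell next, and prune anything that falls below a log-likelihood threshold relative to the peak seen so far. This is a toy reconstruction of the idea, not the Snake code itself; the grid spacing, threshold, and the 2-D Gaussian test likelihood are all illustrative choices.

```python
import heapq, math

def explore(loglike, start, step, threshold):
    """Map a likelihood grid cell by cell in order of decreasing
    likelihood, stopping once cells drop `threshold` below the peak."""
    seen = {start}
    frontier = [(-loglike(start), start)]   # max-heap via negation
    grid, best = {}, -math.inf
    while frontier:
        neg, cell = heapq.heappop(frontier)
        ll = -neg
        best = max(best, ll)
        if ll < best - threshold:           # negligible likelihood: prune
            continue
        grid[cell] = ll
        for dim in range(len(cell)):        # expand axis-aligned neighbours
            for d in (-step, step):
                nb = cell[:dim] + (cell[dim] + d,) + cell[dim + 1:]
                if nb not in seen:
                    seen.add(nb)
                    heapq.heappush(frontier, (-loglike(nb), nb))
    return grid

# 2-D Gaussian log-likelihood with its peak at the origin
ll = lambda p: -0.5 * (p[0]**2 + p[1]**2)
mapped = explore(ll, start=(0.0, 0.0), step=0.5, threshold=4.5)
print(len(mapped))   # number of grid cells actually evaluated and kept
```

Because pruned cells are never expanded, the number of evaluated cells scales with the volume of the high-likelihood region rather than with the full hypercube, which is exactly the escape from the "curse of dimensionality" the abstract describes.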

  3. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  4. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.

  5. Application of Load Compensation in Voltage Controllers of Large Generators in the Polish Power Grid

    Directory of Open Access Journals (Sweden)

    Bogdan Sobczak

    2014-03-01

    Full Text Available The Automatic Voltage Regulator normally controls the generator stator terminal voltage. Load compensation is used to control a voltage which is representative of the voltage at a point either within or external to the generator. In the Polish Power Grid (PPG) compensation is ready to use in every AVR of a large generator, but it is utilized only in the case of generators operating at the same medium-voltage buses. This is similar to most European power grids. A compensator regulating the voltage at a point beyond the machine terminals has significant advantages over the slower secondary Voltage and Reactive Power Control System (ARNE). The compensation stiffens the EHV grid, which leads to improved voltage quality in the distribution grid. This effect may be particularly important in the context of the dynamic development of wind and solar energy.

  6. Performance of R-GMA based grid job monitoring system for CMS data production

    CERN Document Server

    Byrom, Robert; Fisher, Steve M; Grandi, Claudio; Hobson, Peter R; Kyberd, Paul; MacEvoy, Barry; Nebrensky, Jindrich Josef; Tallini, Hugh; Traylen, Stephen

    2004-01-01

    High Energy Physics experiments, such as the Compact Muon Solenoid (CMS) at the CERN laboratory in Geneva, have large-scale data processing requirements, with stored data accumulating at a rate of 1 Gbyte/s. This load comfortably exceeds any previous processing requirements and we believe it may be most efficiently satisfied through Grid computing. Management of large Monte Carlo productions (~3000 jobs) or data analyses and the quality assurance of the results requires careful monitoring and bookkeeping, and an important requirement when using the Grid is the ability to monitor transparently the large number of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. We have previously developed a system allowing us to test its performance under a heavy load while using few real Grid resources. We present the latest results on this system and comp...

  7. Expected Utility and Entropy-Based Decision-Making Model for Large Consumers in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-09-01

    Full Text Available In the smart grid, large consumers can procure electricity energy from various power sources to meet their load demands. To maximize its profit, each large consumer needs to decide its energy procurement strategy under risks such as price fluctuations in the spot market and power quality issues. In this paper, an electric energy procurement decision-making model is studied for large consumers who can obtain their electric energy from the spot market, generation companies under bilateral contracts, the options market and self-production facilities in the smart grid. Considering the effect of unqualified electric energy, the profit model of large consumers is formulated. In order to measure the risks from price fluctuations and power quality, the expected utility and entropy are employed. Consequently, an expected utility and entropy decision-making model is presented, which helps large consumers minimize the expected cost of their electricity procurement while properly limiting the volatility of this cost. Finally, a case study verifies the feasibility and effectiveness of the proposed model.
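The two risk ingredients named above, expected value over price scenarios and the entropy of the scenario distribution, are easy to state concretely. The sketch below is a generic illustration with made-up scenario probabilities and prices, not the paper's actual model or data.

```python
import math

def expected_cost(probabilities, costs):
    """Scenario-weighted cost of a procurement strategy."""
    return sum(p * c for p, c in zip(probabilities, costs))

def entropy(probabilities):
    """Shannon entropy (bits) of the scenario distribution: a proxy for
    how uncertain the decision-maker's price exposure is."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Three hypothetical spot-price scenarios (low / expected / high)
probs = [0.25, 0.50, 0.25]
costs = [80.0, 100.0, 140.0]   # price in $/MWh for a fixed demand block

print(expected_cost(probs, costs))  # 105.0
print(entropy(probs))               # 1.5 bits
```

A decision model of this family would then trade the first quantity off against the second, e.g. minimize expected cost subject to an entropy (uncertainty) cap, across the candidate procurement mixes.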

  8. A Scalable proxy cache for Grid Data Access

    International Nuclear Information System (INIS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Arthur Koeroo, Oscar; Starink, Ronald; Alan Templon, Jeffrey

    2012-01-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.
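The horizontal-scaling-by-redirect idea can be sketched in a few lines: the front end picks a cache node deterministically from the requested path, so repeated requests for the same file land on the same node's cache. This is hypothetical front-end logic for illustration only (the node names are invented); the Nikhef prototype uses a customized webDAV server with a proxy-delegation back end, which is not reproduced here.

```python
import hashlib

# Hypothetical cache-node pool behind the front-end webDAV server
CACHE_NODES = ["cache01.example.org", "cache02.example.org",
               "cache03.example.org"]

def redirect_target(path):
    """Return the URL to place in an HTTP 302 Location header.
    Hashing the path keeps a given file pinned to one node's cache."""
    h = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    node = CACHE_NODES[h % len(CACHE_NODES)]
    return f"https://{node}{path}"
```

Adding capacity then amounts to appending nodes to the pool (in practice with consistent hashing, so existing cache contents are not invalidated wholesale).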

  9. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting have an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and political spheres. In this very position, large-scale research and policy consulting lack of institutional guarantees and rational back-ground guarantee which are characteristic for their sociological environment. This large-scale research can neither deal with the production of innovative goods under consideration of rentability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting knows neither the competence assignment of the political system to make decisions nor can it judge succesfully by the critical standards of the established social science, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting has, in three points, a consequence supporting the thesis which states that this is a new form of institutionalization of science: These are: 1) external control, 2) the organization form, 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  10. Large scale solvothermal synthesis and a strategy to obtain stable Langmuir–Blodgett film of CoFe{sub 2}O{sub 4} nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Thampi, Arya; Babu, Keerthi; Verma, Seema, E-mail: sa.verma@iiserpune.ac.in

    2013-07-05

    Highlights: • Large scale, monodisperse CoFe{sub 2}O{sub 4} nanoparticles by solvothermal route. • LB technique to obtain stable film of CoFe{sub 2}O{sub 4} nanoparticles over a large area. • Hydrophobicity of substrate was enhanced utilizing LB films of cadmium arachidate. • P–A isotherm and AFM cross sectional height profile analysis confirms stability. • Large scale organization of nanoparticles for surface pressure higher than 15 mN/m. -- Abstract: Nearly monodisperse oleic acid coated cobalt ferrite nanoparticles were synthesized in large scale by a simple solvothermal method utilizing N-methyl 2-Pyrrolidone (NMP) as a high boiling solvent. The magnetic oxide was further investigated by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), high resolution transmission electron microscopy (HRTEM) and vibrating sample magnetometer (VSM). Langmuir–Blodgett (LB) technique is discussed to obtain a 2D assembly of oleic acid coated CoFe{sub 2}O{sub 4} nanoparticles over a large area. We describe a method to obtain stable, condensed three layers of cadmium arachidate on a piranha treated glass substrate. The hydrophobic surface thus obtained was subsequently used for forming a stable monolayer of oleic acid stabilized cobalt ferrite nanoparticles at the air–water interface. The stability of the LB films at the air–water interface was studied by pressure–area isotherm curves and atomic force microscopy (AFM) cross sectional height profile analysis. 2D organization of the magnetic nanoparticles at different surface pressures was studied by TEM. Preparation of large area LB films of CoFe{sub 2}O{sub 4} nanoparticles is reported for a surface pressure more than 15 mN/m.

  11. Mass-flux subgrid-scale parameterization in analogy with multi-component flows: a formulation towards scale independence

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2012-11-01

    Full Text Available A generalized mass-flux formulation is presented, which no longer takes a limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which the scale separation is still satisfied, but fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is presented by generalizing the mass-flux formulation under the segmentally-constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from increasing resolutions of operational forecast models without invoking more extensive overhaul of parameterizations.

    The present formulation leads to an analogy of the large-scale atmospheric flow with multi-component flows. This analogy allows a generality of including any subgrid-scale variability into the mass-flux parameterization under SCA. Those include stratiform clouds as well as cold pools in the boundary layer.

    An important finding under the present formulation is that the subgrid-scale quantities are advected by the large-scale velocities characteristic of given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flow as simply defined by the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. As a result, the formulation ensures the lateral interaction of subgrid-scale variability across grid boxes, which is missing in current parameterizations based on vertical one-dimensional models, leading to a reduction of the grid-size dependence of its performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation, as a result, furthermore includes a self-contained description of subgrid-scale momentum transport.
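The decomposition at the heart of such a formulation can be written compactly. Under SCA the grid box is partitioned into subcomponents with fractional areas \(\sigma_i\) that are no longer assumed small; this is a schematic summary in standard mass-flux notation, not a quotation of the paper's equations:

```latex
\overline{\varphi} = \sum_i \sigma_i \,\varphi_i ,
\qquad \sum_i \sigma_i = 1 ,
\qquad
\overline{w\varphi} = \sum_i \sigma_i \, w_i \,\varphi_i ,
```

where \(\varphi_i\) and \(w_i\) are the segmentally-constant value and vertical velocity of subcomponent \(i\), and the overbar denotes the grid-box average. The classical mass-flux parameterization is recovered in the limit \(\sigma_i \to 0\) for the convective components against a dominant environment.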

    The main purpose of the present paper

  12. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  13. Transcultural Adaptation of GRID Hamilton Rating Scale For Depression (GRID-HAMD) to Brazilian Portuguese and Evaluation of the Impact of Training Upon Inter-Rater Reliability.

    Science.gov (United States)

    Henrique-Araújo, Ricardo; Osório, Flávia L; Gonçalves Ribeiro, Mônica; Soares Monteiro, Ivandro; Williams, Janet B W; Kalali, Amir; Alexandre Crippa, José; Oliveira, Irismar Reis De

    2014-07-01

    The GRID-HAMD is a semi-structured interview guide developed to overcome flaws in the HAM-D, and has been incorporated into an increasing number of studies. The objectives were to carry out the transcultural adaptation of the GRID-HAMD into Brazilian Portuguese, evaluate the inter-rater reliability of the instrument and the impact of training upon this measure, and survey the raters' opinions of the instrument. The transcultural adaptation was conducted using appropriate methodology. Inter-rater reliability was measured by way of videos that were evaluated by 85 professionals before and after training in the use of the instrument. The intraclass correlation coefficient (ICC) remained between 0.76 and 0.90 for the GRID-HAMD-21 and between 0.72 and 0.91 for the GRID-HAMD-17. Training did not have an impact on the ICC, except for a few groups of participants with a lower level of experience. Most of the participants showed high acceptance of the GRID-HAMD compared to other versions of the HAM-D. The scale presented adequate inter-rater reliability even before training, and the GRID-HAMD received favorable opinions from most of the participants.

  14. Control for large scale demand response of thermostatic loads

    DEFF Research Database (Denmark)

    Totu, Luminita Cristiana; Leth, John; Wisniewski, Rafal

    2013-01-01

    Demand response is an important Smart Grid concept that aims at facilitating the integration of volatile energy resources into the electricity grid. This paper considers a residential demand response scenario and specifically looks into the problem of managing a large number of thermostat-based appliances with on/off operation. The objective is to reduce the consumption peak of a group of loads composed of both flexible and inflexible units. The power-flexible units are the thermostat-based appliances. We discuss a centralized, model predictive approach and a distributed structure with a randomized...
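The kind of load population this concerns can be illustrated with a minimal hysteresis model; all constants below are made up for illustration and are not from the paper. Each unit cools while on, drifts toward ambient while off, and switches state at a deadband around its setpoint. The aggregate on-count of many such units is the quantity a demand-response controller would shape.

```python
import random

def step(temp, on, t_amb=30.0, setpoint=22.0, band=1.0,
         cool_rate=1.5, leak=0.1):
    """One time step of a hypothetical on/off cooling unit with a
    hysteresis deadband of setpoint +/- band."""
    temp += leak * (t_amb - temp) - (cool_rate if on else 0.0)
    if temp > setpoint + band:
        on = True                     # too warm: compressor switches on
    elif temp < setpoint - band:
        on = False                    # cool enough: compressor switches off
    return temp, on

random.seed(1)
# 1000 units with randomized initial temperatures and states
units = [(random.uniform(21.0, 23.0), random.random() < 0.5)
         for _ in range(1000)]
for _ in range(50):                   # let the population settle
    units = [step(t, on) for t, on in units]
load = sum(on for _, on in units)     # number of units drawing power now
```

With randomized initial phases the duty cycles interleave and the aggregate load sits well below the installed capacity; synchronizing or desynchronizing these switching phases is precisely the lever that centralized or randomized dispatch strategies act on.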

  15. Synthesizing large-scale pyroclastic flows: Experimental design, scaling, and first results from PELE

    Science.gov (United States)

    Lube, G.; Breard, E. C. P.; Cronin, S. J.; Jones, J.

    2015-03-01

    Pyroclastic flow eruption large-scale experiment (PELE) is a large-scale facility for experimental studies of pyroclastic density currents (PDCs). It is used to generate high-energy currents involving 500-6500 m^3 of natural volcanic material and air that achieve velocities of 7-30 m s^-1, flow thicknesses of 2-4.5 m, and runouts of >35 m. The experimental PDCs are synthesized by a controlled "eruption column collapse" of ash-lapilli suspensions onto an instrumented channel. The first set of experiments are documented here and used to elucidate the main flow regimes that influence PDC dynamic structure. Four phases are identified: (1) mixture acceleration during eruption column collapse, (2) column-slope impact, (3) PDC generation, and (4) ash cloud diffusion. The currents produced are fully turbulent flows and scale well to natural PDCs including small to large scales of turbulent transport. PELE is capable of generating short, pulsed, and sustained currents over periods of several tens of seconds, and dilute surge-like PDCs through to highly concentrated pyroclastic flow-like currents. The surge-like variants develop a basal <0.05 m thick regime of saltating/rolling particles and shifting sand waves, capped by a 2.5-4.5 m thick, turbulent suspension that grades upward to lower particle concentrations. Resulting deposits include stratified dunes, wavy and planar laminated beds, and thin ash cloud fall layers. Concentrated currents segregate into a dense basal underflow of <0.6 m thickness that remains aerated. This is capped by an upper ash cloud surge (1.5-3 m thick) with 10^0 to 10^-4 vol % particles. Their deposits include stratified, massive, normally and reversely graded beds, lobate fronts, and laterally extensive veneer facies beyond channel margins.

  16. Wide-Scale Adoption of Photovoltaic Energy

    DEFF Research Database (Denmark)

    Yang, Yongheng; Enjeti, Prasad; Blaabjerg, Frede

    2015-01-01

    Current grid standards largely require that low-power (e.g., several kilowatts) single-phase photovoltaic (PV) systems operate at unity power factor (PF) with maximum power point tracking (MPPT), and disconnect from the grid under grid faults by means of islanding detection. However, in the case of wide-scale penetration of single-phase PV systems in the distributed grid, disconnection under grid faults can contribute to 1) voltage flickers, 2) power outages, and 3) system instability. This article explores grid code modifications for a wide-scale adoption of PV systems in the distribution grid. In addition, based on the fact that Italy and Japan have recently undertaken a major review of standards for PV power conversion systems connected to low-voltage networks, the importance of low voltage ride-through (LVRT) for single-phase PV power systems under grid faults is considered, along with three...

  17. RACORO continental boundary layer cloud investigations: 1. Case study development and ensemble large-scale forcings

    Science.gov (United States)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-01

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
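Fitting a measured number size distribution to a lognormal, as mentioned above, reduces each aircraft sample to two parameters: a geometric mean diameter and a geometric standard deviation. The sketch below is a generic count-weighted moment fit with invented bin values, not the RACORO analysis code.

```python
import math

def fit_lognormal(diameters, counts):
    """Count-weighted moment fit of a lognormal number size distribution.
    Returns (geometric mean diameter, geometric standard deviation)."""
    n = sum(counts)
    logs = [math.log(d) for d in diameters]
    mu = sum(c * x for c, x in zip(counts, logs)) / n
    var = sum(c * (x - mu) ** 2 for c, x in zip(counts, logs)) / n
    return math.exp(mu), math.exp(math.sqrt(var))

# Hypothetical binned counts; diameters in nm
d_gm, sigma_g = fit_lognormal([50, 100, 200], [25, 50, 25])
```

The fitted pair (plus total number concentration) is all a model needs to reconstruct the mode, which is why lognormal fits are the standard "concise representation" for aerosol input to LES and single-column models.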

  18. RACORO Continental Boundary Layer Cloud Investigations: 1. Case Study Development and Ensemble Large-Scale Forcings

    Science.gov (United States)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; et al.

    2015-01-01

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, kappa, are derived from observations to be approximately 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. 
The cases developed are available to the general modeling community for studying continental boundary
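The lognormal representation of aerosol size distributions mentioned above can be sketched as a simple least-squares fit; the synthetic spectrum, mode parameters, and function names below are illustrative stand-ins, not RACORO values:

```python
import numpy as np
from scipy.optimize import curve_fit

def dN_dlnD(D, N, Dg, sigma_g):
    """Single-mode lognormal number size distribution dN/dlnD."""
    return (N / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(D / Dg) / np.log(sigma_g)) ** 2))

# Synthetic 'measured' spectrum standing in for the aircraft data.
D = np.logspace(-2, 0, 60)                       # diameter, micrometres
obs = dN_dlnD(D, N=800.0, Dg=0.08, sigma_g=1.6)  # total number, mode diameter, geometric std

# Fit the three lognormal parameters for concise representation in a model.
(N_fit, Dg_fit, sg_fit), _ = curve_fit(dN_dlnD, D, obs, p0=[500.0, 0.1, 1.5])
```

A multi-mode distribution would simply sum several such terms, fitting three parameters per mode.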

  19. Modeling and power system stability of VSC-HVDC systems for grid-connection of large offshore windfarms

    Energy Technology Data Exchange (ETDEWEB)

    Xue Yijing [Vestas China, Beijing (China)]; Akhmatov, Vladislav [Technical Univ. of Denmark, Lyngby (Denmark). Centre for Electric Technology]

    2009-07-01

    Utilization of Voltage Source Converter (VSC) - High Voltage Direct Current (HVDC) systems for grid-connection of large offshore windfarms becomes relevant as installed power capacities as well as distances to the connection points of on-land transmission systems increase. At the same time, the grid code requirements of the Transmission System Operators (TSO), including ancillary system services and Low-Voltage Fault-Ride-Through (LVFRT) capability of large offshore windfarms, become more demanding. This paper presents a general-level model of, and a LVFRT solution for, a VSC-HVDC system for grid-connection of large offshore windfarms. The VSC-HVDC model is implemented using a general approach of independent control of active and reactive power in normal operation. The on-land VSC inverter, i.e. a grid-side inverter, provides voltage support to the transmission system and comprises a LVFRT solution in short-circuit faults. The presented model, LVFRT solution and impact on system stability are investigated as a case study of a 1,000 MW offshore windfarm grid-connected through a VSC-HVDC system. The investigation is carried out on a model of the west Danish, with some elements of the north German, 400 kV, 220 kV and 150 kV transmission systems, stage 2005-2006, using the DIgSILENT PowerFactory simulation program. In the investigation, a thermal power plant just south of the Danish border has been substituted by this 1,000 MW offshore windfarm utilizing the VSC-HVDC system. The investigation has shown that the substitution of a thermal power plant by a VSC-HVDC connected offshore windfarm should not have any negative impact on the short-term stability of the west Danish transmission system. The investigation should be repeated applying updated system model stages and offshore wind power commissioning schedules in the North and Baltic Seas. (orig.)

  20. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    Science.gov (United States)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in the industry and research community, it also attracts more attention at the customer level. The large number of users and the high frequency of job requests in the consumer market make supporting such applications challenging. The current Client/Server (C/S)-based architectures will become infeasible for supporting large-scale Grid applications due to their poor scalability and poor fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture realizing a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  1. Grid computing the European Data Grid Project

    CERN Document Server

    Segal, B; Gagliardi, F; Carminati, F

    2000-01-01

    The goal of this project is the development of a novel environment to support globally distributed scientific exploration involving multi- PetaByte datasets. The project will devise and develop middleware solutions and testbeds capable of scaling to handle many PetaBytes of distributed data, tens of thousands of resources (processors, disks, etc.), and thousands of simultaneous users. The scale of the problem and the distribution of the resources and user community preclude straightforward replication of the data at different sites, while the aim of providing a general purpose application environment precludes distributing the data using static policies. We will construct this environment by combining and extending newly emerging "Grid" technologies to manage large distributed datasets in addition to computational elements. A consequence of this project will be the emergence of fundamental new modes of scientific exploration, as access to fundamental scientific data is no longer constrained to the producer of...

  2. The social costs and benefits of Smart Grids; Maatschappelijke kosten en baten van Intelligente Netten

    Energy Technology Data Exchange (ETDEWEB)

    Blom, M.J.; Bles, M.; Leguijt, C.; Rooijers, F.J. [CE Delft, Delft (Netherlands); Van Gerwen, R.; Van Hameren, D.; Verheij, F. [KEMA, Arnhem (Netherlands)

    2012-01-15

    Although a reliable social cost-benefit analysis (SCBA) of large-scale introduction of Smart Grids is currently lacking, the rough-and-ready estimates available give the impression that the results of such an exercise would be positive. In the latter half of 2011 and in early 2012 a number of large-scale trials to explore Smart Grids were initiated in the Netherlands: the so-called 'Pilots Smart Grids'. To identify and quantify the social costs and benefits associated with Smart Grids and to assess whether large-scale introduction is indeed desirable, an SCBA needed to be conducted. An additional question concerned the issues to be considered when rolling out large-scale trials aimed at improving the economic impact of Smart Grids in the Netherlands. The key question posed by the commissioning party, the Dutch Ministry of Economic Affairs, Agriculture and Innovation, was to identify and quantify the costs and benefits, both direct and indirect, of a national roll-out of Smart Grids. The present report deals with Phase 1 of the SCBA. [Translated from Dutch] The construction of intelligent electricity grids - also known as smart grids - makes a positive contribution to the future energy supply. It leads to lower prices for consumers and businesses. This is the conclusion of a social cost-benefit analysis (SCBA) carried out by CE Delft and KEMA on behalf of the Ministry of Economic Affairs, Agriculture and Innovation. The analysis covers several energy scenarios up to 2050. In virtually every future scenario smart grids make a positive contribution, with or without climate policy, and with a large or small share of decentralized energy production, among other variations. This is mainly due to the expected behavioural change of users in response to variable energy tariffs, and to cost savings in grid construction and electricity production. The recently started pilot projects for Smart Grids will make clear

  3. OGC and Grid Interoperability in enviroGRIDS Project

    Science.gov (United States)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, while Grid oriented technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues they introduce (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one. 
In these use cases we give special attention to issues such as: the relations between computational grid and

  4. Large-acceptance-angle gridded analyzers in an axial magnetic field

    International Nuclear Information System (INIS)

    Molvik, A.W.

    1981-01-01

    Electrostatic retarding-potential gridded analyzers have been used to measure the current and the axial energy distributions of ions escaping along magnetic field lines in the 2XIIB magnetic mirror fusion experiment at Lawrence Livermore National Laboratory (LLNL). Three analyzers are discussed: a large scanning analyzer with a movable entrance aperture that can measure ion or electron losses from a different segment of the plasma diameter on each shot, a smaller analyzer that mounts in 5-cm-diam ports, and a multicollector analyzer that can continuously measure losses from the entire plasma diameter

  5. ARC Cache: A solution for lightweight Grid sites in ATLAS

    CERN Document Server

    Garonne, Vincent; The ATLAS collaboration

    2016-01-01

    Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with less operational overhead. ARC CE is a widely-used and mature Grid middleware which was designed from the start to be used on sites with no persistent storage element. Instead, it maintains a local self-managing cache of data which retains popular data for future jobs. As the cache is simply an area on a local posix shared filesystem with no external-facing service, it requires no extra maintenance. The cache can be scaled up as required by increasing the size of the filesystem or adding new filesystems. This paper describes how ARC CE and its cache are an ideal solution for lightweight Grid sites in the ATLAS experiment, and the integr...

  6. Formation of Virtual Organizations in Grids: A Game-Theoretic Approach

    Science.gov (United States)

    Carroll, Thomas E.; Grosu, Daniel

    The execution of large scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.

  7. Impacts of large-scale offshore wind farm integration on power systems through VSC-HVDC

    DEFF Research Database (Denmark)

    Liu, Hongzhi; Chen, Zhe

    2013-01-01

    The potential of offshore wind energy has been commonly recognized and explored globally. Many countries have implemented and planned offshore wind farms to meet their increasing electricity demands and public environmental appeals, especially in Europe. With relatively less space limitation......, an offshore wind farm could have a capacity rating to hundreds of MWs or even GWs that is large enough to compete with conventional power plants. Thus the impacts of a large offshore wind farm on power system operation and security should be thoroughly studied and understood. This paper investigates...... the impacts of integrating a large-scale offshore wind farm into the transmission system of a power grid through VSC-HVDC connection. The concerns are focused on steady-state voltage stability, dynamic voltage stability and transient angle stability. Simulation results based on an exemplary power system...

  8. Towards Decentralization : A Topological Investigation of the Medium and Low Voltage Grids

    NARCIS (Netherlands)

    Pagani, Giuliano Andrea; Aiello, Marco

    2011-01-01

    The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce

  9. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  10. Structural Emergency Control for Power Grids

    DEFF Research Database (Denmark)

    Vu, Thanh Long; Chatzivasileiadis, Spyros; Chiang, Hsiao-Dong

    2017-01-01

    In this paper, we introduce a structural emergency control to render post-fault dynamics of power systems from the critical fault-cleared state to a stable equilibrium point (EP). Theoretically, this is a new control paradigm that does not rely on any continuous measurement or load shedding......, as in the classical setup. Instead, the grid is made stable by intentionally changing the power network structure, and thereby, discretely relocating the EP and its stability region such that the system is consecutively driven from fault-cleared state through a set of EPs to the desired EP. The proposed control...... is designed by solving convex optimization problems, making it possibly scalable to large-scale power grids. In the practical side, the proposed control can be implemented by exploiting the FACTS devices that will be widely available on the grids, and hence, requiring minor investment....

  11. A regularized vortex-particle mesh method for large eddy simulation

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Walther, Jens Honore; Hejlesen, Mads Mølholm

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green’s function...... solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier Stokes equations, hence we use the method for Large Eddy...

  12. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  13. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site...

  14. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration; Medrano Llamas, R; Sciacca, G; Van der Ster, D C

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate si...

  15. Next generation molten NaI batteries for grid scale energy storage

    Science.gov (United States)

    Small, Leo J.; Eccleston, Alexis; Lamb, Joshua; Read, Andrew C.; Robins, Matthew; Meaders, Thomas; Ingersoll, David; Clem, Paul G.; Bhavaraju, Sai; Spoerke, Erik D.

    2017-08-01

    Robust, safe, and reliable grid-scale energy storage continues to be a priority for improved energy surety, expanded integration of renewable energy, and greater system agility required to meet modern dynamic and evolving electrical energy demands. We describe here a new sodium-based battery based on a molten sodium anode, a sodium iodide/aluminum chloride (NaI/AlCl3) cathode, and a high conductivity NaSICON (Na1+xZr2SixP3-xO12) ceramic separator. This NaI battery operates at intermediate temperatures (120-180 °C) and boasts an energy density of >150 Wh kg-1. The energy-dense NaI-AlCl3 ionic liquid catholyte avoids lifetime-limiting plating and intercalation reactions, and the use of earth-abundant elements minimizes materials costs and eliminates economic uncertainties associated with lithium metal. Moreover, the inherent safety of this system under internal mechanical failure is characterized by negligible heat or gas production and benign reaction products (Al, NaCl). Scalability in design is exemplified through evolution from 0.85 to 10 Ah (28 Wh) form factors, displaying lifetime average Coulombic efficiencies of 99.45% and energy efficiencies of 81.96% over dynamic testing lasting >3000 h. This demonstration promises a safe, cost-effective, and long-lifetime technology as an attractive candidate for grid scale storage.

  16. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    Science.gov (United States)

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
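The CLIME reduction mentioned above amounts to one small linear program per column of the precision matrix. A minimal sketch of that reduction, using scipy's generic LP solver rather than fastclime's parametric simplex, with a toy identity "sample covariance", might look like:

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, j, lam):
    """One CLIME column: min ||b||_1  s.t.  ||S b - e_j||_inf <= lam.
    Splitting b = u - v with u, v >= 0 gives a standard-form LP."""
    p = S.shape[0]
    e = np.zeros(p)
    e[j] = 1.0
    c = np.ones(2 * p)                    # objective sum(u) + sum(v) = ||b||_1
    A = np.block([[S, -S], [-S, S]])      # encodes |S(u - v) - e_j| <= lam
    b = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * p), method="highs")
    return res.x[:p] - res.x[p:]

def clime(S, lam):
    """Column-wise sparse precision-matrix estimate (symmetrization step omitted)."""
    return np.column_stack([clime_column(S, j, lam) for j in range(S.shape[0])])

S = np.eye(3)              # toy 'sample covariance'
Omega = clime(S, lam=0.1)  # diagonal shrinks to 1 - lam = 0.9
```

The parametric simplex method in the package traces the whole solution path in lam at once, which is what makes the full regularization path cheap; the sketch above solves only a single lam.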

  17. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. 
A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4
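A toy illustration of the two ingredients discussed above, Tikhonov filtering via the SVD and an L-curve corner search, might look like the following; the discrete-curvature corner estimate is a rough stand-in for the Lanczos-bidiagonalization approximation used in the study, and the system here is illustrative:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """x = argmin ||Ax - b||^2 + lam^2 ||x||^2, via SVD filter factors
    f_i = s_i^2 / (s_i^2 + lam^2) that damp noise-dominated directions."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

def lcurve_corner(A, b, lams):
    """Pick lam at the L-curve corner: the point of maximum curvature of
    (log residual norm, log solution norm) over the candidate values."""
    rho, eta = [], []
    for lam in lams:
        x = tikhonov_svd(A, b, lam)
        rho.append(np.log(np.linalg.norm(A @ x - b) + 1e-30))
        eta.append(np.log(np.linalg.norm(x) + 1e-30))
    rho, eta = np.array(rho), np.array(eta)
    dr, de = np.gradient(rho), np.gradient(eta)
    curv = dr * np.gradient(de) - de * np.gradient(dr)
    curv /= (dr**2 + de**2) ** 1.5 + 1e-30
    return lams[int(np.argmax(np.abs(curv)))]

A = np.array([[2.0, 0.0], [0.0, 1.0]])   # toy system, not GRACE normal equations
b = np.array([2.0, 1.0])
x0 = tikhonov_svd(A, b, 1e-8)            # ~unregularized: recovers [1, 1]
lam_star = lcurve_corner(A, b, np.logspace(-6, 1, 25))
```

The degree-and-order-dependent constraint described in the abstract corresponds to replacing the scalar lam^2 identity term with a diagonal regularization matrix.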

  18. The cosmic web in CosmoGrid void regions

    NARCIS (Netherlands)

    Rieder, Steven; van de Weygaert, Rien; Cautun, Marius; Beygu, Burcu; Portegies Zwart, Simon

    2016-01-01

    We study the formation and evolution of the cosmic web, using the high-resolution CosmoGrid ΛCDM simulation. In particular, we investigate the evolution of the large-scale structure around void halo groups, and compare this to observations of the VGS-31 galaxy group, which consists of three

  19. GridPix detectors: Production and beam test results

    International Nuclear Information System (INIS)

    Koppert, W.J.C.; Bakel, N. van; Bilevych, Y.; Colas, P.; Desch, K.; Fransen, M.; Graaf, H. van der; Hartjes, F.; Hessey, N.P.; Kaminski, J.; Schmitz, J.; Schön, R.; Zappon, F.

    2013-01-01

    The innovative GridPix detector is a Time Projection Chamber (TPC) that is read out with a Timepix-1 pixel chip. By using wafer post-processing techniques an aluminium grid is placed on top of the chip. When operated, the electric field between the grid and the chip is sufficient to create electron induced avalanches which are detected by the pixels. The time-to-digital converter (TDC) records the drift time enabling the reconstruction of high precision 3D track segments. Recently GridPixes were produced on full wafer scale, to meet the demand for more reliable and cheaper devices in large quantities. In a recent beam test the contribution of both diffusion and time walk to the spatial and angular resolutions of a GridPix detector with a 1.2 mm drift gap are studied in detail. In addition long term tests show that in a significant fraction of the chips the protection layer successfully quenches discharges, preventing harm to the chip

  20. GridPix detectors: Production and beam test results

    Science.gov (United States)

    Koppert, W. J. C.; van Bakel, N.; Bilevych, Y.; Colas, P.; Desch, K.; Fransen, M.; van der Graaf, H.; Hartjes, F.; Hessey, N. P.; Kaminski, J.; Schmitz, J.; Schön, R.; Zappon, F.

    2013-12-01

    The innovative GridPix detector is a Time Projection Chamber (TPC) that is read out with a Timepix-1 pixel chip. By using wafer post-processing techniques an aluminium grid is placed on top of the chip. When operated, the electric field between the grid and the chip is sufficient to create electron induced avalanches which are detected by the pixels. The time-to-digital converter (TDC) records the drift time enabling the reconstruction of high precision 3D track segments. Recently GridPixes were produced on full wafer scale, to meet the demand for more reliable and cheaper devices in large quantities. In a recent beam test the contribution of both diffusion and time walk to the spatial and angular resolutions of a GridPix detector with a 1.2 mm drift gap are studied in detail. In addition long term tests show that in a significant fraction of the chips the protection layer successfully quenches discharges, preventing harm to the chip.

  1. How to build a high-performance compute cluster for the Grid

    CERN Document Server

    Reinefeld, A

    2001-01-01

    The success of large-scale multi-national projects like the forthcoming analysis of the LHC particle collision data at CERN relies to a great extent on the ability to efficiently utilize computing and data-storage resources at geographically distributed sites. Currently, much effort is spent on the design of Grid management software (Datagrid, Globus, etc.), while the effective integration of computing nodes has been largely neglected up to now. This is the focus of our work. We present a framework for a high- performance cluster that can be used as a reliable computing node in the Grid. We outline the cluster architecture, the management of distributed data and the seamless integration of the cluster into the Grid environment. (11 refs).

  2. Interpolation from Grid Lines: Linear, Transfinite and Weighted Method

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2017-01-01

    When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...
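Of the three methods evaluated above, the transfinite (Coons-patch) interpolant inside a single grid cell can be sketched as follows; the edge functions here are illustrative, not the paper's data:

```python
def transfinite(u, v, left, right, bottom, top):
    """Coons-patch (transfinite) interpolation inside one grid cell.
    left(v), right(v), bottom(u), top(u) are the intensities measured along
    the four grid lines bounding the cell; the corner terms avoid double
    counting where the edges meet."""
    return ((1 - u) * left(v) + u * right(v)
            + (1 - v) * bottom(u) + v * top(u)
            - ((1 - u) * (1 - v) * left(0) + u * (1 - v) * right(0)
               + (1 - u) * v * left(1) + u * v * right(1)))

# Edges sampled from the plane f(x, y) = x + y; the interpolant reproduces it.
left, right = (lambda v: v), (lambda v: 1.0 + v)
bottom, top = (lambda u: u), (lambda u: 1.0 + u)
val = transfinite(0.3, 0.7, left, right, bottom, top)   # -> 1.0
```

Unlike plain bilinear interpolation from the four corners, this uses the full intensity profiles along all four bounding grid lines, which is exactly the extra information the line-scan acquisition provides.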

  3. Bi-functional TiO2 cemented Ag grid under layer for enhancing the photovoltaic performance of a large-area dye-sensitized solar cell

    International Nuclear Information System (INIS)

    Lan Zhang; Wu Jihuai; Lin Jianming; Huang, Miaoliang

    2012-01-01

    Graphical abstract: Enhanced photovoltaic performance of a large-area DSSC with conductive grids in the photo and counter electrodes. Highlights: ► TiO2-protected Ag grids are made for use as electrodes in large-area DSSCs. ► The electrode has high conductivity and low internal resistance. ► The TiO2-protected Ag grid electrode avoids iodine corrosion in the electrolyte. ► The TiO2 layer also plays a blocking-layer role. ► The above factors enhance the photovoltaic performance of the large-area DSSC. - Abstract: A bi-functional TiO2 cemented Ag grid under layer for enhancing the photovoltaic performance of a large-area dye-sensitized solar cell (DSSC) is prepared in a simple way. The conductive printing paste contains micro-sized Ag powders and a nano-sized TiO2 cementing agent. The paste can be well cemented on the FTO glass and forms highly conductive grids, with the Ag powders sintered together by the nano-sized TiO2 particles. The formed conductive grid is protected with a TiO2 thin layer and a TiO2 sol treatment to avoid iodine corrosion. The addition of the TiO2 cemented conductive grid decreases the internal resistance of the large-area dye-sensitized solar cell when it is prepared in the photo and counter electrodes. Furthermore, the protecting TiO2 thin layer and the TiO2 sol treatment can be applied over the whole area of the large-area photo electrode so that they also act as a blocking under layer, which further enhances the photovoltaic performance of the large-area dye-sensitized solar cell.

  4. The LHC Computing Grid in the starting blocks

    CERN Multimedia

    Danielle Amy Venton

    2010-01-01

    As the Large Hadron Collider ramps up operations and breaks world records, it is an exciting time for everyone at CERN. To get the computing perspective, the Bulletin this week caught up with Ian Bird, leader of the Worldwide LHC Computing Grid (WLCG). He is confident that everything is ready for the first data.   The metallic globe illustrating the Worldwide LHC Computing GRID (WLCG) in the CERN Computing Centre. The Worldwide LHC Computing Grid (WLCG) collaboration has been in place since 2001 and for the past several years it has continually run the workloads for the experiments as part of their preparations for LHC data taking. So far, the numerous and massive simulations of the full chain of reconstruction and analysis software could only be carried out using Monte Carlo simulated data. Now, for the first time, the system is starting to work with real data and with many simultaneous users accessing them from all around the world. “During the 2009 large-scale computing challenge (...

  5. Large-scale research in the Federal Republic of Germany. Pt. 4

    International Nuclear Information System (INIS)

    Mock, W.

    1986-01-01

    The name is misleading: in the biggest of 13 large-scale research institutions, the KFA Nuclear Research Centre Juelich, nuclear research is now only one sphere of activities among many, besides other areas of research such as computer science, materials, and environmental research. This change in the areas of main emphasis constitutes the successful attempt - or so it seems up to now - of a 'research dinosaur' to respond to the demands of an altered 'research landscape'. (orig.) [de]

  6. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    Directory of Open Access Journals (Sweden)

    L. Gong

    2011-08-01

    Full Text Available World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. 
The TRG algorithm
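    The storage-distribution idea described above can be sketched in a few lines. This is a deliberately simplified toy, not the published TRG algorithm: the class layout, the single scaling parameter and all numbers are our own illustrative assumptions.

```python
# Toy sketch of a TOPMODEL-style storage-capacity distribution, loosely
# following the TRG idea: capacity spans the *range* of the topographic
# index and is scaled by a single parameter. All names are hypothetical.

def storage_capacity(topo_index, scale=1.0):
    """Assign each topographic-index class a storage capacity
    proportional to its distance from the wettest (highest-index)
    class, scaled by one parameter."""
    ti_max = max(topo_index)
    span = (ti_max - min(topo_index)) or 1.0
    # Wet areas (high index) saturate first -> small capacity.
    return [scale * (ti_max - ti) / span for ti in topo_index]

def saturated_fraction(topo_index, mean_storage, scale=1.0):
    """Fraction of a grid cell whose capacity is already filled at a
    given average storage; this fraction produces fast runoff."""
    caps = storage_capacity(topo_index, scale)
    return sum(1 for c in caps if c <= mean_storage) / len(caps)

ti = [2.0, 4.0, 6.0, 8.0, 10.0]      # topographic index of 5 sub-areas
print(saturated_fraction(ti, 0.3))   # only the wettest areas saturate
```

    With low average storage only the high-index (wet) classes contribute fast runoff; raising the storage saturates the whole cell.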

  7. A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows

    Science.gov (United States)

    Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert

    1996-01-01

    The objective of this work was to enhance the predictive capability of widely used computational fluid dynamics (CFD) codes through the use of solution-adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature-detection algorithm be developed to recognize flow structures of different types as well as differing intensity, and to adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers, and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure are presented which detect and resolve these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach 2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
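    The redistribution step of such a procedure can be illustrated with a one-dimensional equidistribution sketch: nodes are moved so that each interval carries an equal share of the integrated weight. The weight values below are invented; the paper's actual weight function detects shocks, shear layers and expansions automatically.

```python
# Minimal 1-D weight-function mesh redistribution (equidistribution).
# The piecewise-constant weight w is a stand-in for a feature detector.
import bisect

def redistribute(x, w, n_new):
    """Return n_new+1 node positions equidistributing the weight w
    (len(w) == len(x) - 1) over the intervals of the grid x."""
    # Cumulative "mass" of the weight over the original intervals.
    cum = [0.0]
    for i in range(len(w)):
        cum.append(cum[-1] + w[i] * (x[i + 1] - x[i]))
    total = cum[-1]
    new_x = []
    for k in range(n_new + 1):
        target = total * k / n_new
        j = min(bisect.bisect_left(cum, target), len(cum) - 1)
        if j == 0:
            new_x.append(x[0])
        else:
            # Linear interpolation inside original interval j-1.
            frac = (target - cum[j - 1]) / (cum[j] - cum[j - 1])
            new_x.append(x[j - 1] + frac * (x[j] - x[j - 1]))
    return new_x

# Uniform grid, weight peaked in the middle -> nodes cluster there.
x = [i / 10 for i in range(11)]
w = [1.0] * 4 + [10.0, 10.0] + [1.0] * 4
print(redistribute(x, w, 10))
```

    The high-weight region around x = 0.5 attracts most of the redistributed nodes, which is exactly the behaviour wanted near a shock or shear layer.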

  8. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 4. Fractals and the Large-Scale Structure in the Universe - Is the Cosmological Principle Valid? A K Mittal T R Seshadri. General Article Volume 7 Issue 4 April 2002 pp 39-47 ...

  9. Software Toolchain for Large-Scale RE-NFA Construction on FPGA

    Directory of Open Access Journals (Sweden)

    Yi-Hua E. Yang

    2009-01-01

    and O(n×m) memory by our software. A large number of RE-NFAs are placed onto a two-dimensional staged pipeline, allowing scalability to thousands of RE-NFAs with linear area increase and little clock rate penalty due to scaling. On a PC with a 2 GHz Athlon64 processor and 2 GB memory, our prototype software constructs hundreds of RE-NFAs used by Snort in less than 10 seconds. We also designed a benchmark generator which can produce RE-NFAs with configurable pattern complexity parameters, including state count, state fan-in, and loop-back and feed-forward distances. Several regular expressions with various complexities are used to test the performance of our RE-NFA construction software.
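    The core RE-to-NFA construction that such a toolchain automates is the classic Thompson construction. The sketch below is a minimal software-only illustration (the pattern is given in postfix form to avoid writing a parser); it does not reflect the paper's FPGA pipeline mapping.

```python
# Thompson-style RE -> epsilon-NFA construction and simulation sketch.
# Postfix input: '.' = concatenation, '|' = alternation, '*' = star.

def postfix_to_nfa(postfix):
    """Build an epsilon-NFA as (start, accept, transitions), where
    transitions maps state -> [(symbol_or_None, next_state), ...]."""
    trans = {}
    counter = [0]
    def new_state():
        s = counter[0]; counter[0] += 1; trans[s] = []
        return s
    stack = []
    for c in postfix:
        if c == '.':                       # concatenation
            f2 = stack.pop(); f1 = stack.pop()
            trans[f1[1]].append((None, f2[0]))
            stack.append((f1[0], f2[1]))
        elif c == '|':                     # alternation
            f2 = stack.pop(); f1 = stack.pop()
            s, a = new_state(), new_state()
            trans[s] += [(None, f1[0]), (None, f2[0])]
            trans[f1[1]].append((None, a)); trans[f2[1]].append((None, a))
            stack.append((s, a))
        elif c == '*':                     # Kleene star
            f = stack.pop()
            s, a = new_state(), new_state()
            trans[s] += [(None, f[0]), (None, a)]
            trans[f[1]] += [(None, f[0]), (None, a)]
            stack.append((s, a))
        else:                              # literal symbol
            s, a = new_state(), new_state()
            trans[s].append((c, a))
            stack.append((s, a))
    return stack[0][0], stack[0][1], trans

def matches(nfa, text):
    """Simulate the NFA on text (full match)."""
    start, accept, trans = nfa
    def closure(states):
        todo, seen = list(states), set(states)
        while todo:
            st = todo.pop()
            for sym, nxt in trans[st]:
                if sym is None and nxt not in seen:
                    seen.add(nxt); todo.append(nxt)
        return seen
    current = closure({start})
    for ch in text:
        current = closure({nxt for st in current
                           for sym, nxt in trans[st] if sym == ch})
    return accept in current

nfa = postfix_to_nfa('ab|c*.')   # infix equivalent: (a|b)c*
print(matches(nfa, 'accc'), matches(nfa, 'ca'))   # prints: True False
```

    In the hardware version each such NFA becomes a block of flip-flops and match logic; here the set-of-states simulation plays the same role in software.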

  10. THE MASS-LOSS RETURN FROM EVOLVED STARS TO THE LARGE MAGELLANIC CLOUD. IV. CONSTRUCTION AND VALIDATION OF A GRID OF MODELS FOR OXYGEN-RICH AGB STARS, RED SUPERGIANTS, AND EXTREME AGB STARS

    International Nuclear Information System (INIS)

    Sargent, Benjamin A.; Meixner, M.; Srinivasan, S.

    2011-01-01

    To measure the mass loss from dusty oxygen-rich (O-rich) evolved stars in the Large Magellanic Cloud (LMC), we have constructed a grid of models of spherically symmetric dust shells around stars with constant mass-loss rates using 2Dust. These models will constitute the O-rich model part of the 'Grid of Red supergiant and Asymptotic giant branch star ModelS' (GRAMS). This model grid explores four parameters: stellar effective temperature from 2100 K to 4700 K; luminosity from 10^3 to 10^6 L_sun; dust shell inner radii of 3, 7, 11, and 15 R_star; and 10.0 μm optical depth from 10^-4 to 26. From an initial grid of ∼1200 2Dust models, we create a larger grid of ∼69,000 models by scaling to cover the luminosity range required by the data. These models are available online to the public. The matching in color-magnitude diagrams and color-color diagrams to observed O-rich asymptotic giant branch (AGB) and red supergiant (RSG) candidate stars from the SAGE and SAGE-Spec LMC samples and a small sample of OH/IR stars is generally very good. The extreme AGB star candidates from SAGE are more consistent with carbon-rich (C-rich) than O-rich dust composition. Our model grid suggests lower limits to the mid-infrared colors of the dustiest AGB stars for which the chemistry could be O-rich. Finally, the fitting of GRAMS models to spectral energy distributions of sources fit by other studies provides additional verification of our grid and anticipates future, more expansive efforts.
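    The grid expansion by luminosity scaling mentioned above can be sketched as follows, under the simplifying assumption that the emergent flux scales linearly with luminosity at fixed shell parameters. Field names and factors are invented placeholders, not 2Dust output.

```python
# Hedged sketch: expand a small base model grid by luminosity scaling,
# assuming flux scales linearly with luminosity at fixed parameters.

def expand_grid(base_models, lum_factors):
    """base_models: list of dicts with 'lum' and 'flux' entries.
    Returns scaled copies covering the extra luminosities."""
    grid = []
    for m in base_models:
        for f in lum_factors:
            grid.append({'lum': m['lum'] * f, 'flux': m['flux'] * f})
    return grid

base = [{'lum': 1e3, 'flux': 0.5}, {'lum': 1e4, 'flux': 4.0}]
grid = expand_grid(base, [1.0, 2.0, 5.0])
print(len(grid))   # 2 base models x 3 factors = 6 models
```

    Scaling an initial ~1200-model grid by ~58 luminosity factors in this way would yield a grid of the quoted ~69,000 models.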

  11. Smart Grid: Network simulator for smart grid test-bed

    International Nuclear Information System (INIS)

    Lai, L C; Ong, H S; Che, Y X; Do, N Q; Ong, X J

    2013-01-01

    As the smart grid becomes more popular, a smaller-scale smart grid test-bed has been set up at UNITEN to investigate the performance and to identify future enhancements of the smart grid in Malaysia. The fundamental requirement in this project is to design a network with low delay, no packet drops, and a high data rate. Each type of traffic has its own characteristics and is suitable for different types of network and requirements. However, the nature of traffic in a smart grid is not yet well understood. This paper presents a comparison between different types of traffic to find the most suitable traffic for optimal network performance.

  12. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    Science.gov (United States)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  13. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    International Nuclear Information System (INIS)

    Andreeva, J; Dzhunov, I; Karavakis, E; Kokoszkiewicz, L; Nowotka, M; Saiz, P; Tuckett, D

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  14. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the faultless operation of large scale integration (LSI) and very large scale integration (VLSI) circuits. The article provides a comparative analysis of the factors that determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless operation of LSI and VLSI circuits. The main part describes a proposed algorithm and program for analysing the fault rate in LSI and VLSI circuits.

  15. Large-deviation properties of resilience of power grids

    International Nuclear Information System (INIS)

    Dewenter, Timo; Hartmann, Alexander K

    2015-01-01

    We study the distributions of the resilience of power flow models against transmission line failures via a so-called backup capacity. We consider three ensembles of random networks, and in addition, the topology of the British transmission power grid. The three ensembles are Erdős–Rényi random graphs, Erdős–Rényi random graphs with a fixed number of links, and spatial networks where the nodes are embedded in a two-dimensional plane. We numerically investigate the probability density functions (pdfs) down to the tails to gain insight into very resilient and very vulnerable networks. This is achieved via large-deviation techniques, which allow us to study very rare values that occur with probability densities below 10^−160. We find that the right tail of the pdfs towards larger backup capacities follows an exponential with a strong curvature. This is confirmed by the rate function, which approaches a limiting curve for increasing network sizes. Very resilient networks are basically characterized by a small diameter and a large power sign ratio. In addition, networks can typically be made more resilient by adding more links. (paper)
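    Probability densities as small as 10^−160 are only reachable with biased sampling. As a standalone illustration of the idea (not the paper's Markov-chain method, and unrelated to the power-grid model itself), the sketch below estimates a tail probability of order 3e-7 for a Gaussian sample mean by exponential tilting; naive sampling with the same budget would almost surely return zero.

```python
# Importance sampling by exponential tilting: estimate
# P(mean of n iid N(0,1) > a) by drawing each variate from N(a,1)
# and reweighting by the likelihood ratio exp(-a*S + n*a^2/2).
import math, random

def tilted_tail_prob(n, a, samples, rng):
    total = 0.0
    for _ in range(samples):
        s = sum(rng.gauss(a, 1.0) for _ in range(n))
        if s / n > a:
            # Likelihood ratio of the target density to the tilted one.
            total += math.exp(-a * s + n * a * a / 2)
    return total / samples

rng = random.Random(1)
est = tilted_tail_prob(25, 1.0, 20000, rng)
exact = 0.5 * math.erfc(math.sqrt(25) / math.sqrt(2))  # P(Z > 5)
print(est, exact)
```

    With the tilt centred exactly on the rare event, a few thousand samples already reproduce the analytic answer to a few percent; the same mechanism, driven by a Markov chain over network realisations, reaches the far smaller densities quoted in the abstract.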

  16. Job scheduling in Grid batch farms

    International Nuclear Information System (INIS)

    Gellrich, Andreas

    2014-01-01

    We present here a study of a scheduler which cooperates with the queueing system TORQUE and is tailored to the needs of a HEP-dominated large Grid site with around 10000 job slots. Triggered by severe scaling problems of MAUI, a scheduler, referred to as MYSCHED, was developed and put into operation. We discuss conceptual aspects as well as experiences after almost two years of running.

  17. Transfrontier Macroseismic Data Exchange in Europe: Intensity Assessment of M>4 Earthquakes by a Grid Cell Approach

    Science.gov (United States)

    Van Noten, K.; Lecocq, T.; Sira, C.; Hinzen, K. G.; Camelbeeck, T.

    2016-12-01

    In the US, the USGS is the only institute that gathers macroseismic data through its online "Did You Feel It?" (DYFI) system, allowing a homogeneous and consistent intensity assessment. In Europe, however, we face a much more complicated situation. As almost every nation has its own inquiry in its national language(s), and both the EMSC and the USGS run an international DYFI inquiry, responses to transfrontier-felt seismic events in Europe are strongly fragmented across different institutes. To make a realistic ground motion intensity assessment, macroseismic databases need to be merged in a consistent way, thereby dealing with duplicated responses, different intensity calculations and legal issues (observers' privacy). In this presentation, we merge macroseismic datasets by a grid cell approach. Instead of using the irregularly shaped, arbitrary municipal boundaries, we structure the model area into (100 km²) grid cells and assign an intensity value to each grid cell based on all institutional (geocoded) responses in that cell. The resulting macroseismic grid cell distribution shows a less subjective and more homogeneous intensity distribution than the classic community distribution, even though fewer data points are used after geocoding the participants' locations. The method is demonstrated on the 2011 ML 4.3 (MW 3.7) Goch (Germany) and the 2015 ML 4.2 (MW 3.7) Ramsgate (UK) earthquakes, both felt in NW Europe. Integration of the data results in a non-circular distribution in which the felt area extends significantly more in the E-W than in the N-S direction, illustrating a low-pass filtering effect due to the south-to-north increasing thickness of cover sediments above the regional London-Brabant Massif. Ground motions were amplified and attenuated at places with a shallow and deep basement, respectively. To a large extent, the shape of the attenuation model derived through the grid cell intensity points is rather similar to the Atkinson and Wald (2007) CEUS prediction.
The attenuation
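    The grid cell aggregation idea can be sketched as a simple binning step. This toy works on an already-projected plane with kilometre coordinates and takes a plain mean per cell; the study itself works on geographic coordinates and applies proper macroseismic intensity assessment rather than averaging.

```python
# Bin geocoded "felt" reports into fixed-size cells (10 km x 10 km
# gives the 100 km^2 cells of the abstract) and assign one intensity
# value per cell. Coordinates and intensities below are invented.
from collections import defaultdict

def grid_cell_intensities(reports, cell_km=10.0):
    """reports: iterable of (x_km, y_km, intensity). Returns a dict
    mapping (i, j) cell indices to the mean intensity in that cell."""
    cells = defaultdict(list)
    for x, y, inten in reports:
        cells[(int(x // cell_km), int(y // cell_km))].append(inten)
    return {ij: sum(v) / len(v) for ij, v in cells.items()}

reports = [(3.0, 4.0, 5), (7.5, 2.0, 4),    # same 10-km cell
           (12.0, 1.0, 3)]                   # neighbouring cell
print(grid_cell_intensities(reports))
```

    Because every institute's geocoded responses fall into the same fixed cells, duplicated municipal boundaries and differing national aggregation schemes drop out of the comparison.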

  18. The influence of control parameter estimation on large scale geomorphological interpretation of pointclouds

    Science.gov (United States)

    Dorninger, P.; Koma, Z.; Székely, B.

    2012-04-01

    In recent years, laser scanning, also referred to as LiDAR, has proved to be an important tool for topographic data acquisition. Basically, laser scanning acquires a more or less homogeneously distributed point cloud. These points represent all natural objects like terrain and vegetation as well as man-made objects such as buildings, streets, powerlines, or other constructions. Due to the enormous amount of data provided by current scanning systems, which capture up to several hundred thousand points per second, the immediate application of such point clouds for large-scale interpretation and analysis is often prohibitive due to restrictions of the hardware and software infrastructure. To overcome this, numerous methods for the determination of derived products exist. Commonly, Digital Terrain Models (DTM) or Digital Surface Models (DSM) are derived to represent the topography using a regular grid as the data structure. The obvious advantages are a significant reduction of the amount of data and the introduction of an implicit neighborhood topology, enabling the application of efficient post-processing methods. The major disadvantages are the loss of 3D information (i.e. overhangs) as well as the loss of information due to the interpolation approach used. We introduced a segmentation approach enabling the determination of planar structures within a given point cloud. It was originally developed for the purpose of building modeling but has proven to be well suited for large-scale geomorphological analysis as well. The result is an assignment of the original points to a set of planes. Each plane is represented by its plane parameters. Additionally, numerous quality and quantity parameters are determined (e.g. aspect, slope, local roughness, etc.). In this contribution, we investigate the influence of the control parameters required for the plane segmentation on the geomorphological interpretation of the derived product. The respective control parameters may be determined
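    Once a segment's points are known, the per-plane attributes mentioned above (slope, local roughness) follow from a plane fit. The sketch below fits z = a·x + b·y + c by least squares; the segmentation step that groups points into planes is omitted, and all names are ours, not the authors' implementation.

```python
# Least-squares plane fit for one point segment, plus slope and an
# RMS-residual "local roughness". Pure stdlib: the 3x3 normal
# equations are solved by Gaussian elimination.
import math

def solve3(A, b):
    """Solve a 3x3 system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c]
                              for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_plane(points):
    """Fit z = a*x + b*y + c; return (a, b, c), slope in degrees and
    the RMS residual used here as "local roughness"."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    a, b, c = solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]],
                     [sxz, syz, sz])
    slope = math.degrees(math.atan(math.hypot(a, b)))
    rms = math.sqrt(sum((a * x + b * y + c - z) ** 2
                        for x, y, z in points) / n)
    return (a, b, c), slope, rms

pts = [(0, 0, 2), (1, 0, 3), (0, 1, 2), (1, 1, 3), (2, 1, 4)]
plane, slope, rough = fit_plane(pts)
print(round(slope, 1), round(rough, 6))   # exact plane: 45.0 slope, ~0 roughness
```

    Note that fitting z as a function of (x, y) cannot represent overhangs; a production method would fit the plane in full 3D (e.g. via the covariance eigenvectors), which is one reason the segmentation keeps the 3D point cloud.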

  19. Bio-Inspired Cyber Security for Smart Grid Deployments

    Energy Technology Data Exchange (ETDEWEB)

    McKinnon, Archibald D.; Thompson, Seth R.; Doroshchuk, Ruslan A.; Fink, Glenn A.; Fulp, Errin W.

    2013-05-01

    Smart grid technologies are transforming the electric power grid into a grid with bi-directional flows of both power and information. Operating millions of new smart meters and smart appliances will significantly impact electric distribution systems, resulting in greater efficiency. However, the scale of the grid and the new types of information transmitted will potentially introduce several security risks that cannot be addressed by traditional, centralized security techniques. We propose a new bio-inspired cyber security approach. Social insects, such as ants and bees, have developed complex-adaptive systems that emerge from the collective application of simple, light-weight behaviors. The Digital Ants framework is a bio-inspired framework that uses mobile light-weight agents. Sensors within the framework use digital pheromones to communicate with each other and to alert each other of possible cyber security issues. All communication and coordination is both localized and decentralized, thereby allowing the framework to scale across the large numbers of devices that will exist in the smart grid. Furthermore, the sensors are light-weight and therefore suitable for implementation on devices with limited computational resources. This paper will provide a brief overview of the Digital Ants framework and then present results from test-bed-based demonstrations that show that Digital Ants can identify a cyber attack scenario against smart meter deployments.

  20. Large-scale weather dynamics during the 2015 haze event in Singapore

    Science.gov (United States)

    Djamil, Yudha; Lee, Wen-Chien; Tien Dat, Pham; Kuwata, Mikinori

    2017-04-01

    The 2015 haze event in South East Asia is widely considered the period of the worst air quality in the region in more than a decade. The source of the haze was forest and peatland fires on the Sumatra and Kalimantan Islands, Indonesia. The fires mostly resulted from the forest-clearance practice known as slash and burn, used to convert land to palm oil plantations. Although such clearance occurs seasonally, in 2015 it was made worse by the impact of a strong El Niño. The long period of drier atmosphere over the region due to El Niño made the fires easier to ignite and spread and more difficult to stop. The biomass emissions from the forest and peatland fires caused large-scale haze pollution on both islands, which spread further into neighbouring countries such as Singapore and Malaysia. In Singapore, for about two months (September-October 2015) the air quality was at an unhealthy level. This unfortunate condition caused socioeconomic losses such as school closures, cancellation of outdoor events, health issues and more, with total losses estimated at S$700 million. The unhealthy level of Singapore's air quality is based on the increase of the pollutant standard index (PSI > 120) due to the haze arrival; it even reached a hazardous level (PSI = 300) for several days. PSI is a metric of air quality in Singapore that aggregates six pollutants (SO2, PM10, PM2.5, NO2, CO and O3). In this study, we focus on PSI variability on weekly to biweekly time scales (periodicity < 30 days), since these are the least understood compared to the diurnal and seasonal scales. We have identified three dominant time scales of PSI (approximately 5, 10 and 20 days) using the wavelet method and investigated their large-scale atmospheric structures. The PSI-associated large-scale column moisture horizontal structures over the Indo-Pacific basin are dominated by easterly propagating gyres at synoptic (macro) scale for the 5-day (10- and 20-day) time scales. The propagating gyres manifest as cyclical
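    The timescale identification described above can be illustrated with a naive periodogram (the study used the wavelet method; a plain Fourier sketch is substituted here to stay dependency-free). A synthetic PSI-like series with a built-in 10-day cycle is recovered from its spectrum.

```python
# Find the dominant periodicity of a daily series with a naive DFT.
import cmath, math

def dominant_period(series):
    """Return the period (in samples) of the strongest Fourier mode."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]
    best_k, best_p = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_p:
            best_k, best_p = k, abs(coeff)
    return n / best_k

# Synthetic "PSI" series: baseline 100 with a 10-day oscillation.
psi = [100 + 30 * math.sin(2 * math.pi * t / 10) for t in range(120)]
print(dominant_period(psi))   # -> 10.0
```

    A wavelet transform additionally localises such periodicities in time, which matters for a transient event like a two-month haze episode; the Fourier sketch only recovers the globally dominant scale.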

  1. Characterization of large volume CdZnTe detectors with a quad-grid structure for the COBRA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Rohatsch, Katja [TU Dresden, Institut fuer Kern- und Teilchenphysik, 01069 Dresden (Germany); Collaboration: COBRA-Collaboration

    2016-07-01

    The COBRA experiment uses room-temperature semiconductor detectors made of Cadmium-Zinc-Telluride, which contains several double beta isotopes, to search for neutrinoless double beta decay. To compensate for poor hole transport in CdZnTe, the detectors are equipped with a coplanar grid (CPG) instead of a planar anode. Currently, a demonstrator setup consisting of 64 1-cm³ CPG detectors is in operation at the LNGS in Italy to prove the concept and to determine the long-term stability of the detectors and the instrumentation. For a future large-scale experiment it is planned to use larger CdZnTe detectors with a volume of 6 cm³, because of the better surface-to-volume ratio and the higher full-energy detection efficiency. This will also reduce the background contribution of surface contaminations. Before the installation at the LNGS, the new detector design is validated and studied in detail. This talk presents a laboratory experiment for the characterization with γ-radiation of 6 cm³ CdZnTe quad-grid detectors. The anode of such a detector is divided into four sub-CPGs. The characterization routine consists of the determination of the optimal working point and two-dimensional spatially resolved scans with a highly collimated γ-source.

  2. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Full Text Available Abstract Background The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
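    The scaling exponent referred to above comes from Detrended Fluctuation Analysis (DFA). The standalone sketch below computes a first-order DFA exponent; on an uncorrelated series it should come out near 0.5, while long-range correlated, isochore-like structure would raise it. Implementation details are ours, not the paper's.

```python
# Compact first-order Detrended Fluctuation Analysis (DFA) sketch.
import math, random

def dfa_exponent(series, scales):
    """Return the DFA scaling exponent: slope of log F(s) vs log s,
    where F(s) is the RMS of linearly detrended profile segments."""
    n = len(series)
    mean = sum(series) / n
    # Profile: cumulative sum of the mean-removed series.
    profile, acc = [], 0.0
    for v in series:
        acc += v - mean
        profile.append(acc)
    logs, logF = [], []
    for s in scales:
        rss, count = 0.0, 0
        # Closed-form sums for the linear detrend over i = 0..s-1.
        sx = s * (s - 1) / 2
        sxx = (s - 1) * s * (2 * s - 1) / 6
        for start in range(0, n - s + 1, s):
            seg = profile[start:start + s]
            sy = sum(seg)
            sxy = sum(i * y for i, y in enumerate(seg))
            b = (s * sxy - sx * sy) / (s * sxx - sx * sx)
            a = (sy - b * sx) / s
            rss += sum((y - (a + b * i)) ** 2 for i, y in enumerate(seg))
            count += s
        logs.append(math.log(s))
        logF.append(0.5 * math.log(rss / count))
    # Slope of the log-log regression line.
    m = len(logs)
    mx = sum(logs) / m; my = sum(logF) / m
    return (sum((x - mx) * (y - my) for x, y in zip(logs, logF)) /
            sum((x - mx) ** 2 for x in logs))

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(4096)]
alpha = dfa_exponent(noise, [8, 16, 32, 64, 128])
print(round(alpha, 2))   # white noise -> alpha near 0.5
```

    Applied along a chromosome (e.g. to a numeric GC-content track), local variations of this exponent are what uncover the characteristic scales and isochore-like regions discussed in the abstract.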

  3. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases.

  4. Exploring Transition of Large Technological Systems through Relational Data - A Study of The Danish Smart Grid Development

    DEFF Research Database (Denmark)

    Jurowetzki, Roman

    2016-01-01

    in the transformation process. While they can contribute with resources, capabilities, and their connections to the development of the new grid infrastructure, they may also impede innovation given their ownership of and assumed interest in the established system. These insights should be considered in policy...... transformation of the energy grid infrastructure. The focus is set on how the interplay between established and new technologies and actors determines the direction and outcomes of innovation in large technological systems (such as the Danish smart grid). Results of several chapters indicate that in the Danish......Combining elements from the Science, Technology and Society (STS) tradition with the Technological Innovation System (TIS) framework and utilising unstructured and relational data as well as novel analysis tools, this thesis explores the development of the Danish smart grid and the associated

  5. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with large number of high capacity nodes and transmission links, and shared by a large number of users...

  6. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  7. Large scale and low latency analysis facilities for the CMS experiment: development and operational aspects

    CERN Document Server

    Riahi, Hassen

    2010-01-01

    While a majority of CMS data analysis activities rely on the distributed computing infrastructure on the WLCG Grid, dedicated local computing facilities have been deployed to address particular requirements in terms of latency and scale. The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In order to reach the goal for fast turnaround tasks, the Workload Management group has designed a CRABServer-based system to fit two main needs: to provide a simple, familiar interface to the user (as used in the CRAB Analysis Tool[7]) and to allow an easy transition to the Tier-0 system. While the CRABServer component had been initially designed for Grid analysis by CMS end-users, with a few modifications it turned out to be also a very powerful service to manage and monitor local submissions on the CAF. Tran...

  8. Air Pollution Monitoring and Mining Based on Sensor Grid in London

    OpenAIRE

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-01-01

    In this paper, we present a distributed infrastructure based on wireless sensors network and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environment. The main informatics challenges in respect to constructing the high-throughput sensor Grid are discussed in this paper. We pr...

  9. Control and Optimization Methods for Electric Smart Grids

    CERN Document Server

    Ilić, Marija

    2012-01-01

    Control and Optimization Methods for Electric Smart Grids brings together leading experts in power, control and communication systems,and consolidates some of the most promising recent research in smart grid modeling,control and optimization in hopes of laying the foundation for future advances in this critical field of study. The contents comprise eighteen essays addressing wide varieties of control-theoretic problems for tomorrow’s power grid. Topics covered include: Control architectures for power system networks with large-scale penetration of renewable energy and plug-in vehicles Optimal demand response New modeling methods for electricity markets Control strategies for data centers Cyber-security Wide-area monitoring and control using synchronized phasor measurements. The authors present theoretical results supported by illustrative examples and practical case studies, making the material comprehensible to a wide audience. The results reflect the exponential transformation that today’s grid is going...

  10. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  11. Boosting the adoption and the reliability of renewable energy sources: Mitigating the large-scale wind power intermittency through vehicle to grid technology

    International Nuclear Information System (INIS)

    Zhao, Yang; Noori, Mehdi; Tatari, Omer

    2017-01-01

    The integration of wind energy in the electricity sector and the adoption of electric vehicles in the transportation sector both have the potential to significantly reduce greenhouse gas emissions, individually as well as in tandem with Vehicle-to-Grid technology. This study aims to evaluate the greenhouse gas emission savings of mitigating the intermittency resulting from the introduction of wind power through Vehicle-to-Grid technologies, as well as the extent to which the marginal electricity consumption from charging an electric vehicle fleet may weaken this overall environmental benefit. To this end, comparisons are conducted in seven independent system operator regions. The results indicate that, in most cases, the emission savings of a combination of wind power and Vehicle-to-Grid technology outweigh the additional emissions from marginal electricity generation for electric vehicles. In addition, the fluctuations in newly-integrated wind power could be balanced in the future using EVs and V2G technology, provided that a moderate portion of EV owners are willing to provide V2G services. On the other hand, such a combination is not favorable if the Vehicle-to-Grid service participation rate is less than 5% of all electric vehicle owners within a particular region. - Highlights: • The environmental benefit of vehicle to grid systems as grid stabilizer is analyzed. • Emission savings of vehicle to grid and impacts of electric vehicles are compared. • Seven independent system operator regions are studied. • Uncertainty and sensitivity analyses are performed through Monte Carlo simulation.
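    The break-even logic described in the abstract (the net benefit hinges on the V2G participation rate) can be sketched with a toy Monte Carlo balance. All distributions and coefficients below are hypothetical placeholders, not values from the study:

```python
import random

def net_savings(participation, n_draws=10_000, seed=0):
    """Monte Carlo sketch of net GHG savings (arbitrary units) for a
    wind + V2G system. All distributions are hypothetical placeholders."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        wind_savings = rng.uniform(80, 120)   # avoided grid emissions
        ev_marginal = rng.uniform(30, 50)     # marginal EV-charging emissions
        # balancing benefit grows with the share of EV owners offering V2G
        v2g_benefit = 400 * participation * rng.uniform(0.8, 1.2)
        total += wind_savings - ev_marginal + v2g_benefit
    return total / n_draws

# The study's ~5% break-even threshold is only mimicked here by the
# placeholder coefficients; the qualitative trend is the point.
for p in (0.01, 0.05, 0.20):
    print(f"participation {p:.0%}: mean net savings {net_savings(p):.1f}")
```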

  12. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; et al.

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  13. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: the administrator's workload is heavy, much time must be spent on management and maintenance, and the nodes easily fall into disorder. Thousands of nodes are installed in large machine rooms, so administrators can easily confuse one machine with another. How can accurate management be carried out effectively on a large-scale cluster system? This article introduces ELFms for large-scale cluster systems and, furthermore, proposes how to realize automatic management of such systems. (authors)

  14. Large scale development of wind power. Consequences for the national grid and the need for load balancing; Storskalig utbyggnad av vindkraft. Konsekvenser foer stamnaetet och behovet av reglerkraft

    Energy Technology Data Exchange (ETDEWEB)

    2008-06-15

    Wind power is expected to grow rapidly in Sweden. The existing certificate system gives economic incentives for the development of 17 TWh from renewable energy sources until 2016, compared to the 2002 level. The Swedish Energy Agency estimates that 9 TWh of wind power will be built by 2020, given the present certificate system. However, a new planning goal of 30 TWh of wind energy by 2020 has been proposed by the Agency. It is very important for Svenska Kraftnaet to follow the development in order to take the right actions to adapt the national grid to the increased share of wind power. The total increased need for balancing power is estimated to be: 1 400-1 800 MW for 10 TWh of added wind power, and 4 300-5 300 MW for 30 TWh. About 15% of the increased balancing need must be assigned to automatically frequency-regulating generation. The rest can be made up of sources that can be regulated on a minute or hour scale. The planned wind power risks replacing generation with regulating capacity, and it is important to continuously analyze if and how this happens, and what the consequences will be for the balancing capacity. The socio-economic effects for the national grid include increased investment costs and increased costs for balancing and regulation. Massive expansion in North Sweden is the most costly alternative, with a capitalized cost estimated at 25 000 MSEK (about 4 000 MUSD) for an expansion of 30 TWh of wind power. This can be compared to the estimated investment cost for the wind power expansion of 150 000 MSEK
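    The balancing figures above split with simple arithmetic: roughly 15% of the added need must come from automatically frequency-regulating generation, the rest from minute- or hour-scale reserves. A trivial sketch using the report's MW ranges:

```python
def balancing_split(total_mw, auto_share=0.15):
    """Split an added balancing-power need (MW) into the share that must be
    automatically frequency-regulating (~15% per the report) and the rest,
    which can be regulated on a minute or hour scale."""
    auto = total_mw * auto_share
    return auto, total_mw - auto

# Ranges from the report: 1 400-1 800 MW for 10 TWh, 4 300-5 300 MW for 30 TWh.
for twh, (lo, hi) in {10: (1400, 1800), 30: (4300, 5300)}.items():
    a_lo, _ = balancing_split(lo)
    a_hi, _ = balancing_split(hi)
    print(f"{twh} TWh wind: {a_lo:.0f}-{a_hi:.0f} MW automatic regulation")
```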

  15. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10⁴, and the radius ratio η = ri/ro is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = ‑0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential of using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
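    The static Smagorinsky closure benchmarked above computes an eddy viscosity ν_t = (c_s Δ)² |S̄| from the resolved strain rate. A minimal 2-D sketch (illustrative fields on a uniform grid, not the Taylor-Couette solver):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.1):
    """Eddy viscosity nu_t = (cs*Delta)^2 * |S| for a 2-D field (sketch).
    |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor.
    Arrays are indexed [y, x]; np.gradient returns (d/dy, d/dx)."""
    dudy, dudx = np.gradient(u, dx, dx)
    dvdy, dvdx = np.gradient(v, dx, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag

# Pure shear u = y, v = 0: |S| = 1 everywhere, so nu_t = (cs*dx)^2.
dx = 0.1
y, x = np.meshgrid(np.arange(0, 1, dx), np.arange(0, 1, dx), indexing="ij")
nu_t = smagorinsky_nu_t(y, np.zeros_like(y), dx)
print(nu_t.mean())  # ≈ (0.1 * 0.1)^2 = 1e-4
```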

  16. Scheduling in Heterogeneous Grid Environments: The Effects of Data Migration

    Energy Technology Data Exchange (ETDEWEB)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Smith, Warren

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this goal can be fully realized. One problem critical to the effective utilization of computational grids is efficient job scheduling. Our prior work addressed this challenge by defining a grid scheduling architecture and several job migration strategies. The focus of this study is to explore the impact of data migration under a variety of demanding grid conditions. We evaluate our grid scheduling algorithms by simulating compute servers, various groupings of servers into sites, and inter-server networks, using real workloads obtained from leading supercomputing centers. Several key performance metrics are used to compare the behavior of our algorithms against reference local and centralized scheduling schemes. Results show the tremendous benefits of grid scheduling, even in the presence of input/output data migration - while highlighting the importance of utilizing communication-aware scheduling schemes.
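    The core trade-off the study measures, running a job on its loaded home server versus migrating it (and its input/output data) to a less-loaded one, can be caricatured with a greedy list scheduler. The servers, runtimes, and migration penalty below are invented for illustration, not taken from the paper's simulator:

```python
def greedy_grid_schedule(jobs, servers, migration_cost):
    """Assign each job (home server, runtime in hours) to the server whose
    queue finishes earliest, adding a fixed data-migration penalty when the
    job leaves its home server. Returns per-server finish times."""
    finish = {s: 0.0 for s in servers}
    for home, runtime in jobs:
        best = min(servers, key=lambda s: finish[s] + runtime
                   + (0 if s == home else migration_cost))
        finish[best] += runtime + (0 if best == home else migration_cost)
    return finish

jobs = [("A", 4), ("A", 4), ("A", 4), ("B", 1)]   # server A is overloaded
local = {"A": sum(r for h, r in jobs if h == "A"), "B": 1}
grid = greedy_grid_schedule(jobs, ["A", "B"], migration_cost=0.5)
print("local makespan:", max(local.values()))  # 12
print("grid makespan:", max(grid.values()))    # 8.0 - migration shortens it
```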

  17. Large-scale utilization of wind power in China: Obstacles of conflict between market and planning

    International Nuclear Information System (INIS)

    Zhao Xiaoli; Wang Feng; Wang Mei

    2012-01-01

    The traditional strict planning system that regulates China's power market dominates power industry operations. However, a series of market-oriented reforms since 1997 call for more decentralized decision-making by individual market participants. Moreover, with the rapid growth of wind power in China, the strict planning system has become one of the significant factors curtailing the generation of wind power, which contradicts the original purpose of using the government's strong control abilities to promote wind power development. In this paper, we first present the reasons why market mechanisms are important for large-scale utilization of wind power, using a case analysis of the Northeast Grid; we then illustrate the impact of conflicts between strict planning and market mechanisms on large-scale wind power utilization. Last, we explore how to promote coordination between markets and planning to realize large-scale wind power utilization in China. We argue that important measures include implementing flexible power pricing mechanisms instead of the current fixed pricing approach, formulating a more reasonable mechanism for distributing benefits and costs, and designing an appropriate market structure for large-scale wind power utilization to promote market liquidity and to send clear market equilibrium signals. - Highlights: ► We present the reasons why the market is important for utilization of wind power. ► We discuss the current situation of the conflict between planning and market. ► We study the impact of the conflict between planning and market on wind power output. ► We argue how to promote coordination between market and planning.

  18. QAPgrid: a two level QAP-based approach for large-scale data analysis and visualization.

    Directory of Open Access Journals (Sweden)

    Mario Inostroza-Ponta

    Full Text Available BACKGROUND: The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities", and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. METHODOLOGY/PRINCIPAL FINDINGS: We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. CONCLUSIONS/SIGNIFICANCE: Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities that shows a high degree of correlation with the scores of the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes.
Finally, our Gene Ontology-based study on
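    The QAP objective described in the abstract scores a layout by summing, over all pairs of objects, similarity times grid distance, so that a good layout places similar objects close together. A minimal evaluator (toy similarity matrix; the paper's memetic algorithm that searches for good layouts is not reproduced here):

```python
import itertools

def qap_cost(layout, similarity):
    """layout[i] = (row, col) grid position of object i.
    Cost = sum over pairs of similarity[i][j] * Manhattan distance between
    their positions; similar objects should end up close, so lower is better."""
    cost = 0.0
    for i, j in itertools.combinations(range(len(layout)), 2):
        (ri, ci), (rj, cj) = layout[i], layout[j]
        cost += similarity[i][j] * (abs(ri - rj) + abs(ci - cj))
    return cost

# Three objects: 0 and 1 very similar, 2 unrelated to both.
sim = [[0, 9, 1],
       [9, 0, 1],
       [1, 1, 0]]
good = {0: (0, 0), 1: (0, 1), 2: (1, 1)}   # similar pair adjacent
bad  = {0: (0, 0), 1: (1, 1), 2: (0, 1)}   # similar pair far apart
print(qap_cost(good, sim), "<", qap_cost(bad, sim))  # 12.0 < 20.0
```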

  19. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed in southwest Idaho, and were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement (correlation) between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed the best compromise to achieve: (a) a reduction of computation time and of the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
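    The characteristic length derived from the integral of the autocorrelation can be illustrated on a synthetic 1-D transect (the study works with 2-D remotely sensed fields; this sketch only shows the estimator):

```python
import random

def autocorrelation(x, lag):
    """Sample autocorrelation of a 1-D transect at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / (n - lag)
    return cov / var

def characteristic_length(x, dx):
    """Integral length scale: dx times the trapezoid sum of the
    autocorrelation up to its first zero crossing."""
    total, lag = 0.5, 1          # rho(0) = 1 contributes 0.5
    while lag < len(x) // 2:
        rho = autocorrelation(x, lag)
        if rho <= 0:
            break
        total += rho
        lag += 1
    return total * dx

# Correlated field: running mean of white noise over a 10-sample window,
# so the characteristic length should be on the order of the window size.
rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(2000)]
field = [sum(noise[i:i + 10]) / 10 for i in range(len(noise) - 10)]
print(f"characteristic length ~ {characteristic_length(field, dx=1.0):.1f} samples")
```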

  20. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  1. Failure probability analysis of optical grid

    Science.gov (United States)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure for supporting advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. With optical-network-based distributed computing systems extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analyzing the application failure probability in an optical grid. The failure probability of an entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that clients' differing requirements on failure probability can be satisfied. When an application based on a DAG (directed acyclic graph) is executed under different backup strategies, both the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement while improving network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
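    For a DAG application in which every task must succeed, the application failure probability follows directly from the per-task failure probabilities, and replication-style backup strategies lower it. A minimal sketch (illustrative probabilities, independent failures assumed; not the paper's MDSA algorithm):

```python
def app_failure_probability(task_fail_probs):
    """Application fails if any required task fails:
    P_fail = 1 - prod(1 - p_i), assuming independent task failures."""
    p_ok = 1.0
    for p in task_fail_probs:
        p_ok *= 1.0 - p
    return 1.0 - p_ok

def with_backup(p, replicas):
    """Failure probability of a task replicated on `replicas` independent
    resources: the task fails only if every copy fails."""
    return p ** replicas

tasks = [0.01, 0.02, 0.05]   # per-task failure probabilities (illustrative)
print(app_failure_probability(tasks))                               # ~0.078
print(app_failure_probability([with_backup(q, 2) for q in tasks]))  # much lower
```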

  2. Puzzles of large scale structure and gravitation

    International Nuclear Information System (INIS)

    Sidharth, B.G.

    2006-01-01

    We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters, as well as a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'scaled' quantum-mechanical-like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N. Cantorian small world, Mach's principle and the universal mass network. Chaos, Solitons and Fractals 2004;21(4))

  3. Large-eddy simulation of plume dispersion within regular arrays of cubic buildings

    Science.gov (United States)

    Nakayama, H.; Jurcakova, K.; Nagai, H.

    2011-04-01

    There is a potential problem that hazardous and flammable materials are accidentally or intentionally released within populated urban areas. For the assessment of human health hazard from toxic substances, the existence of high concentration peaks in a plume should be considered. For the safety analysis of flammable gas, certain critical threshold levels should be evaluated. Therefore, in such a situation, not only average levels but also instantaneous magnitudes of concentration should be accurately predicted. In this study, we perform Large-Eddy Simulation (LES) of plume dispersion within regular arrays of cubic buildings with large obstacle densities and investigate the influence of the building arrangement on the characteristics of mean and fluctuating concentrations.

  4. Unstructured grid modelling of offshore wind farm impacts on seasonally stratified shelf seas

    Science.gov (United States)

    Cazenave, Pierre William; Torres, Ricardo; Allen, J. Icarus

    2016-06-01

    Shelf seas comprise approximately 7% of the world's oceans and host enormous economic activity. Development of energy installations (e.g. Offshore Wind Farms (OWFs), tidal turbines) in response to increased demand for renewable energy requires a careful analysis of potential impacts. Recent remote sensing observations have identified kilometre-scale impacts from OWFs. Existing modelling evaluating monopile impacts has fallen into two camps: small-scale models with individually resolved turbines looking at local effects; and large-scale analyses but with sub-grid scale turbine parameterisations. This work straddles both scales through a 3D unstructured grid model (FVCOM): wind turbine monopiles in the eastern Irish Sea are explicitly described in the grid whilst the overall grid domain covers the south-western UK shelf. Localised regions of decreased velocity extend up to 250 times the monopile diameter away from the monopile. Shelf-wide, the amplitude of the M2 tidal constituent increases by up to 7%. The turbines enhance localised vertical mixing which decreases seasonal stratification. The spatial extent of this extends well beyond the turbines into the surrounding seas. With significant expansion of OWFs on continental shelves, this work highlights the importance of how OWFs may impact coastal (e.g. increased flooding risk) and offshore (e.g. stratification and nutrient cycling) areas.

  5. Applying Hillslope Hydrology to Bridge between Ecosystem and Grid-Scale Processes within an Earth System Model

    Science.gov (United States)

    Subin, Z. M.; Sulman, B. N.; Malyshev, S.; Shevliakova, E.

    2013-12-01

    Soil moisture is a crucial control on surface energy fluxes, vegetation properties, and soil carbon cycling. Its interactions with ecosystem processes are highly nonlinear across a large range, as both drought stress and anoxia can impede vegetation and microbial growth. Earth System Models (ESMs) generally only represent an average soil-moisture state in grid cells at scales of 50-200 km, and as a result are not able to adequately represent the effects of subgrid heterogeneity in soil moisture, especially in regions with large wetland areas. We addressed this deficiency by developing the first ESM-coupled subgrid hillslope-hydrological model, TiHy (Tiled-hillslope Hydrology), embedded within the Geophysical Fluid Dynamics Laboratory (GFDL) land model. In each grid cell, one or more representative hillslope geometries are discretized into land model tiles along an upland-to-lowland gradient. These geometries represent ~1 km hillslope-scale hydrological features and allow for flexible representation of hillslope profile and plan shapes, in addition to variation of subsurface properties among or within hillslopes. Each tile (which may represent ~100 m along the hillslope) has its own surface fluxes, vegetation state, and vertically-resolved state variables for soil physics and biogeochemistry. Resolution of water state in deep layers (~200 m) down to bedrock allows for physical integration of groundwater transport with unsaturated overlying dynamics. Multiple tiles can also co-exist at the same vertical position along the hillslope, allowing the simulation of ecosystem heterogeneity due to disturbance. The hydrological model is coupled to the vertically-resolved Carbon, Organisms, Respiration, and Protection in the Soil Environment (CORPSE) model, which captures non-linearity resulting from interactions between vertically-heterogeneous soil carbon and water profiles. We present comparisons of simulated water table depth to observations. We examine sensitivities to

  6. Sub-grid-scale effects on short-wave instability in magnetized hall-MHD plasma

    International Nuclear Information System (INIS)

    Miura, H.; Nakajima, N.

    2010-11-01

    Aiming to clarify the effects of short-wave modes on the nonlinear evolution/saturation of the ballooning instability in the Large Helical Device, fully three-dimensional simulations of the single-fluid MHD and the Hall MHD equations are carried out. A moderate parallel heat conductivity plays an important role in both kinds of simulations. In the single-fluid MHD simulations, the parallel heat conduction effectively suppresses short-wave ballooning modes, but it turns out that the suppression is insufficient in comparison to an experimental result. In the Hall MHD simulations, the parallel heat conduction triggers a rapid growth of the parallel flow and enhances nonlinear couplings. A comparison between the single-fluid and Hall MHD simulations reveals that the Hall MHD model does not necessarily improve the saturated pressure profile, and that we may need a further extension of the model. We also find, by comparing two Hall MHD simulations with different numerical resolutions, that sub-grid scales of the Hall term should be modeled to mimic an inverse energy transfer in wave-number space. (author)

  7. A Grid Voltage Measurement Method for Wind Power Systems during Grid Fault Conditions

    OpenAIRE

    Yoo, Cheol-Hee; Chung, Il-Yop; Yoo, Hyun-Jae; Hong, Sung-Soo

    2014-01-01

    Grid codes in many countries require low-voltage ride-through (LVRT) capability to maintain power system stability and reliability during grid fault conditions. To meet the LVRT requirement, wind power systems must stay connected to the grid and also supply reactive currents to the grid to support the recovery from fault voltages. This paper presents a new fault detection method and inverter control scheme to improve the LVRT capability for full-scale permanent magnet synchronous generator (P...

  8. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales already observed in other works depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has additionally been investigated.
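    Amplitude modulation of the kind investigated here is commonly quantified by correlating the large-scale component of a velocity signal with the envelope of its small-scale residual. A synthetic 1-D sketch of that procedure (a moving-average scale decomposition stands in for the spectral filters used in practice):

```python
import math, random

def moving_average(x, w):
    return [sum(x[i:i + w]) / w for i in range(len(x) - w)]

def correlation(a, b):
    """Pearson correlation of two equal-rate series (truncated to equal length)."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((a[i] - ma) * (b[i] - mb) for i in range(n)) / n
    sa = math.sqrt(sum((v - ma) ** 2 for v in a) / n)
    sb = math.sqrt(sum((v - mb) ** 2 for v in b) / n)
    return cov / (sa * sb)

# Synthetic modulated signal: small-scale noise whose amplitude follows
# a slow sine (the "large scale").
rng = random.Random(0)
n = 5000
large = [math.sin(2 * math.pi * i / 1000) for i in range(n)]
signal = [large[i] + 0.3 * (1 + 0.8 * large[i]) * rng.gauss(0, 1) for i in range(n)]

w = 50
ls = moving_average(signal, w)                       # large-scale part
small = [signal[i] - ls[i] for i in range(len(ls))]  # small-scale residual
envelope = moving_average([abs(v) for v in small], w)
print("modulation coefficient:", correlation(ls, envelope))  # clearly positive
```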

  9. WISDOM-II: Screening against multiple targets implicated in malaria using computational grid infrastructures

    Directory of Open Access Journals (Sweden)

    Kenyon Colin

    2009-05-01

    Full Text Available Abstract Background Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Motivation Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well suited to embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and ended in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR, and on a new promising one, glutathione-S-transferase. Methods In silico drug design, especially vHTS, is a widely accepted technology for lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. Results On the computational side, a sustained infrastructure has been developed: docking at large scale, using different strategies in result analysis, storing the results on the fly into MySQL databases, and applying molecular dynamics refinement and MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising. Based on the modeling results, in vitro assays are underway for all the targets against which screening was performed.
Conclusion The current paper describes the rational drug discovery activity at large scale, especially molecular docking using FlexX software

  10. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations


  12. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for peta scale platforms and beyond

    International Nuclear Information System (INIS)

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-01-01

    Various strategies for efficiently implementing quantum Monte Carlo (QMC) simulations of large chemical systems are presented. These include: (i) an efficient algorithm for calculating the computationally expensive Slater matrices, a novel scheme based on the highly localized character of the atomic Gaussian basis functions (rather than the molecular orbitals, as usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) the substantial enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and are illustrated with numerical applications on small peptides of increasing size (158, 434, 1056, and 1731 electrons). Using 10,000-80,000 computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that a large part of this machine's peak performance can be achieved. Implementation of large-scale QMC simulations on future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)
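    The locality argument in point (i) can be sketched as follows: because atomic Gaussian basis functions decay rapidly, only matrix entries whose electron-to-center distance falls below a cutoff need be evaluated; the rest are negligibly small. This is an illustrative sketch, not the QMC-Chem implementation; the radial function `phi` and the cutoff are assumptions.

    ```python
    import numpy as np

    def slater_matrix_localized(electrons, centers, cutoff, phi):
        """Build a Slater-like matrix exploiting basis-function locality.

        An entry is computed only when the electron lies within `cutoff`
        of the basis-function center; all other entries are set to zero.

        electrons: (N, 3) electron positions; centers: (N, 3) basis
        centers; phi(d2): radial basis value given squared distance d2.
        """
        n = len(electrons)
        S = np.zeros((n, n))
        for i, r in enumerate(electrons):
            d2 = np.sum((centers - r) ** 2, axis=1)   # squared distances
            near = d2 < cutoff ** 2                   # locality mask
            S[i, near] = phi(d2[near])                # evaluate only nearby
        return S
    ```

    For well-separated centers the matrix becomes sparse, which is what makes the cost tractable for the 1000+ electron systems mentioned above.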

  13. Large-scale laboratory study of breaking wave hydrodynamics over a fixed bar

    Science.gov (United States)

    van der A, Dominic A.; van der Zanden, Joep; O'Donoghue, Tom; Hurther, David; Cáceres, Iván; McLelland, Stuart J.; Ribberink, Jan S.

    2017-04-01

    A large-scale wave flume experiment has been carried out involving a T = 4 s regular wave with H = 0.85 m wave height plunging over a fixed barred beach profile. Velocity profiles were measured at 12 locations along the breaker bar using LDA and ADV. A strong undertow is generated reaching magnitudes of 0.8 m/s on the shoreward side of the breaker bar. A circulation pattern occurs between the breaking area and the inner surf zone. Time-averaged turbulent kinetic energy (TKE) is largest in the breaking area on the shoreward side of the bar where the plunging jet penetrates the water column. At this location, and on the bar crest, TKE generated at the water surface in the breaking process reaches the bottom boundary layer. In the breaking area, TKE does not reduce to zero within a wave cycle which leads to a high level of "residual" turbulence and therefore lower temporal variation in TKE compared to previous studies of breaking waves on plane beach slopes. It is argued that this residual turbulence results from the breaker bar-trough geometry, which enables larger length scales and time scales of breaking-generated vortices and which enhances turbulence production within the water column compared to plane beaches. Transport of TKE is dominated by the undertow-related flux, whereas the wave-related and turbulent fluxes are approximately an order of magnitude smaller. Turbulence production and dissipation are largest in the breaker zone and of similar magnitude, but in the shoaling zone and inner surf zone production is negligible and dissipation dominates.
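    The time-averaged TKE discussed above is conventionally computed from velocity fluctuations about the time mean, k = 0.5(⟨u'²⟩ + ⟨v'²⟩ + ⟨w'²⟩). A minimal sketch for an ADV/LDA velocity time series follows (this is the textbook definition, not the authors' full processing chain, which also involves phase averaging):

    ```python
    import numpy as np

    def turbulent_kinetic_energy(u, v, w):
        """Time-averaged TKE per unit mass from three velocity
        component time series: k = 0.5 * (<u'^2> + <v'^2> + <w'^2>),
        where primes denote fluctuations about the time mean."""
        up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()
        return 0.5 * (np.mean(up**2) + np.mean(vp**2) + np.mean(wp**2))
    ```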

  14. Large-Scale Parallel Viscous Flow Computations using an Unstructured Multigrid Algorithm

    Science.gov (United States)

    Mavriplis, Dimitri J.

    1999-01-01

    The development and testing of a parallel unstructured agglomeration multigrid algorithm for steady-state aerodynamic flows is discussed. The agglomeration multigrid strategy uses a graph algorithm to construct the coarse multigrid levels from the given fine grid, similar to an algebraic multigrid approach, but operates directly on the non-linear system using the FAS (Full Approximation Scheme) approach. The scalability and convergence rate of the multigrid algorithm are examined on the SGI Origin 2000 and the Cray T3E. An argument is given which indicates that the asymptotic scalability of the multigrid algorithm should be similar to that of its underlying single grid smoothing scheme. For medium size problems involving several million grid points, near perfect scalability is obtained for the single grid algorithm, while only a slight drop-off in parallel efficiency is observed for the multigrid V- and W-cycles, using up to 128 processors on the SGI Origin 2000, and up to 512 processors on the Cray T3E. For a large problem using 25 million grid points, good scalability is observed for the multigrid algorithm using up to 1450 processors on a Cray T3E, even when the coarsest grid level contains fewer points than the total number of processors.
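    The graph-based coarsening step can be illustrated with a minimal greedy agglomeration (a generic sketch, not the paper's exact algorithm): each still-unassigned vertex seeds a coarse cell and absorbs its unassigned neighbors, so the coarse level has far fewer cells than the fine grid.

    ```python
    def agglomerate(adj):
        """Greedy agglomeration of a fine grid given as an adjacency
        list. Each unassigned vertex becomes the seed of a new coarse
        cell and absorbs its still-unassigned neighbors, mimicking one
        coarsening step of an agglomeration multigrid.

        Returns a dict mapping fine vertex -> coarse cell id.
        """
        coarse = {}
        cell = 0
        for v in range(len(adj)):
            if v in coarse:
                continue
            coarse[v] = cell          # seed a new coarse cell
            for nb in adj[v]:
                if nb not in coarse:  # absorb free neighbors
                    coarse[nb] = cell
            cell += 1
        return coarse
    ```

    Applying the function repeatedly to the induced coarse graph would build the multigrid hierarchy; in the FAS setting the nonlinear operator is then evaluated directly on each level.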

  15. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    Science.gov (United States)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping (LDDMM) problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle-Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber, and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for both the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed competitive performance for the robust regularizers, close to that of the baseline diffeomorphic registration methods.
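    For a flavor of the underlying Chambolle-Pock iteration, here is a minimal sketch for 1D total-variation denoising with a quadratic data term (a standard toy problem, not the LDDMM functional itself; step sizes and iteration count are illustrative):

    ```python
    import numpy as np

    def chambolle_pock_tv1d(f, lam, n_iter=200):
        """Chambolle-Pock primal-dual iterations for 1D TV denoising:
        min_x lam/2 ||x - f||^2 + ||Dx||_1, with D the forward
        difference operator. Steps satisfy tau*sigma*||D||^2 < 1
        (||D||^2 <= 4 in 1D)."""
        x = f.copy()
        x_bar = x.copy()
        y = np.zeros(len(f) - 1)      # one dual variable per difference
        tau = sigma = 0.45            # tau * sigma * 4 < 1
        for _ in range(n_iter):
            # dual ascent + projection onto the l-infinity unit ball
            y = np.clip(y + sigma * np.diff(x_bar), -1.0, 1.0)
            # primal descent: subtract D^T y, then apply the proximal
            # map of the quadratic data term
            x_old = x
            dty = np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))
            x = (x - tau * dty + tau * lam * f) / (1 + tau * lam)
            x_bar = 2 * x - x_old     # over-relaxation step
        return x
    ```

    The paper's method replaces the TV term with the robust regularizers listed above and adds diagonal preconditioning, but the ascend/descend/over-relax structure is the same.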

  16. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon that galaxies located in the same region have similar properties, such as star formation rate, color, and gas fraction. Conformity was first observed among galaxies within the same halos ("one-halo conformity"), which can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, have further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities even though they do not share common halos ("two-halo conformity" or "large-scale conformity"). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity: they have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and now happen to reside within a ~5 Mpc sphere. Second, galaxies in the strong tidal fields induced by large-scale structure also seem to give rise to the large-scale conformity, as the strong tides suppress star formation in these galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  17. Interharmonics from Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Sera, Dezso

    2017-01-01

    As the penetration level of grid-connected Photovoltaic (PV) systems increases, power quality is one of the major concerns for system operators, and the demands are becoming even stricter. The impact of interharmonics on the grid has been acknowledged in recent research considering a large-scale adoption of PV inverters. However, the origins of interharmonics remain unclear. Thus, this paper performs tests on a commercial PV inverter to explore interharmonic generation and, more importantly, investigates the mechanism of interharmonic emission. The investigation reveals that the perturbation ... of the solutions. Simulation results indicate that the constant-voltage MPPT method is the most suitable solution for mitigating the interharmonics introduced by MPPT operation, as it avoids perturbation of the PV voltage during operation.

  18. Development and application of a computer model for large-scale flame acceleration experiments

    International Nuclear Information System (INIS)

    Marx, K.D.

    1987-07-01

    A new computational model for large-scale premixed flames is developed and applied to the simulation of flame acceleration experiments. The primary objective is to circumvent the need to resolve turbulent flame fronts; this is imperative because of the relatively coarse computational grids that must be used in engineering calculations. The essence of the model is to artificially thicken the flame by increasing the appropriate diffusivities and decreasing the combustion rate, but to do so in such a way that the burn velocity varies with pressure, temperature, and turbulence intensity according to prespecified phenomenological characteristics. The model is particularly aimed at implementation in computer codes which simulate compressible flows. To this end, it is applied to the two-dimensional simulation of hydrogen-air flame acceleration experiments in which the flame speeds and gas flow velocities attain or exceed the speed of sound in the gas. It is shown that many of the features of the flame trajectories and pressure histories in the experiments are simulated quite well by the model. Using the comparison of experimental and computational results as a guide, some insight is developed into the processes which occur in such experiments. 34 refs., 25 figs., 4 tabs
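    The thickening idea can be made concrete with the classical laminar-flame scalings s_L ~ sqrt(D·ω) and δ ~ sqrt(D/ω): multiplying the diffusivity by a factor F while dividing the reaction rate by F leaves the burn velocity unchanged but thickens the flame by F, making the front resolvable on a coarse grid. This is the textbook scaling argument only, not the paper's full pressure/temperature/turbulence dependence; D, omega, and F below are illustrative numbers.

    ```python
    import math

    def thicken(D, omega, F):
        """Artificially thickened flame scaling: diffusivity D is
        multiplied by F and reaction rate omega divided by F.
        Returns the resulting laminar flame speed s_L ~ sqrt(D*omega),
        which is invariant, and flame thickness delta ~ sqrt(D/omega),
        which grows by the factor F."""
        D_t, omega_t = F * D, omega / F
        s_L = math.sqrt(D_t * omega_t)
        delta = math.sqrt(D_t / omega_t)
        return s_L, delta
    ```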

  19. Elementary dispersion analysis of some mimetic discretizations on triangular C-grids

    Energy Technology Data Exchange (ETDEWEB)

    Korn, P., E-mail: peter.korn@mpimet.mpg.de [Max Planck Institute for Meteorology, Hamburg (Germany); Danilov, S. [Alfred Wegener Institute for Polar and Marine Research, Bremerhaven (Germany); A.M. Obukhov Institute of Atmospheric Physics, Moscow (Russian Federation)

    2017-02-01

    Spurious modes supported by triangular C-grids limit their application for modeling large-scale atmospheric and oceanic flows. Their behavior can be modified within a mimetic approach that generalizes the scalar product underlying the triangular C-grid discretization. The mimetic approach provides a discrete continuity equation which operates on an averaged combination of normal edge velocities instead of normal edge velocities proper. An elementary analysis of the wave dispersion of the new discretization for Poincaré, Rossby and Kelvin waves shows that, although spurious Poincaré modes are preserved, their frequency tends to zero in the limit of small wavenumbers, which removes the divergence noise in this limit. However, the frequencies of spurious and physical modes become close on shorter scales indicating that spurious modes can be excited unless high-frequency short-scale motions are effectively filtered in numerical codes. We argue that filtering by viscous dissipation is more efficient in the mimetic approach than in the standard C-grid discretization. Lumping of mass matrices appearing with the velocity time derivative in the mimetic discretization only slightly reduces the accuracy of the wave dispersion and can be used in practice. Thus, the mimetic approach cures some difficulties of the traditional triangular C-grid discretization but may still need appropriately tuned viscosity to filter small scales and high frequencies in solutions of full primitive equations when these are excited by nonlinear dynamics.

  20. Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids

    Science.gov (United States)

    Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.

    2017-12-01

    Stabilized gradient-based methods have proved efficient for inverse problems. In these methods, driving the gradient close to zero minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of poor depth resolution in gradient-based gravity inversion methods, we find that imposing a depth weighting function on the conventional gradient can improve the depth resolution to some extent. However, the improvement depends on the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown in Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and its regularization term is not weakened as depth increases. In addition, a fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested the new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown in Figure 1(b)). Acknowledgements: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and
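    Depth weighting of the kind discussed above is commonly taken in the Li-Oldenburg form w(z) = (z + z0)^(-beta/2). The sketch below shows one common convention for applying it to a raw objective-function gradient; this is an illustrative assumption, not the authors' new weighted-model-vector scheme, and z0 and beta are tuning parameters.

    ```python
    import numpy as np

    def depth_weighting(z, z0=1.0, beta=2.0):
        """Li-Oldenburg-style depth weighting w(z) = (z + z0)^(-beta/2),
        which counteracts the natural decay of the gravity kernel so
        that inversion does not pile all density at the surface."""
        return (z + z0) ** (-beta / 2.0)

    def weighted_gradient(grad, z, z0=1.0, beta=2.0):
        """One convention for depth-weighting a raw gradient: scale by
        the inverse squared weights, boosting deep cells relative to
        shallow ones."""
        return grad / depth_weighting(z, z0, beta) ** 2
    ```

    With beta = 2 and z0 = 1, a cell at depth 3 gets its gradient boosted by a factor of 16 relative to a surface cell, which is the basic mechanism behind improved depth resolution.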