WorldWideScience

Sample records for electricity large scale

  1. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power system with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising with high wind power penetration. This paper presents a review of the electricity storage technologies relevant for large power systems. The paper also presents an estimation of the economic feasibility of electricity storage using the west Danish power market area as a case.

  2. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, expansion of transmission line capacity must be evaluated against methods that ensure optimal electric grid operation: the added capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject, and next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are where to add capacity, how much to add, and at which voltage level. Because of electric grid deregulation, transmission expansion is more complicated, as building new transmission lines is now open to investors whose main interest is to generate revenue. Adding new transmission capacity will help the system to relieve congestion, create...
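
    The congestion screening discussed above is often illustrated with a linearized (DC) power flow: bus injections determine line flows through the network susceptance matrix, and a line is flagged as congested when its flow exceeds its thermal limit. The sketch below shows that check on a made-up 3-bus network; it is a generic illustration, not the method developed in the dissertation.

```python
import numpy as np

# Hypothetical 3-bus example: lines as (from, to, reactance [p.u.], limit [MW]).
lines = [(0, 1, 0.1, 80.0), (1, 2, 0.1, 80.0), (0, 2, 0.2, 100.0)]
injections = np.array([150.0, -60.0, -90.0])   # net injection per bus [MW], sums to zero

n = 3
B = np.zeros((n, n))
for f, t, x, _ in lines:
    b = 1.0 / x
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b

# Bus 0 is the slack; solve the reduced system for the remaining voltage angles.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])

# Line flows (same MW units as the injections) and a simple congestion flag.
for f, t, x, limit in lines:
    flow = (theta[f] - theta[t]) / x
    status = "CONGESTED" if abs(flow) > limit else "ok"
    print(f"line {f}-{t}: flow {flow:6.1f} MW, limit {limit:5.1f} MW -> {status}")
```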

  3. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik

    2013-01-01

    ... with this imbalance and to reduce its high dependence on oil production. For this reason, it is interesting to analyse the extent to which transport electrification can further the renewable energy integration. This paper quantifies this issue in Inner Mongolia, where the share of wind power in the electricity supply was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power integration by 8%. The application of EVs benefits from saving both energy system cost and fuel cost. However, the negative consequences of decreasing energy system efficiency and increasing the CO2 emission should be noted when applying the hydrogen fuel cell vehicle (HFCV). The results also indicate...

  4. Bulgarian electricity market and the large-scale industrial customers

    International Nuclear Information System (INIS)

    Popov, P.; Kanev, K.; Dyankov, M.; Minkov, N.

    2003-01-01

    The paper focuses on a brief overview of the Bulgarian Electricity Market Design and steps toward its development, as well as on preliminary analyses of market opening and of the influence of large industrial customers on system and market operation. (author)

  5. Electricity prices, large-scale renewable integration, and policy implications

    International Nuclear Information System (INIS)

    Kyritsis, Evangelos; Andersson, Jonas; Serletis, Apostolos

    2017-01-01

    This paper investigates the effects of intermittent solar and wind power generation on electricity price formation in Germany. We use daily data from 2010 to 2015, a period with profound modifications in the German electricity market, the most notable being the rapid integration of photovoltaic and wind power sources, as well as the phasing out of nuclear energy. In the context of a GARCH-in-Mean model, we show that both solar and wind power Granger cause electricity prices, that solar power generation reduces the volatility of electricity prices by scaling down the use of peak-load power plants, and that wind power generation increases the volatility of electricity prices by challenging electricity market flexibility. - Highlights: • We model the impact of solar and wind power generation on day-ahead electricity prices. • We discuss the different nature of renewables in relation to market design. • We explore the impact of renewables on the distributional properties of electricity prices. • Solar and wind reduce electricity prices but affect price volatility in the opposite way. • Solar decreases the probability of electricity price spikes, while wind increases it.
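
    The GARCH-in-Mean specification referred to above lets the conditional variance enter the mean equation, so that price volatility itself can move the expected price. The sketch below estimates a plain GARCH(1,1)-in-Mean by maximum likelihood on simulated data; it only illustrates the model class and omits the solar and wind infeed regressors and other controls of the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate a toy GARCH(1,1)-in-Mean series:
#   y_t = mu + lam * sigma_t^2 + eps_t,   eps_t = sigma_t * z_t
#   sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2
T, mu, lam, omega, alpha, beta = 2000, 0.1, 0.3, 0.05, 0.10, 0.85
y = np.empty(T)
sig2, eps = omega / (1 - alpha - beta), 0.0
for t in range(T):
    sig2 = omega + alpha * eps**2 + beta * sig2
    eps = np.sqrt(sig2) * rng.standard_normal()
    y[t] = mu + lam * sig2 + eps

def neg_loglik(params, y):
    mu, lam, omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return 1e10                       # crude penalty outside the admissible region
    sig2, eps_prev, nll = np.var(y), 0.0, 0.0
    for obs in y:
        sig2 = omega + alpha * eps_prev**2 + beta * sig2
        eps_prev = obs - mu - lam * sig2
        nll += 0.5 * (np.log(2 * np.pi * sig2) + eps_prev**2 / sig2)
    return nll

res = minimize(neg_loglik, x0=[0.0, 0.0, 0.1, 0.05, 0.80], args=(y,),
               method="Nelder-Mead", options={"maxiter": 20000, "fatol": 1e-8})
print("estimates (mu, lambda, omega, alpha, beta):", np.round(res.x, 3))
```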

  6. Flat-Land Large-Scale Electricity Storage (FLES)

    Directory of Open Access Journals (Sweden)

    Schalij R.

    2012-10-01

    Growth of renewable sources requires a smarter electricity grid, integrating multiple solutions for large scale storage. Pumped storage is still the most valid option, but the capacity of existing facilities is not sufficient to accommodate future renewable resources. New locations for additional pumped storage capacity are scarce: mountainous areas are mostly remote and do not allow construction of large facilities for ecological reasons. In the Netherlands underground solutions were studied for many years. The use of (former) coal mines was rejected after scientific research. Further research showed that solid rock formations below the (unstable) coal layers can be harnessed to excavate the lower water reservoir for pumped storage, making an innovative underground solution possible. A complete plan was developed, with a capacity of 1400 MW (8 GWh daily output) and a head of 1400 m. It is technically and economically feasible. Compared to conventional pumped storage it has significantly less impact on the environment, and less vulnerable locations are eligible. The reservoir on the surface (only one instead of two) is relatively small. It also offers a solution for other European countries, and the Dutch studies provide a valuable basis for new locations.
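
    The headline figures quoted above (1400 MW, 8 GWh daily output, 1400 m head) can be sanity-checked with the hydrostatic energy relation E = ρ·g·h·V·η. The 85% generating efficiency below is an assumed illustrative value, not a number from the article.

```python
# Back-of-the-envelope check of the FLES figures (1400 MW, 8 GWh, 1400 m head).
rho, g = 1000.0, 9.81           # water density [kg/m3], gravitational acceleration [m/s2]
head, eta = 1400.0, 0.85        # head [m]; 85% generating efficiency is an assumption

energy_J = 8e9 * 3600           # 8 GWh expressed in joules
volume_m3 = energy_J / (rho * g * head * eta)          # active water volume needed
flow_m3s = 1400e6 / (rho * g * head * eta)             # flow rate at 1400 MW output

print(f"required active water volume ~ {volume_m3 / 1e6:.1f} million m3")
print(f"flow at full output          ~ {flow_m3s:.0f} m3/s")
```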

  7. Entirely renewable energy-based electricity supply system (small scale and large scale)

    Energy Technology Data Exchange (ETDEWEB)

    Zahedi, A. [Monash University, Caulfield (Australia). Division of Electrical and Computer Systems Engineering

    1996-09-01

    Our future energy needs will be supplied by a combination of many different sources, ranging from small wind turbines providing power for a single house to central power stations that feed power into the national grid on a very large scale. Computer control systems will integrate the performance of all these systems to make sure that as much power as possible comes from environmentally friendlier sources. As alternative sources become more widely available, small scale systems meeting local needs may start to replace current large scale central power stations. The author is investigating the feasibility of an entirely renewable energy-based electricity supply system. The developed system finds many applications: it can be used as a small scale power system for remote area power supply (wind/battery or solar/battery), as well as a large scale system for interconnection with the national grid. (Author)

  8. Electricity Prices, Large-Scale Renewable Integration, and Policy Implications

    OpenAIRE

    Kyritsis, Evangelos; Andersson, Jonas; Serletis, Apostolos

    2016-01-01

    This paper investigates the effects of intermittent solar and wind power generation on electricity price formation in Germany. We use daily data from 2010 to 2015, a period with profound modifications in the German electricity market, the most notable being the rapid integration of photovoltaic and wind power sources, as well as the phasing out of nuclear energy. In the context of a GARCH-in-Mean model, we show that both solar and wind power Granger cause electricity prices, that solar power ...

  9. Analysis for Large Scale Integration of Electric Vehicles into Power Grids

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Wang, Xiaoru

    2011-01-01

    Electric Vehicles (EVs) provide a significant opportunity for reducing the consumption of fossil energies and the emission of carbon dioxide. With more and more electric vehicles integrated in the power systems, it becomes important to study the effects of EV integration on the power systems, especially the low and middle voltage level networks. In the paper, the basic structure and characteristics of electric vehicles are introduced. The possible impacts of large scale integration of electric vehicles on the power systems, especially the advantages for the integration of renewable energies, are discussed. Finally, the research projects related to the large scale integration of electric vehicles into the power systems are introduced; these will provide a reference for large scale integration of electric vehicles into power grids.

  11. Large Scale Deployment of Electric Vehicles (EVs) and Heat Pumps (HPs) in the Nordic Region

    DEFF Research Database (Denmark)

    Liu, Zhaoxi; Wu, Qiuwei; Petersen, Pauli Fríðheim

    This report describes the study results of large scale deployment of electric vehicles (EVs) and heat pumps (HPs) in the Nordic countries of Denmark, Norway, Sweden and Finland, focusing on the demand profiles with high penetration of EVs and HPs in 2050.

  12. Electricity network limitations on large-scale deployment of wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Fairbairn, R.J.

    1999-07-01

    This report sought to identify limitations on large scale deployment of wind energy in the UK. A description of the existing electricity supply system in England, Scotland and Wales is given, and operational aspects of the integrated electricity networks, licence conditions, types of wind turbine generators, and the scope for deployment of wind energy in the UK are addressed. Technical limitations and the technical criteria stipulated by the Distribution and Grid Codes, the effects of system losses, and commercial issues are reviewed. Potential solutions to technical limitations are proposed, and recommendations are outlined.

  13. Survey and analysis of selected jointly owned large-scale electric utility storage projects

    Energy Technology Data Exchange (ETDEWEB)

    1982-05-01

    The objective of this study was to examine and document the issues surrounding the curtailment in commercialization of large-scale electric storage projects. It was sensed that if these issues could be uncovered, then efforts might be directed toward clearing away these barriers and allowing these technologies to penetrate the market to their maximum potential. Joint ownership of these projects was seen as a possible solution to overcoming the major barriers, particularly economic barriers, of commercialization. Therefore, discussions with partners involved in four pumped storage projects took place to identify the difficulties and advantages of joint-ownership agreements. The four plants surveyed included Yards Creek (Public Service Electric and Gas and Jersey Central Power and Light); Seneca (Pennsylvania Electric and Cleveland Electric Illuminating Company); Ludington (Consumers Power and Detroit Edison); and Bath County (Virginia Electric Power Company and Allegheny Power System, Inc.). Also investigated were several pumped storage projects which were never completed. These included Blue Ridge (American Electric Power); Cornwall (Consolidated Edison); Davis (Allegheny Power System, Inc.); and Kittatinny Mountain (General Public Utilities). Institutional, regulatory, technical, environmental, economic, and special issues at each project were investigated, and the conclusions relative to each issue are presented. The major barriers preventing the growth of energy storage are the high cost of these systems in times of extremely high cost of capital, diminishing load growth, and regulatory influences which will not allow the building of large-scale storage systems due to environmental objections or other reasons. However, the future for energy storage looks viable despite difficult economic times for the utility industry. Joint ownership can ease some of the economic hardships for utilities which demonstrate a need for energy storage.

  14. Review of DC System Technologies for Large Scale Integration of Wind Energy Systems with Electricity Grids

    Directory of Open Access Journals (Sweden)

    Sheng Jie Shao

    2010-06-01

    The ever increasing development and availability of power electronic systems is the underpinning technology that enables large scale integration of wind generation plants with the electricity grid. As the size and power capacity of wind turbines continue to increase, so does the need to place these significantly large structures at off-shore locations. DC grids and associated power transmission technologies provide opportunities for cost reduction and electricity grid impact minimization, as the bulk power is concentrated at a single point of entry. As a result, planning, optimization and impact can be studied and carefully controlled, minimizing the risk of the investment as well as power system stability issues. This paper discusses the key technologies associated with DC grids for offshore wind farm applications.

  15. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system.

    Science.gov (United States)

    Jensen, Tue V; Pinson, Pierre

    2017-11-28

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  17. PowerGrid - A Computation Engine for Large-Scale Electric Networks

    Energy Technology Data Exchange (ETDEWEB)

    Chika Nwankpa

    2011-01-31

    This Final Report discusses work on an approach for analog emulation of large scale power systems using Analog Behavioral Models (ABMs) and analog devices in the PSpice design environment. ABMs are models based on sets of mathematical equations or transfer functions describing the behavior of a circuit element or an analog building block. The ABM concept provides an efficient strategy for feasibility analysis, quick insight into a top-down design methodology for large systems, and model verification prior to full structural design and implementation. Analog emulation in this report uses an electric circuit equivalent of mathematical equations and scaled relationships that describe the states and behavior of a real power system to create its solution trajectory. The speed of analog solutions is as quick as the response of the circuit itself. Emulation therefore is the representation of desired physical characteristics of a real life object using an electric circuit equivalent. The circuit equivalent has within it the model of a real system as well as the method of solution. This report presents a methodology for the core computation through development of ABMs for generators, transmission lines and loads. Results of ABMs for the case of 3, 6, and 14 bus power systems are presented and compared with industrial grade numerical simulators for validation.

  18. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    DEFF Research Database (Denmark)

    Jensen, Tue Vissing; Pinson, Pierre

    2017-01-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door...

  19. Study on the structure and level of electricity prices for Northwest-European large-scale consumers

    International Nuclear Information System (INIS)

    2006-06-01

    The aim of the study on the title subject is to provide an overview of the structure and development of electricity prices for large-scale consumers in Northwest Europe (Netherlands, Germany, Belgium and France) and of current regulations for large-scale consumers in Europe.

  20. Large-scale offshore wind energy. Cost analysis and integration in the Dutch electricity market

    International Nuclear Information System (INIS)

    De Noord, M.

    1999-02-01

    The results of an analysis of the construction and integration costs of large-scale offshore wind energy (OWE) farms in 2010 are presented. The integration of these farms (1 and 5 GW) into the Dutch electricity distribution system has been considered against the background of a liberalised electricity market. A first step is taken towards determining the costs involved in solving integration problems. Three different types of foundations are examined: the mono-pile, the jacket, and a new type of foundation, the concrete caisson pile; all are single-turbine-single-support structures. For real offshore applications (>10 km offshore, at sea depths >20 m), the concrete caisson pile is regarded as the most suitable. The price/power ratios of wind turbines are analysed. It is assumed that in 2010 turbines in the power range of 3-5 MW are available; the main calculations have been conducted for a 3 MW turbine. The main choice in electrical infrastructure is between AC and DC. Calculations show that at distances of 30 km offshore and more, the use of HVDC will result in higher initial costs but lower operating costs. The share of operation and maintenance (O&M) costs in the kWh cost price is approximately 3.3%. To be able to compare the two farms, a base case is derived with a construction time of 10 years for both. The energy yield is calculated for an offshore wind regime with an annual mean wind speed of 9.0 m/s. Per 3 MW turbine this results in an annual energy production of approximately 12 GWh. The total farm efficiency amounts to 82%, resulting in a total farm capacity factor of 38%. With a required internal rate of return of 15%, the kWh cost price amounts to 0.24 DFl and 0.21 DFl for the 1 GW and 5 GW farms respectively in the base case. The required internal rate of return has a large effect on the kWh cost price, followed by the costs of subsystems. O&M costs have little effect on the cost price. Parameter studies show that a small cost reduction of 5% is possible when...
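
    The yield figures above are easy to cross-check: 12 GWh per year from a 3 MW turbine corresponds to a gross capacity factor of about 46%, and multiplying by the 82% farm efficiency gives the quoted farm capacity factor of roughly 38%. The sketch below repeats that arithmetic and adds a generic annuity-based kWh cost; the specific investment figure is a placeholder assumption, since the report's own 1999 cost inputs are not restated in the abstract.

```python
# Capacity-factor cross-check and a generic annuity-based kWh cost.
HOURS_PER_YEAR = 8760
p_turbine_mw = 3.0
aep_turbine_mwh = 12_000        # ~12 GWh per 3 MW turbine (from the abstract)
farm_efficiency = 0.82          # total farm efficiency (from the abstract)

cf_gross = aep_turbine_mwh / (p_turbine_mw * HOURS_PER_YEAR)
cf_farm = cf_gross * farm_efficiency
print(f"gross capacity factor ~ {cf_gross:.0%}, farm capacity factor ~ {cf_farm:.0%}")

# Generic levelized cost: annualized capex per kWh, grossed up for the ~3.3% O&M share.
ASSUMED_INVESTMENT = 2000.0     # specific investment per kW -- placeholder assumption
irr, lifetime = 0.15, 20        # 15% required return (abstract); 20-year life (assumed)
annuity = irr / (1 - (1 + irr) ** -lifetime)
kwh_per_kw = cf_farm * HOURS_PER_YEAR
cost_per_kwh = ASSUMED_INVESTMENT * annuity / kwh_per_kw / (1 - 0.033)
print(f"indicative cost ~ {cost_per_kwh:.3f} per kWh with the assumed investment")
```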

  1. Electricity Generation and Energy Cost Estimation of Large-Scale Wind Turbines in Jarandagh, Iran

    Directory of Open Access Journals (Sweden)

    Kasra Mohammadi

    2014-01-01

    Currently, wind energy utilization is growing continuously, so that it is regarded as a strong contender to conventional fossil fuels. This study aimed at evaluating the feasibility of electricity generation using wind energy in Jarandagh, situated in Qazvin Province in the north-west of Iran. The potential of wind energy in Jarandagh was investigated by analyzing the wind speed data measured between 2008 and 2009 at 40 m height. The electricity production and economic evaluation of four large-scale wind turbine models for operation at 70 m height were examined. The results showed that Jarandagh enjoys excellent potential for wind energy exploitation in 8 months of the year. The monthly wind power at 70 m height was in the range of 450.28–1661.62 W/m2, and the annual wind power was 754.40 W/m2. The highest capacity factor was obtained using the Suzlon S66/1.25 MW turbine model, while, in terms of electricity generation, the Repower MM82/2.05 MW model showed the best performance with a total annual energy output of 5705 MWh. The energy cost estimation results convincingly demonstrated that investing in wind farm construction using all nominated turbines is economically feasible and, among all turbines, the Suzlon S66/1.25 MW model with an energy cost of 0.0357 $/kWh is the best option.
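
    The wind power densities quoted above follow from the standard relation P/A = ½ρv³ averaged over the site's wind speed distribution. The sketch below shows the single-speed version of that formula and a rough annual-energy estimate for a 2.05 MW machine; the mean wind speed and capacity factor are assumed illustrative values, not the Weibull statistics or power curves used in the paper.

```python
# Illustrative wind-resource arithmetic (not the paper's Weibull-based calculation).
rho = 1.225                    # air density [kg/m3]
v_mean = 8.5                   # assumed mean wind speed at hub height [m/s]

# Power density at a single speed; averaging over the full speed distribution,
# as done in the paper, gives larger values because of the cubic dependence.
p_density = 0.5 * rho * v_mean**3
print(f"power density at {v_mean} m/s ~ {p_density:.0f} W/m2")

# Rough annual energy for a 2.05 MW turbine at an assumed 30% capacity factor.
rated_mw, capacity_factor = 2.05, 0.30
aep_mwh = rated_mw * capacity_factor * 8760
print(f"annual energy ~ {aep_mwh:.0f} MWh (the paper reports 5705 MWh for this model)")
```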

  2. Monte Carlo techniques to analyse the electrical mismatch losses in large-scale photovoltaic generators

    Energy Technology Data Exchange (ETDEWEB)

    Iannone, F.; Sarno, A. [ENEA, Portici (Italy). Research Center; Noviello, G. [ENEA, Manfredonia (Italy). Mt. Aquilone Test-Side

    1998-02-01

    In large-scale photovoltaic generators, the arrangement of modules with different electrical characteristics can involve a considerable mismatch between the individual components, resulting in a power loss: the actual power is less than the sum of the maximum output powers of the individual PV modules operating at the same irradiance and temperature conditions. To reduce the mismatch losses and to calculate them under operating conditions, a statistical approach based on Monte Carlo simulation techniques has been developed and validated. The simulation model shows that it is possible to meet the required mismatch level with a random arrangement, starting from a module population characterized, in terms of short-circuit current I_SC and open-circuit voltage V_OC, by a probability density function with an imposed variance. The method has been successfully applied to a 100 kWp standard unit photovoltaic generator; the computational results show good agreement with the experimental data. (author)
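
    The statistical idea can be illustrated with a heavily simplified Monte Carlo experiment: draw module currents from a distribution with a given spread, group them randomly into series strings, and compare the array output (limited by the weakest module in each string) with the sum of the individual module maxima. The toy model below ignores the full I-V curve interaction that the paper simulates, so its loss figure is only indicative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Monte Carlo estimate of series-mismatch losses in a PV array.
# Simplification: each string is limited by its weakest module's current, which
# overstates the loss compared with a full I-V curve simulation.
n_modules, n_per_string, n_trials = 400, 20, 1000
i_mp_mean, i_mp_std = 5.0, 0.10     # maximum-power current [A], assumed 2% spread
v_mp = 35.0                         # maximum-power voltage per module [V]

losses = []
for _ in range(n_trials):
    i_mp = rng.normal(i_mp_mean, i_mp_std, n_modules)
    ideal = v_mp * i_mp.sum()                                   # sum of module maxima
    strings = rng.permutation(i_mp).reshape(-1, n_per_string)   # random arrangement
    actual = v_mp * n_per_string * strings.min(axis=1).sum()    # weakest-module limit
    losses.append(1.0 - actual / ideal)

print(f"mean mismatch loss ~ {np.mean(losses):.2%} (std {np.std(losses):.2%})")
```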

  3. Electric Vehicle Based Battery Storages for Large Scale Wind Power Integration in Denmark

    DEFF Research Database (Denmark)

    Pillai, Jayakrishnan Radhakrishna

    ... In Denmark, there are many hours of surplus wind power production every year. This could be consumed locally through demand side management of electric vehicles by controlled charging of their batteries. Also, the EV batteries could discharge the stored electricity to the grid on demand, which ... is improving at a rapid pace, and battery costs are also falling, which could enable electric cars to become competitive in the market. Electric vehicles could also benefit the electricity sector by supporting more renewable energy, which is also one of the most important driving forces in their promotion ... the clean wind energy, and the latter could be expensive and limited as the neighbouring countries are also installing more renewable energy across their borders. One of the alternative solutions lies with local distributed storages, which could be provided by the flexible, efficient and quick-start...

  4. China's large-scale power shortages of 2004 and 2011 after the electricity market reforms of 2002: Explanations and differences

    International Nuclear Information System (INIS)

    Ming, Zeng; Song, Xue; Lingyun, Li; Yuejin, Wang; Yang, Wei; Ying, Li

    2013-01-01

    Since the electricity market reforms of 2002, two large-scale power shortages, one occurring in 2004 and one in 2011, exerted a tremendous impact on the economic development of China and also gave rise to a fierce discussion regarding electricity system reforms. In this paper, the background and the scale of the impact of the two power shortages are first described. Second, the reasons for these two large-scale power shortages are analyzed from the perspectives of power generation, power consumption, and the coordination of power sources and grid network construction investments. Characteristics of the two shortages are then summarized by comparatively analyzing their course and the formation of the reasons behind them. Finally, some effective measures that take into account the current status of electricity market reforms in China are suggested. This paper concludes that to eliminate power shortages in China, both supply and demand should be considered, and these considerations should be accompanied by supervisory policies and incentive mechanisms. - Highlights: • Reasons for these two large-scale power shortages are analyzed. • Characteristics of these two large-scale power shortages are summarized. • Some effective measures to eliminate power shortages are suggested

  5. Compressed Air Energy Storage – An Option for Medium to Large Scale Electrical-energy Storage

    OpenAIRE

    Budt, Marcus; Wolf, Daniel; Span, Roland; Yan, Jinyue

    2016-01-01

    This contribution presents the theoretical background of compressed air energy storage, examples for large scale application of this technology, chances and obstacles for its future development, and areas of research aiming at the development of commercially viable plants in the medium to large scale range.

  6. Electrical efficiency and renewable energy - Economical alternatives to large-scale power generation

    International Nuclear Information System (INIS)

    Oettli, B.; Hammer, S.; Moret, F.; Iten, R.; Nordmann, T.

    2010-05-01

    This final report for WWF Switzerland, Greenpeace Switzerland, the Swiss Energy Foundation SES, Pro Natura and the Swiss Cantons of Basel City and Geneva takes a look at the energy-relevant effects of the proposals made by Swiss electricity utilities for large-scale power generation. These proposals are compared with a strategy based on investments in energy efficiency and the use of renewable sources of energy. The environmental effects and risks of both scenarios are discussed, as are the investments required and the associated effects on the Swiss national economy. For the efficiency and renewables scenario, two implementation variants are discussed: domestic investment and production, and foreign production and/or imports. The methods used in the study are introduced and discussed, as are investment and cost considerations, earnings and effects on employment. The report is completed with an extensive appendix which, amongst other things, includes potential reviews, cost estimates and a discussion of 'smart grids'.

  7. Biomass Energy for Transport and Electricity: Large scale utilization under low CO2 concentration scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Luckow, Patrick; Wise, Marshall A.; Dooley, James J.; Kim, Son H.

    2010-01-25

    This paper examines the potential role of large scale, dedicated commercial biomass energy systems under global climate policies designed to stabilize atmospheric concentrations of CO2 at 400ppm and 450ppm. We use an integrated assessment model of energy and agriculture systems to show that, given a climate policy in which terrestrial carbon is appropriately valued equally with carbon emitted from the energy system, biomass energy has the potential to be a major component of achieving these low concentration targets. The costs of processing and transporting biomass energy at much larger scales than current experience are also incorporated into the modeling. From the scenario results, 120-160 EJ/year of biomass energy is produced by midcentury and 200-250 EJ/year by the end of this century. In the first half of the century, much of this biomass is from agricultural and forest residues, but after 2050 dedicated cellulosic biomass crops become the dominant source. A key finding of this paper is the role that carbon dioxide capture and storage (CCS) technologies coupled with commercial biomass energy can play in meeting stringent emissions targets. Despite the higher technology costs of CCS, the resulting negative emissions used in combination with biomass are a very important tool in controlling the cost of meeting a target, offsetting the venting of CO2 from sectors of the energy system that may be more expensive to mitigate, such as oil use in transportation. The paper also discusses the role of cellulosic ethanol and Fischer-Tropsch biomass derived transportation fuels and shows that both technologies are important contributors to liquid fuels production, with unique costs and emissions characteristics. Through application of the GCAM integrated assessment model, it becomes clear that, given CCS availability, bioenergy will be used both in electricity and transportation.

  8. Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft

    Science.gov (United States)

    Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga

    2009-01-01

    This CD contains files that support the talk (see CASI ID 20100021404). There are 24 models that relate to the ADAPT system and 1 Excel worksheet. In the paper an investigation into the use of Bayesian networks to construct large-scale diagnostic systems is described. The high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems are described in the talk. The data in the CD are the models of the 24 different power systems.

  9. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr

    2015-01-01

    This study proposes a method to improve performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Based on assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled via applying a compressive sensing (CS) algorithm to the rough blurred estimation previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution to the CS as its initial guess. The results show the great potential of the proposed CS-based sparse recovery in improving accuracy of sparse solution to the large-size 3D EIT. (paper)
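
    The three-stage pipeline described above (a fast quadratic/Tikhonov reconstruction, sampling of finite elements under an assumed sparsity level, then a sparse solve restricted to the sampled elements) can be sketched on a generic linear inverse problem. In the sketch the element-sampling step is a simple largest-coefficient selection and the final step is a restricted least-squares fit, which stand in for the specific compressive sensing and sparse solvers used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic linear inverse problem y = J x + noise, standing in for the linearized
# EIT sensitivity problem; x is a sparse conductivity change over finite elements.
m, n, k = 200, 1000, 10                  # measurements, elements, assumed sparsity
J = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = J @ x_true + 0.01 * rng.standard_normal(m)

# Stage 1: fast Tikhonov (quadratic) reconstruction with a loose regularization.
lam = 0.1
x_rough = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ y)

# Stage 2: sample finite elements -- here simply the 3k largest rough coefficients,
# a stand-in for the compressive-sensing selection step of the paper.
support = np.argsort(np.abs(x_rough))[-3 * k:]

# Stage 3: solve only on the sampled elements, initialized from the rough estimate.
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(J[:, support], y, rcond=None)

for name, x in [("Tikhonov only", x_rough), ("two-stage    ", x_hat)]:
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"{name} relative error: {err:.3f}")
```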

  10. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography.

    Science.gov (United States)

    Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut

    2015-01-01

    This study proposes a method to improve performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Based on assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled via applying a compressive sensing (CS) algorithm to the rough blurred estimation previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution to the CS as its initial guess. The results show the great potential of the proposed CS-based sparse recovery in improving accuracy of sparse solution to the large-size 3D EIT.

  11. On Electrical Design and Technical Performance Requirements for Large Scale Wind Farms

    DEFF Research Database (Denmark)

    Gordon, Mark; Keerthipala, W.; Fernando, A.

    2009-01-01

    This paper presents and discusses technical performance requirements for the connection of large scale wind turbine generating systems into HV transmission networks. Requirements are presented for the purpose of achieving performance-enhanced operation, reliability, and assessment of the power plant operating limits for ensuring power system security at the high voltage point of connection. The experiences presented here refer mainly to a few of the selected technical requirements and issues encountered during the process of wind farm connections into the Eastern Australian power system. In particular...

  12. Solving large scale unit dilemma in electricity system by applying commutative law

    Science.gov (United States)

    Legino, Supriadi; Arianto, Rakhmat

    2018-03-01

    The conventional system, pooling resources from large centralized power plants interconnected as a network, provides many advantages compared to isolated systems, including optimized efficiency and reliability. However, such large plants need huge capital, and further problems have emerged that hinder the construction of big power plants as well as their associated transmission lines. By applying the commutative law of mathematics, ab = ba for all a, b ∈ R, the problems associated with the conventional system as depicted above can be reduced. The idea of having many power plants of small unit size, namely “Listrik Kerakyatan,” abbreviated as LK, provides both social and environmental benefits that can be capitalized under proper assumptions. This study compares the cost and benefit of LK to those of the conventional system, using a simulation method to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyze whether the LK system, as an eco-friendly form of distributed generation, can be applied to solve various problems associated with a large scale conventional system. The simulation results show that LK provides more value if its plants operate for less than 11 hours as peaker or load-follower power plants to improve the load curve balance of the power system. The results also indicate that the investment cost of the LK plants should be optimized in order to minimize the overall plant investment cost. This study indicates that the benefits of the economies of scale principle do not apply under every condition, particularly if the portion of intangible costs and benefits is relatively high.
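
    The kind of trade-off behind the operating-hours finding can be illustrated with a screening-curve comparison: a capital-light plant with higher variable cost beats a capital-heavy plant only up to a certain number of operating hours per year. All cost figures in the sketch below are invented placeholders used to show the calculation, not values from the study.

```python
# Screening-curve sketch: up to how many operating hours per year is a small,
# capital-light unit cheaper than a large capital-heavy plant?
# All numbers are invented placeholders, not values from the study.
def annualized_fixed(capex_per_kw, rate=0.10, life=20):
    """Annualize an investment with a capital recovery factor."""
    return capex_per_kw * rate / (1 - (1 + rate) ** -life)

small_unit = {"capex": 600.0, "var_cost": 0.12}    # $/kW and $/kWh (assumed)
large_plant = {"capex": 2500.0, "var_cost": 0.04}  # $/kW and $/kWh (assumed)

# Annual cost per kW as a function of utilization h: C(h) = fixed + var_cost * h.
fa_small = annualized_fixed(small_unit["capex"])
fa_large = annualized_fixed(large_plant["capex"])
h_break = (fa_large - fa_small) / (small_unit["var_cost"] - large_plant["var_cost"])
print(f"small unit is cheaper below ~{h_break:.0f} h/year (~{h_break / 365:.1f} h/day)")
```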

  13. Demand Response and Economic Dispatch of Power Systems Considering Large-Scale Plug-in Hybrid Electric Vehicles/Electric Vehicles (PHEVs/EVs): A Review

    OpenAIRE

    Wei Gu; Haojun Yu; Wei Liu; Junpeng Zhu; Xiaohui Xu

    2013-01-01

    Increasing concerns about global environmental issues have led to the urgent development of green transportation. The enthusiasm of governments should encourage the prosperity of the plug-in hybrid electric vehicles/electric vehicles (PHEVs/EVs) industry in the near future. PHEVs/EVs are not only an alternative to gasoline but are also burgeoning units for power systems. The impact of large-scale PHEVs/EVs on power systems is of profound significance. This paper discusses how to use PHEVs/EVs...

  14. The resource curse: Analysis of the applicability to the large-scale export of electricity from renewable resources

    International Nuclear Information System (INIS)

    Eisgruber, Lasse

    2013-01-01

    The “resource curse” has been analyzed extensively in the context of non-renewable resources such as oil and gas. More recently commentators have expressed concerns that also renewable electricity exports can have adverse economic impacts on exporting countries. My paper analyzes to what extent the resource curse applies in the case of large-scale renewable electricity exports. I develop a “comprehensive model” that integrates previous works and provides a consolidated view of how non-renewable resource abundance impacts economic growth. Deploying this model I analyze through case studies on Laos, Mongolia, and the MENA region to what extent exporters of renewable electricity run into the danger of the resource curse. I find that renewable electricity exports avoid some disadvantages of non-renewable resource exports including (i) shocks after resource depletion; (ii) macroeconomic fluctuations; and (iii) competition for a fixed amount of resources. Nevertheless, renewable electricity exports bear some of the same risks as conventional resource exports including (i) crowding-out of the manufacturing sector; (ii) incentives for corruption; and (iii) reduced government accountability. I conclude with recommendations for managing such risks. - Highlights: ► Study analyzes whether the resource curse applies to renewable electricity export. ► I develop a “comprehensive model of the resource curse” and use cases for the analysis. ► Renewable electricity export avoids some disadvantages compared to other resources. ► Renewable electricity bears some of the same risks as conventional resources. ► Study concludes with recommendations for managing such risks

  15. Large-scale integration of renewable energy into international electricity markets

    DEFF Research Database (Denmark)

    Lund, Henrik

    2004-01-01

    The paper presents the ability of different energy systems and regulation strategies to integrate renewable energy sources (RES) into the electricity supply system. The fluctuating electricity production from renewable energy must interact with the rest of the production units in order to make it possible for the system to secure a balance between supply and demand. At the same time most European electricity systems are in the process of being transformed into competitive electricity markets. Already today, the annual share of wind power in the western part of Denmark is nearly 20 percent, which has led to excess electricity production and thus low prices on the Nord Pool electricity market. This paper describes how such problems can be avoided by the introduction of flexible energy systems, including changes in the regulation of power plants and investments in heat pumps and heat storage...

  16. Lightweight electric-powered vehicles. Which financial incentives after the large-scale field tests at Mendrisio?

    International Nuclear Information System (INIS)

    Keller, M.; Frick, R.; Hammer, S.

    1999-08-01

    How should lightweight electric-powered vehicles be promoted after the large-scale fleet test being conducted at Mendrisio (southern Switzerland) is completed in 2001, and are there reasons to put question marks behind the current approach? The demand for electric vehicles, particularly in the automobile category, has remained at a persistently low level. As it proved, any appreciable improvement of this situation is almost impossible, even with substantial financial incentives. However, the unsatisfactory sales figures have little to do with the nature of the fleet test itself or with the specific conditions at Mendrisio. The problem is rather of a structural nature. For (battery-operated) electric cars the main problem at present is the lack of an expanding market which could become self-supporting with only a few additional incentives. Various strategies have been evaluated. Two alternatives were considered in particular: a strategy to promote explicitly electric vehicles ('EL-strategy'), and a strategy to promote efficient road vehicles in general which would have to meet specific energy and environmental-efficiency criteria ('EF-strategy'). The EL-strategies make the following dilemma clear: if the aim is to raise the share of these vehicles up to 5% of all cars on the road (or even 8%) in a mid-term prospect, then substantial interventions in the relevant vehicle markets would be required, either with penalties for conventional cars, or a large-scale funding scheme, or interventions at the supply level. The study suggests a differentiated strategy with two components: (i) 'institutionalised' promotion with the aim of a substantial increase of the share of 'efficient' vehicles (independently of the propulsion technology), and (ii) the continuation of pilot and demonstration projects for the promotion of different types of innovative technologies. (author)

  17. Control and Protection in Low Voltage Grid with Large Scale Renewable Electricity Generation

    DEFF Research Database (Denmark)

    Mustafa, Ghullam

    ... of renewable energy based DGs are reduced CO2 emissions, reduced operational cost as almost no fuel is used for their operation, and lower transmission and distribution losses as these units are normally built close to the load centers. This has also resulted in some operational challenges due to the unpredictable nature of such power generation sources. Some of the operational challenges include voltage variations due to power fluctuations coming from the DG units. On the other hand, it has also opened up some opportunities. One of the opportunities is islanded operation of the distribution system with DG unit(s). Islanding is a situation where an electrical system becomes electrically isolated from the rest of the power network and yet continues to be energized by the DG units connected to it. With the increased penetration of DG units, islanded operation of the distribution network is used to improve the reliability...

  18. Correction: Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4

    DEFF Research Database (Denmark)

    Jensen, Søren Højgaard; Graves, Christopher R.; Mogensen, Mogens Bjerg

    2017-01-01

    Correction for ‘Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4’ by S. H. Jensen et al., Energy Environ. Sci., 2015, 8, 2471–2479.

  19. Assessing Impact of Large-Scale Distributed Residential HVAC Control Optimization on Electricity Grid Operation and Renewable Energy Integration

    Science.gov (United States)

    Corbin, Charles D.

    Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite a growing interest in residential demand management. This work explores the potential for residential buildings to shape electric demand at the distribution feeder level in order to reduce peak demand, reduce system ramping, and increase load factor using detailed sub-hourly simulations of thousands of buildings coupled to distribution power flow software. More generally, this work develops a methodology for the directed optimization of residential HVAC operation using a distributed but directed MPC scheme that can be applied to today's programmable thermostat technologies to address the increasing variability in electric supply and demand. Case studies incorporating varying levels of renewable energy generation demonstrate the approach and highlight important considerations for large-scale residential model predictive control.
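
    The core of such a scheme, a model predictive controller for a single thermostat, can be sketched with a first-order resistance-capacitance thermal model and a convex program solved over a receding horizon. The formulation below is a generic illustration written with cvxpy; the building model, price signal and parameters are assumptions, not those used in this work.

```python
import numpy as np
import cvxpy as cp

# Generic single-zone cooling MPC sketch with a first-order RC thermal model:
#   T[k+1] = a*T[k] + (1 - a)*T_out[k] + b*u[k]
# where u is cooling power. All parameter values are illustrative assumptions.
N = 24                                   # horizon of 24 hourly steps
a, b = 0.9, -0.5                         # thermal inertia and cooling gain (assumed)
T_out = 30 + 5 * np.sin(np.linspace(0, 2 * np.pi, N))      # outdoor temperature [C]
price = 0.10 + 0.15 * (np.arange(N) >= 16)                 # higher price after hour 16

T = cp.Variable(N + 1)
u = cp.Variable(N, nonneg=True)
constraints = [T[0] == 24]
for k in range(N):
    constraints += [T[k + 1] == a * T[k] + (1 - a) * T_out[k] + b * u[k],
                    T[k + 1] >= 21, T[k + 1] <= 25,         # comfort band [C]
                    u[k] <= 3.0]                            # equipment limit [kW]
problem = cp.Problem(cp.Minimize(price @ u), constraints)   # minimize energy cost
problem.solve()
print("optimal cooling schedule (kW):", np.round(u.value, 2))
```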

  20. Demand Response and Economic Dispatch of Power Systems Considering Large-Scale Plug-in Hybrid Electric Vehicles/Electric Vehicles (PHEVs/EVs): A Review

    Directory of Open Access Journals (Sweden)

    Xiaohui Xu

    2013-08-01

    Increasing concerns about global environmental issues have led to the urgent development of green transportation. The enthusiasm of governments should encourage the prosperity of the plug-in hybrid electric vehicles/electric vehicles (PHEVs/EVs) industry in the near future. PHEVs/EVs are not only an alternative to gasoline but are also burgeoning units for power systems. The impact of large-scale PHEVs/EVs on power systems is of profound significance. This paper discusses how to use PHEVs/EVs as a useful new tool for system operation and regulation from a review of recent studies and mainly considers two mainstream methods: demand response and economic dispatch. The potential of using PHEVs/EVs to coordinate renewable energy resources is also discussed in terms of accepting more renewable resources without violating the safety and the reliability of power systems or increasing the operation cost significantly.

  1. Efficient and large scale synthesis of graphene from coal and its film electrical properties studies.

    Science.gov (United States)

    Wu, Yingpeng; Ma, Yanfeng; Wang, Yan; Huang, Lu; Li, Na; Zhang, Tengfei; Zhang, Yi; Wan, Xiangjian; Huang, Yi; Chen, Yongsheng

    2013-02-01

    Coal, which is abundant and has an incompact structure, is a good candidate to replace graphite as the raw material for the production of graphene. Here, a new solution-phase technique for the preparation of graphene from coal has been developed. The precursor, graphene oxide (GO) obtained from coal, was examined by atomic force microscopy, dynamic light scattering and X-ray diffraction; the results showed that the GO consisted of small, single-layer sheets. The graphene was examined by X-ray photoelectron spectroscopy and Raman spectroscopy. Furthermore, graphene films have been prepared using a direct solution process, and their electrical conductivity and Hall effect have been studied. The results showed that the conductivity of the films could reach as high as 2.5 x 10^5 S m^-1 and exhibited n-type behavior.

  2. Impacts of large-scale Intermittent Renewable Energy Sources on electricity systems, and how these can be modeled

    NARCIS (Netherlands)

    Brouwer, Anne Sjoerd; Van Den Broek, Machteld; Seebregts, Ad; Faaij, André

    The electricity sector in OECD countries is on the brink of a large shift towards low-carbon electricity generation. Power systems after 2030 may consist largely of two low-carbon generator types: Intermittent Renewable Energy Sources (IRES) such as wind and solar PV and thermal generators such as

  3. Neurite, a finite difference large scale parallel program for the simulation of electrical signal propagation in neurites under mechanical loading.

    Directory of Open Access Journals (Sweden)

    Julián A García-Grajales

    With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally only characterized by purely mechanistic criteria: functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of the simulated cells grows. The solvers implemented in Neurite, explicit and implicit, were therefore parallelized using graphics processing units in order to reduce the burden of the simulation costs of large scale scenarios. Cable theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite mechanical behavior within its surrounding medium was adopted as a link between electrophysiology and mechanics. This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon...
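
    The numerical core referred to above, a finite difference discretization of the cable equation, can be illustrated for the passive case. The sketch below omits the Hodgkin-Huxley channels, myelination, GPU parallelization and the mechanical coupling that Neurite implements, and uses typical textbook parameters rather than the program's values.

```python
import numpy as np

# Explicit finite-difference sketch of the passive cable equation
#   tau * dV/dt = lambda^2 * d2V/dx2 - V + V_drive
# where V is the membrane potential deviation from rest.
lam, tau = 0.1, 0.01             # space constant [cm], membrane time constant [s]
L, dx = 0.4, 0.005               # neurite length [cm], spatial step [cm]
dt = 0.2 * dx**2 * tau / lam**2  # time step well inside the explicit stability limit
nx, nt = int(L / dx) + 1, 6000

V = np.zeros(nx)                       # membrane potential deviation [mV]
drive = np.zeros(nx); drive[0] = 20.0  # sustained depolarizing drive at one end [mV]

for _ in range(nt):
    d2V = np.zeros(nx)
    d2V[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    V += dt / tau * (lam**2 * d2V - V + drive)
    V[-1] = V[-2]                      # sealed (no-flux) boundary at the far end

# For a long passive cable the steady profile decays roughly as exp(-x/lambda).
print("potential along the neurite [mV]:", np.round(V[:: nx // 8], 2))
```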

  4. Impact of large scale wind power on the Nordic electricity system

    International Nuclear Information System (INIS)

    Holttinen, Hannele

    2006-01-01

    Integration costs of wind power depend on how much wind power is added and where, and on the power system: load, generation flexibility, interconnections. When wind power is added to a large interconnected power system there is a considerable smoothing effect on the production, and the increase in reserve requirements stays at a low level. A 10 percent penetration of wind power is not a problem in the Nordic countries, as long as wind power is built in all 4 countries. Increasing the share of wind power will increase the integration costs: a 20 percent penetration would need more flexibility in the system. That will not happen in the near future for Nordel, and the power system will probably also contain more flexible elements at that stage, like producing fuel for vehicles (ml)

  5. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, and the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, followed by a state-of-the-art review of the electrolysis modules currently available. A review of the large scale electrolysis plants that have been installed around the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers are discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis is evaluated. (authors)
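
    The electricity-price sensitivity mentioned at the end comes down to a simple proportionality: a kilogram of hydrogen carries about 33.3 kWh of energy (lower heating value), so an electrolyser operating at, say, 65% LHV efficiency consumes roughly 51 kWh of electricity per kilogram. The efficiency and price levels in the sketch below are assumed illustrative values, not figures from the study.

```python
# Electricity-price sensitivity of electrolytic hydrogen (illustrative assumptions).
LHV_H2 = 33.3            # kWh per kg of hydrogen (lower heating value)
efficiency = 0.65        # assumed system efficiency on an LHV basis

specific_consumption = LHV_H2 / efficiency          # ~51 kWh of electricity per kg H2
for price_eur_per_mwh in (20, 50, 80):              # assumed electricity prices
    cost = specific_consumption * price_eur_per_mwh / 1000.0
    print(f"at {price_eur_per_mwh} EUR/MWh: ~{cost:.2f} EUR/kg H2 for electricity alone")
```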

  6. Large-Scale Single Particle and Cell Trapping based on Rotating Electric Field Induced-Charge Electroosmosis.

    Science.gov (United States)

    Wu, Yupan; Ren, Yukun; Tao, Ye; Hou, Likai; Jiang, Hongyuan

    2016-12-06

    We propose a simple, inexpensive microfluidic chip for large-scale trapping of single particles and cells based on induced-charge electroosmosis in a rotating electric field (ROT-ICEO). A central floating electrode array was placed in the center of the gap between four driving electrodes in a quadrature configuration and used to immobilize single particles or cells. Cells were trapped on the electrode array by the interaction between the ROT-ICEO flow and the buoyancy flow. We experimentally optimized the efficiency of trapping single particles by investigating important parameters such as particle or cell density and electric potential. Experimental and numerical results showed good agreement. The operation of the chip was verified by trapping single polystyrene (PS) microspheres with diameters of 5 and 20 μm and single yeast cells. The highest single particle occupancy of 73% was obtained using a floating electrode array with a diameter of 20 μm, with an amplitude voltage of 5 V and a frequency of 10 kHz, for PS microbeads with a 5 μm diameter and a density of 800 particles/μL. The ROT-ICEO flow could hold cells against fluid flows with a rate of less than 0.45 μL/min. This novel, simple and robust method to trap single cells has enormous potential in genetic and metabolic engineering.

  7. Theory and algorithms for solving large-scale numerical problems. Application to the management of electricity production

    International Nuclear Information System (INIS)

    Chiche, A.

    2012-01-01

    This manuscript deals with large-scale optimization problems, and more specifically with solving the electricity unit commitment problem arising at EDF. First, we focus on the augmented Lagrangian algorithm. The behavior of that algorithm on an infeasible convex quadratic optimization problem is analyzed. It is shown that the algorithm finds a point that satisfies the shifted constraints with the smallest possible shift in the sense of the Euclidean norm, and that it minimizes the objective on the corresponding shifted constrained set. Convergence to such a point occurs at a global linear rate, which depends explicitly on the augmentation parameter. This suggests a rule for determining the augmentation parameter so as to control the speed of convergence of the shifted constraint norm to zero. This rule has the advantage of generating bounded augmentation parameters even when the problem is infeasible. As a by-product, the algorithm computes the smallest translation in the Euclidean norm that makes the constraints feasible. Furthermore, this work provides solution methods for industrial stochastic optimization problems decomposed on a scenario tree, based on the progressive hedging algorithm introduced by [Rockafellar et Wets, 1991]. We also focus on the convergence of that algorithm. On the one hand, we offer a counter-example showing that the algorithm can diverge if its augmentation parameter is iteratively updated. On the other hand, we show how to recover the multipliers associated with the non-dualized constraints defined on the scenario tree from those associated with the corresponding constraints of the scenario subproblems. Their convergence is also analyzed for convex problems. The practical interest of these solution techniques is corroborated by numerical experiments performed on the electric production management problem. We apply the progressive hedging algorithm to a realistic industrial problem. More precisely, we solve the French medium
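
    As a minimal illustration of the augmented Lagrangian iteration discussed above (a generic textbook sketch, not the EDF implementation), the following applies the method to a tiny equality-constrained convex QP; the norm of the constraint residual should shrink at a linear rate governed by the assumed augmentation parameter r.

```python
import numpy as np

# Augmented Lagrangian on a tiny convex QP:  min 0.5 x'Qx + c'x  s.t.  Ax = b.
# Toy illustration only; problem data and parameter choices are assumptions.
Q = np.diag([2.0, 4.0])
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

r = 10.0                      # augmentation parameter
lam = np.zeros(1)             # multiplier estimate
for k in range(20):
    # x-step: minimise the augmented Lagrangian (quadratic, solved exactly)
    H = Q + r * A.T @ A
    g = c + A.T @ (lam - r * b)
    x = np.linalg.solve(H, -g)
    residual = A @ x - b      # shifted-constraint norm should decay linearly
    lam = lam + r * residual  # multiplier update
    print(k, np.linalg.norm(residual))
```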

  8. The role of large scale energy storage systems in the electricity grid of the Netherlands in 2050

    NARCIS (Netherlands)

    Velthuis, Martin

    2012-01-01

    SUMMARY The burning of fossil fuels for electricity generation has an environmental impact on a global scale. Moreover, the world will run out of fossil fuels within, or even before, the next century. This is the reason why renewable energy so

  9. Estimating the electricity prices, generation costs and CO2 emissions of large scale wind energy exports from Ireland to Great Britain

    International Nuclear Information System (INIS)

    Cleary, Brendan; Duffy, Aidan; Bach, Bjarne; Vitina, Aisma; O’Connor, Alan; Conlon, Michael

    2016-01-01

    The share of wind generation in the Irish and British electricity markets is set to increase by 2020 due to renewable energy (RE) targets. The United Kingdom (UK) and Ireland have set ambitious targets which require, respectively, 30% and 40% of electricity demand to come from RE, mainly wind, by 2020. Ireland has sufficient indigenous onshore wind energy resources to exceed its RE target, while the UK faces uncertainty in achieving its target. A possible solution for the UK is to import RE directly from large scale onshore and offshore wind energy projects in Ireland; this possibility has recently been explored by both governments but is currently on hold. Thus, the aim of this paper is to estimate the effects of large scale wind energy in the Irish and British electricity markets in terms of wholesale system marginal prices, total generation costs and CO2 emissions. The results indicate that when the large scale Irish-based wind energy projects are connected directly to the UK, there is a decrease of 0.6% and 2% in the Irish and British wholesale system marginal prices, respectively, under the UK National Grid slow progression scenario. - Highlights: • Modelling the Irish and British electricity markets. • Investigating the impacts of large scale wind energy within the markets. • Results indicate a reduction in wholesale system marginal prices in both markets. • Decrease in total generation costs and CO2 emissions in both markets.

  10. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    World famous architects today challenge the exposure of concrete in their architecture. It is my hope to be able to complement these. I try to develop new aesthetic potentials for the concrete and ceramics, in large scales that have not been seen before in the ceramic area. It is expected to resul...

  11. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, aimed at ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, on entrustment from the Science and Technology Agency. Subsequently, to establish the safety of PWRs in loss-of-coolant accidents through joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus to examine overall system effects, and one using a plate core testing apparatus to examine individual effects. Each apparatus comprises mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  12. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed on the large ZZ 8000 testing machine (maximum load 80 MN) at the SKODA WORKS. Results are described from tests of the material's resistance to non-ductile fracture, covering both base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During the cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  13. Impact of incomplete metal coverage on the electrical properties of metal-CNT contacts: A large-scale ab initio study

    Energy Technology Data Exchange (ETDEWEB)

    Fediai, Artem, E-mail: artem.fediai@nano.tu-dresden.de; Ryndyk, Dmitry A. [Institute for Materials Science and Max Bergman Center of Biomaterials, TU Dresden, 01062 Dresden (Germany); Center for Advancing Electronics Dresden, TU Dresden, 01062 Dresden (Germany); Seifert, Gotthard [Theoretical Chemistry, TU Dresden, 01062 Dresden (Germany); Center for Advancing Electronics Dresden, TU Dresden, 01062 Dresden (Germany); Dresden Center for Computational Materials Science, TU Dresden, 01062 Dresden (Germany); Mothes, Sven; Schroter, Michael; Claus, Martin [Chair for Electron Devices and Integrated Circuits, TU Dresden, 01062 Dresden (Germany); Center for Advancing Electronics Dresden, TU Dresden, 01062 Dresden (Germany); Cuniberti, Gianaurelio [Institute for Materials Science and Max Bergman Center of Biomaterials, TU Dresden, 01062 Dresden (Germany); Center for Advancing Electronics Dresden, TU Dresden, 01062 Dresden (Germany); Dresden Center for Computational Materials Science, TU Dresden, 01062 Dresden (Germany)

    2016-09-05

    Using a dedicated combination of the non-equilibrium Green function formalism and large-scale density functional theory calculations, we investigated how incomplete metal coverage influences two of the most important electrical properties of carbon nanotube (CNT)-based transistors: contact resistance and its scaling with contact length, and maximum current. These quantities have been derived from parameter-free simulations of atomic systems that are as close as possible to experimental geometries. Physical mechanisms that govern these dependences have been identified for various metals, representing different CNT-metal interaction strengths from chemisorption to physisorption. Our results pave the way for an application-oriented design of CNT-metal contacts.
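
    Outside of ab initio treatments, the scaling of contact resistance with contact length is often rationalised with a phenomenological distributed (transmission-line) contact model. The sketch below shows that textbook picture with assumed per-length parameters r and g; it is illustrative only and is not the NEGF/DFT methodology used in the paper.

```python
import numpy as np

# Phenomenological transmission-line picture of side-contact resistance scaling
# with contact length (NOT the ab initio NEGF/DFT approach of the paper).
# r: CNT resistance per unit length under the metal [ohm/nm]   -- assumed
# g: metal-to-CNT conductance per unit length [S/nm]           -- assumed
r, g = 20.0, 1e-4
L_T = 1.0 / np.sqrt(r * g)          # transfer length [nm]

def contact_resistance(L_c):
    """Distributed-contact resistance for contact length L_c [nm]."""
    return np.sqrt(r / g) / np.tanh(L_c / L_T)

for L_c in (10, 50, 100, 500, 1000):
    print(f"L_c = {L_c:5d} nm  ->  R_c ~ {contact_resistance(L_c):8.1f} ohm")
# R_c saturates once L_c exceeds a few transfer lengths.
```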

  14. Personal attitudes towards large-scale technologies. The perception of risks and benefits of electricity generation with uranium and coal

    Energy Technology Data Exchange (ETDEWEB)

    Midden, C.J.H.; Daamen, D.D.L.; Verplanken, B.

    1984-06-01

    The paper discusses the distribution of attitudes towards the large-scale application of coal and uranium, the belief systems underlying these attitudes, the perceived probabilities of a number of consequences of these energy sources, and the consequences of these attitudes for behaviour and behavioural intentions. Attention is also paid to other aspects of people's evaluations of these energy technologies: involvement with the perceived problems, personal efficacy in influencing collective decisions, information acquisition and information level, the imaginability of accidents, anxiety, and reactions to local plants. The study was designed following an extended and adapted version of the attitude-behaviour model of Fishbein and Ajzen.

  15. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro...... peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  16. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... based on measurements on the Marstal plant, Denmark, and through comparison with published and unpublished data from other plants. Evaluations of the thermal, economic and environmental performance are reported, based on experiences from the last decade. For detailed design, a computer simulation...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors...

  17. Large-scale integration of optimal combinations of PV, wind and wave power into the electricity supply

    DEFF Research Database (Denmark)

    Lund, Henrik

    2006-01-01

    ancillary services are needed in order to secure the electricity supply system. The idea is to benefit from the different patterns in the fluctuations of different renewable sources. And the purpose is to identify optimal mixtures from a technical point of view. The optimal mixture seems to be when onshore...... wind power produces approximately 50% of the total electricity production from RES. Meanwhile, the mixture between PV and wave power seems to depend on the total amount of electricity production from RES. When the total RES input is below 20% of demand, PV should cover 40% and wave power only 10%. When...
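
    To make the idea of scanning for an optimal technology mixture concrete, the sketch below generates synthetic hourly profiles and picks the PV/wind/wave split that minimises the variability of the residual load. The profiles, weights and RES share are invented for illustration and do not reproduce the analysis or data of the paper.

```python
import numpy as np

# Toy search for the RES mixture that minimises residual-load variability.
# Synthetic profiles only; not the data or model used in the paper.
rng = np.random.default_rng(0)
h = np.arange(8760)
demand = 1.0 + 0.2 * np.sin(2 * np.pi * h / 24)                   # normalised load
pv     = np.clip(np.sin(2 * np.pi * (h % 24 - 6) / 24), 0, None)  # daytime only
wind   = 0.5 + 0.5 * rng.random(8760)                             # noisy
wave   = np.roll(wind, 48) * 0.8 + 0.2                            # lagged, smoother

res_share = 0.4  # assumed total RES share of demand
best = None
for w_pv in np.linspace(0, 1, 21):
    for w_wind in np.linspace(0, 1 - w_pv, 21):
        w_wave = 1 - w_pv - w_wind
        mix = (w_pv * pv / pv.mean() + w_wind * wind / wind.mean()
               + w_wave * wave / wave.mean())
        residual = demand - res_share * demand.mean() * mix
        score = residual.std()
        if best is None or score < best[0]:
            best = (score, w_pv, w_wind, w_wave)
print("lowest residual std %.3f at PV %.2f / wind %.2f / wave %.2f" % best)
```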

  18. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
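
    One of the simplest alternatives to the multi-hypothesis trackers mentioned above is global-nearest-neighbour association. The sketch below pairs detections to existing tracks with the Hungarian algorithm and is purely illustrative; it is not one of the algorithms evaluated in the report, and all positions and the gate distance are invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Minimal global-nearest-neighbour data association step (illustrative only).
tracks     = np.array([[10.0, 10.0], [50.0, 40.0]])   # predicted track positions
detections = np.array([[11.0,  9.0], [48.0, 42.0], [90.0, 90.0]])

# Cost matrix = Euclidean distance between every track and every detection.
cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
row, col = linear_sum_assignment(cost)

gate = 5.0  # reject assignments farther than the gate distance
for t, d in zip(row, col):
    if cost[t, d] < gate:
        print(f"track {t} <- detection {d} (distance {cost[t, d]:.1f})")
# Unmatched detections (here the one at [90, 90]) would seed new tracks.
```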

  19. Large-Scale Battery System Development and User-Specific Driving Behavior Analysis for Emerging Electric-Drive Vehicles

    Directory of Open Access Journals (Sweden)

    Yihe Sun

    2011-04-01

    Full Text Available Emerging green-energy transportation, such as hybrid electric vehicles (HEVs) and plug-in HEVs (PHEVs), has a great potential for reducing fuel consumption and greenhouse emissions. The lithium-ion battery system used in these vehicles, however, is bulky, expensive and unreliable, and has been the primary roadblock for transportation electrification. Meanwhile, few studies have considered user-specific driving behavior and its significant impact on (P)HEV fuel efficiency, battery system lifetime, and the environment. This paper presents a detailed investigation of battery system modeling and real-world user-specific driving behavior analysis for emerging electric-drive vehicles. The proposed model is fast to compute and accurate for analyzing battery system run-time and long-term cycle life, with a focus on temperature-dependent battery system capacity fading and variation. The proposed solution is validated against physical measurements using real-world user driving studies, and has been adopted to facilitate battery system design and optimization. Using the collected real-world hybrid vehicle and run-time driving data, we have also conducted detailed analytical studies of users' specific driving patterns and their impacts on hybrid vehicle electric energy and fuel efficiency. This work provides a solid foundation for future energy control with emerging electric-drive applications.
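
    As a hedged illustration of the kind of temperature-dependent capacity-fade behaviour such a model targets, the sketch below uses a generic Arrhenius-type fade law with invented coefficients; it is not the validated model from the paper.

```python
import numpy as np

# Generic Arrhenius-type capacity-fade law (illustrative coefficients, not the
# validated model from the paper):  loss ~ B * exp(-Ea/(R*T)) * Ah_throughput^z
R  = 8.314          # gas constant [J/(mol*K)]
Ea = 31500.0        # activation energy [J/mol]      -- assumed
B  = 3.0e3          # pre-exponential factor          -- assumed
z  = 0.55           # throughput exponent             -- assumed

def capacity_loss_pct(temp_c, ah_throughput):
    """Approximate percentage capacity loss after a given Ah throughput."""
    T = temp_c + 273.15
    return B * np.exp(-Ea / (R * T)) * ah_throughput ** z

for temp in (15, 25, 45):
    print(f"{temp} C after 20,000 Ah: {capacity_loss_pct(temp, 20000):.1f} % capacity loss")
```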

  20. Assessing Impact of Large-Scale Distributed Residential HVAC Control Optimization on Electricity Grid Operation and Renewable Energy Integration

    OpenAIRE

    Corbin, Charles

    2014-01-01

    Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite...
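
    To ground the MPC idea, here is a minimal one-zone sketch: a first-order thermal model, a comfort bound, and a cost-minimising linear program over a short horizon solved with scipy. The model constants, prices and comfort limits are assumptions, not values from the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal residential HVAC MPC step (illustrative constants, not from the thesis).
# Thermal model: T[k+1] = a*T[k] + (1 - a)*(T_out[k] - eta*P[k])   (cooling)
H      = 6                       # horizon [hours]
a      = 0.9                     # thermal inertia coefficient
eta    = 4.0                     # degC of cooling per kW of HVAC power
T0     = 24.0                    # current indoor temperature [C]
T_out  = np.array([30, 32, 33, 31, 29, 27], float)
price  = np.array([0.10, 0.15, 0.30, 0.30, 0.15, 0.10])  # $/kWh time-of-use
T_max  = 25.0                    # comfort upper bound
P_max  = 3.0                     # HVAC power limit [kW]

# Decision variables: P[0..H-1]. Indoor temperature is affine in P:
# T[k+1] = a^(k+1)*T0 + sum_j a^(k-j)*(1-a)*(T_out[j] - eta*P[j])
A_ub, b_ub = [], []
for k in range(H):
    coeff = np.zeros(H)
    free  = a ** (k + 1) * T0
    for j in range(k + 1):
        w = a ** (k - j) * (1 - a)
        free += w * T_out[j]
        coeff[j] = -w * eta          # contribution of P[j] to T[k+1]
    A_ub.append(coeff)               # enforce T[k+1] <= T_max
    b_ub.append(T_max - free)

res = linprog(c=price, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, P_max)] * H)
print("HVAC schedule [kW]:", np.round(res.x, 2))
```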

  1. SOLID-DER. Reaching large-scale integration of Distributed Energy Resources in the enlarged European electricity market

    International Nuclear Information System (INIS)

    Van Oostvoorn, F.; Ten Donkelaar, M.

    2007-05-01

    The integration of DER (distributed energy resources) in the European electricity networks has become a key issue for energy producers, network operators, policy makers and the R and D community. In some countries it has already created a number of challenges for the stability of the electricity supply system, thereby creating new barriers to a further expansion of the share of DER in supply. On the other hand, in many Member States there is still a lack of awareness and understanding of the possible benefits and role of DER in the electricity system, while environmental goals and security of supply issues increasingly call for solutions that DER could provide in the future. The SOLID-DER project, a Coordination Action, will assess the barriers to further integration of DER, and will overcome both the lack of awareness of the benefits of DER solutions and the fragmentation of EU R and D results by consolidating all European DER research activities and reporting on their common findings. In particular, awareness of DER solutions and benefits will be raised in the new Member States, thereby addressing their specific issues and barriers and incorporating them into the existing EU DER R and D community. The SOLID-DER Coordination Action will run from November 2005 to October 2008

  2. Large-scale integration of renewable and distributed generation of electricity in Spain: Current situation and future needs

    International Nuclear Information System (INIS)

    Cossent, Rafael; Gómez, Tomás; Olmos, Luis

    2011-01-01

    Similar to other European countries, mechanisms for the promotion of electricity generation from renewable energy sources (RESs) and combined heat and power (CHP) production have caused a significant growth in distributed generation (DG) in Spain. Low DG/RES penetration levels do not have a major impact on electricity systems. However, several problems arise as DG shares increase. Smarter distribution grids are deemed necessary to facilitate DG/RES integration. This involves modifying the way distribution networks are currently planned and operated. Furthermore, DG and demand should also adopt a more active role. This paper reviews the current situation of DG/RES in Spain including penetration rates, support payments for DG/RES, level of market integration, economic regulation of Distribution System Operators (DSOs), smart metering implementation, grid operation and planning, and incentives for DSO innovation. This paper identifies several improvements that could be made to the treatment of DG/RES. Key aspects of an efficient DG/RES integration are identified and several regulatory changes specific to the Spanish situation are recommended. - Highlights: ► Substantial DG/RES penetration levels are foreseen for the coming years in Spain. ► Integrating such amount of DG/RES in electricity markets and networks is challenging. ► We review key regulatory aspects that may affect DG/RES integration in Spain. ► Several recommendations aimed at easing DG/RES integration in Spain are provided. ► Market integration and the transition towards smarter grids are deemed key issues.

  3. Role of National Support Policy in the large-scale integration of DER into the European electricity market

    DEFF Research Database (Denmark)

    ten Donkelaar, Michael; Klinge Jacobsen, Henrik

    2008-01-01

    This report concerns a study of the DER support schemes in the different EU Member States, their effectiveness and if necessary how these might be moulded to become more cost-effective in the future to integrate much larger shares of DER in the European electricity supply system. The report is part...... of a set of reports on DER integration issues and together they present a full and complete report on key issues of policy support, required changes in regulation and other issues that hamper more DER integration in supply....

  4. Mathematical modelling and optimization of a large-scale combined cooling, heat, and power system that incorporates unit changeover and time-of-use electricity price

    International Nuclear Information System (INIS)

    Zhu, Qiannan; Luo, Xianglong; Zhang, Bingjian; Chen, Ying

    2017-01-01

    Highlights: • We propose a novel superstructure for the design and optimization of LSCCHP. • A multi-objective multi-period MINLP model is formulated. • The unit start-up cost and time-of-use electricity prices are involved. • Unit size discretization strategy is proposed to linearize the original MINLP model. • A case study is elaborated to demonstrate the effectiveness of the proposed method. - Abstract: Building energy systems, particularly large public ones, are major energy consumers and pollutant emission contributors. In this study, a superstructure of large-scale combined cooling, heat, and power system is constructed. The off-design unit, economic cost, and CO2 emission models are also formulated. Moreover, a multi-objective mixed integer nonlinear programming model is formulated for the simultaneous system synthesis, technology selection, unit sizing, and operation optimization of large-scale combined cooling, heat, and power system. Time-of-use electricity price and unit changeover cost are incorporated into the problem model. The economic objective is to minimize the total annual cost, which comprises the operation and investment costs of large-scale combined cooling, heat, and power system. The environmental objective is to minimize the annual global CO2 emission of large-scale combined cooling, heat, and power system. The augmented ε-constraint method is applied to achieve the Pareto frontier of the design configuration, thereby reflecting the set of solutions that represent optimal trade-offs between the economic and environmental objectives. Sensitivity analysis is conducted to reflect the impact of natural gas price on the combined cooling, heat, and power system. The synthesis and design of combined cooling, heat, and power system for an airport in China is studied to test the proposed synthesis and design methodology. The Pareto curve of multi-objective optimization shows that the total annual cost varies from 102.53 to 94.59 M
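
    A hedged miniature of the ε-constraint idea: enumerate a toy design space, then sweep an emissions cap and keep the cheapest design under each cap to trace a cost-emissions Pareto front. The unit options and their cost and emission figures are invented, and the real problem is a MINLP solved with the augmented ε-constraint method, not this brute-force enumeration.

```python
import itertools

# Toy epsilon-constraint sweep over a tiny CCHP design space (invented data).
# Each option: (units installed, annual_cost_MEUR, annual_CO2_kt)
units = {
    "gas_turbine":        [(0, 0.0, 0.0), (1, 40.0, 90.0), (2, 75.0, 170.0)],
    "absorption_chiller": [(0, 0.0, 0.0), (1, 10.0, 5.0)],
    "electric_chiller":   [(0, 0.0, 0.0), (1, 8.0, 20.0)],
}

designs = []
for combo in itertools.product(*units.values()):
    cost = sum(c for _, c, _ in combo)
    co2  = sum(e for _, _, e in combo)
    if any(n for n, _, _ in combo):            # at least one unit must be built
        designs.append((cost, co2, combo))

for eps in (200, 150, 100, 50, 25):            # CO2 cap [kt/year]
    feasible = [d for d in designs if d[1] <= eps]
    if feasible:
        cost, co2, combo = min(feasible)       # cheapest design under the cap
        print(f"cap {eps:3d} kt -> cost {cost:5.1f} MEUR, CO2 {co2:5.1f} kt")
```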

  5. Large scale composting model

    OpenAIRE

    Henon, Florent; Debenest, Gérald; Tremier, Anne; Quintard, Michel; Martel, Jean-Luc; Duchalais, Guy

    2012-01-01

    International audience; One way to treat organic wastes in accordance with environmental policies is to develop biological treatments such as composting. Nevertheless, this development largely relies on the quality of the final product and, as a consequence, on the quality of the biological activity during the treatment. Favourable conditions (oxygen concentration, temperature and moisture content) in the waste bed largely contribute to the establishment of a good aerobic biological activity an...

  6. Identifying barriers to large-scale integration of variable renewable electricity into the electricity market : A literature review of market design

    NARCIS (Netherlands)

    Hu, J.; Harmsen, R.; Crijns-Graus, Wina; Worrell, E.; van den Broek, M.A.

    For reaching the 2 °C climate target, the robust growth of electricity generation from variable renewable energy sources (VRE) in the power sector is expected to continue. Accommodating the power system to the variable, uncertain and location-dependent output of VRE causes integration costs.

  7. Large-scale integration of off-shore wind power and regulation strategies of cogeneration plants in the Danish electricity system

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2005-01-01

    The article analyses how the amount of small-scale CHP plants and heat pumps, and the regulation strategies applied to them, affect the quantity of off-shore wind power that may be integrated into the Danish electricity supply.

  8. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ('the Task') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  9. Navigation API Route Fuel Saving Opportunity Assessment on Large-Scale Real-World Travel Data for Conventional Vehicles and Hybrid Electric Vehicles: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Lei [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Holden, Jacob [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gonder, Jeffrey D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-12-06

    A green routing strategy, which instructs a vehicle to select a fuel-efficient route, offers fuel-saving opportunities to the current transportation system. This paper introduces a navigation API route fuel-saving evaluation framework for estimating the fuel advantages of alternative API routes based on large-scale, real-world travel data for conventional vehicles (CVs) and hybrid electric vehicles (HEVs). Navigation APIs, such as the Google Directions API, integrate traffic conditions and provide feasible alternative routes for origin-destination pairs. This paper develops two link-based fuel-consumption models stratified by link-level speed, road grade, and functional class (local/non-local), one for CVs and the other for HEVs. The link-based fuel-consumption models are built by assigning travel from a large number of GPS driving traces to the links of TomTom MultiNet, used as the underlying road network layer, together with road grade data from a U.S. Geological Survey elevation data set. Fuel consumption on a link is calculated by the proposed fuel-consumption model. This paper envisions two kinds of applications: 1) identifying alternate routes that save fuel, and 2) quantifying the potential fuel savings for large amounts of travel. An experiment based on a large-scale California Household Travel Survey GPS trajectory data set is conducted, and the fuel consumption and savings of CVs and HEVs are investigated. At the same time, the trade-off between fuel savings and time savings when choosing different routes is also examined for both powertrains.
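
    The core bookkeeping of a link-based model can be sketched as a lookup keyed by speed bin, grade bin and functional class, summed over a route. The bin boundaries and fuel rates below are placeholders, not the calibrated models from the paper.

```python
# Sketch of a link-based fuel-consumption lookup (placeholder rates, not the
# calibrated models from the paper). Key: (speed bin, grade bin, local?).
FUEL_RATE_L_PER_KM = {            # conventional vehicle, litres per km
    ("low",  "flat",   True):  0.095,
    ("low",  "flat",   False): 0.085,
    ("low",  "uphill", True):  0.130,
    ("high", "flat",   False): 0.070,
    ("high", "uphill", False): 0.110,
}

def bins(speed_kph, grade_pct, local):
    """Map raw link attributes to the lookup key."""
    speed = "low" if speed_kph < 60 else "high"
    grade = "uphill" if grade_pct > 2 else "flat"
    return (speed, grade, local)

def route_fuel(links):
    """links: iterable of (length_km, speed_kph, grade_pct, is_local)."""
    return sum(length * FUEL_RATE_L_PER_KM[bins(s, g, loc)]
               for length, s, g, loc in links)

route_a = [(2.0, 40, 0, True), (8.0, 90, 0, False)]   # arterial + freeway
route_b = [(6.0, 50, 3, True), (3.0, 90, 0, False)]   # hillier local shortcut
print("route A: %.2f L, route B: %.2f L" % (route_fuel(route_a), route_fuel(route_b)))
```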

  10. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  11. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  12. Electrical efficiency and renewable energy - Economical alternatives to large-scale power generation; Stromeffizienz und erneuerbare Energien - Wirtschaftliche alternative zu Grosskraftwerken

    Energy Technology Data Exchange (ETDEWEB)

    Oettli, B.; Hammer, S.; Moret, F.; Iten, R. [Infras, Zuerich (Switzerland)]; Nordmann, T. [TNC Consulting AG, Erlenbach (Switzerland)]

    2010-05-15

    This final report for WWF Switzerland, Greenpeace Switzerland, the Swiss Energy Foundation SES, Pro Natura and the Swiss Cantons of Basel City and Geneva examines the energy-relevant effects of the proposals made by Swiss electricity utilities for large-scale power generation. These proposals are compared with a strategy based on investments in energy efficiency and the use of renewable sources of energy. The effects of both scenarios on the environment and the risks involved are discussed, as are the investments required and the associated effects on the Swiss national economy. For the efficiency and renewables scenario, two implementation variants are discussed: domestic investment and production on the one hand, and foreign production options and/or imports from abroad on the other. The methods used in the study are introduced and discussed. Investment and cost considerations, earnings and effects on employment are also reviewed. The report is completed with an extensive appendix which, amongst other things, includes potential reviews, cost estimates and a discussion of 'smart grids'

  13. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale of 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  14. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  15. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics]

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40% through dimensioning and optimising the system at the design stage. (orig.)

  16. Allocation of thermoelectric units in short term in large scale electric power systems; Asignacion de unidades termoelectricas a corto plazo en sistemas electricos de potencia de gran escala

    Energy Technology Data Exchange (ETDEWEB)

    Guillen Moya, Isaias

    1987-08-01

    A method is presented for solving the problem of allocation (commitment) of thermoelectric units in large scale electric power systems. The problem consists in determining which generating units should be programmed to enter or leave operation during the intervals of the planning horizon, in such a way that the forecast demand for electric power is satisfied at minimum cost and in a reliable manner, while respecting the physical and operational restrictions of the power system components. The method comprises two stages: the first stage finds a feasible initial allocation of thermoelectric units by means of heuristic methods. The second stage produces a solution starting from that feasible initial allocation. The operating cost is reduced by applying dynamic programming in successive passes, in such a way that the product of each iteration constitutes the least-cost allocation found up to that stage. The search range for the optimal solution is reduced by applying Lagrangian relaxation techniques to select only the units with the greatest potential to reduce the operating cost. The algorithm is validated using a system representative of the Interconnected National System, consisting of 108 thermoelectric units grouped into 7 generation groups, for a planning horizon of one week divided into hourly intervals, containing 18,144 discrete variables, 18,144 continuous variables and 39,024 constraints. On a VAX 11/780 computer the problem is solved in 55 CPU minutes with an estimated sub-optimality of 1.02%, which indicates how close the solution is to the optimum. The main contributions of this thesis lie within the short-term operation planning of electric power systems and are: (1) the development of a heuristic-mathematical algorithm to solve the problem of allocation of thermoelectric units in large scale electric power systems in relatively short execution times. The algorithm efficiently combines Lagrangian relaxation techniques
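
    A minimal flavour of the two-stage idea, a heuristic priority-list commitment that could then be refined by cost-based passes, is sketched below for a handful of invented units and hours; the actual thesis applies dynamic programming and Lagrangian relaxation at far larger scale.

```python
# Toy priority-list unit commitment (invented units; the thesis uses dynamic
# programming and Lagrangian relaxation on a much larger system).
units = [  # (name, capacity_MW, marginal_cost_$per_MWh, no_load_cost_$per_h)
    ("U1", 400, 18.0, 600.0),
    ("U2", 300, 25.0, 400.0),
    ("U3", 150, 40.0, 150.0),
    ("U4", 100, 55.0,  80.0),
]
demand = [500, 650, 820, 700, 560]   # MW per hourly interval

# Priority order: lowest full-load average cost first.
order = sorted(units, key=lambda u: u[2] + u[3] / u[1])

for hour, load in enumerate(demand):
    committed, remaining, cost = [], load, 0.0
    for name, cap, mc, nl in order:
        if remaining <= 0:
            break
        gen = min(cap, remaining)            # dispatch the committed unit
        committed.append(f"{name}:{gen}MW")
        cost += nl + mc * gen
        remaining -= gen
    print(f"hour {hour}: {', '.join(committed)}  cost ${cost:,.0f}")
```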

  17. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    ... in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and Zaha Hadid's Ordrupgård's black curved smooth concrete surfaces. Furthermore, one can point to initiatives such as "Synlig beton" (visible concrete) that can be seen on the website www.synligbeton.dk and spæncom's aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project "Lasting large scale glazed concrete formwork," which I am working on at DTU, Department of Architectural Engineering, will be able to complement these. It is a project where I try to develop new aesthetic potentials for the concrete, in large scales that have not been seen before in the ceramic area. It is expected to result in new types of large scale and very thin, glazed concrete façades in building. If such are introduced in an architectural context as exposed surfaces...

  18. Large Scale Coordination of Small Scale Structures

    Science.gov (United States)

    Kobelski, Adam; Tarr, Lucas A.; Jaeggli, Sarah A.; Savage, Sabrina

    2017-08-01

    Transient brightenings are ubiquitous features of the solar atmosphere across many length and energy scales, the most energetic of which manifest as large-class solar flares. Often, transient brightenings originate in regions of strong magnetic activity and create strong observable enhancements across wavelengths from X-ray to radio, with notable dynamics on timescales of seconds to hours. The coronal aspects of these brightenings have often been studied by way of EUV and X-ray imaging and spectra. These events are likely driven by photospheric activity (such as flux emergence), with the coronal brightenings originating largely from chromospheric ablation (evaporation). Until recently, chromospheric and transition region observations of these events have been limited. However, new observational capabilities have become available which significantly enhance our ability to understand the bi-directional flow of energy through the chromosphere between the photosphere and the corona. We have recently obtained a unique data set with which to study this flow of energy through the chromosphere via the Interface Region Imaging Spectrograph (IRIS), Hinode EUV Imaging Spectrometer (EIS), Hinode X-Ray Telescope (XRT), Hinode Solar Optical Telescope (SOT), Solar Dynamics Observatory (SDO) Atmospheric Imaging Assembly (AIA), SDO Helioseismic and Magnetic Imager (HMI), Nuclear Spectroscopic Telescope Array (NuStar), Atacama Large Millimeter Array (ALMA), and Interferometric BIdimensional Spectropolarimeter (IBIS) at the Dunn Solar Telescope (DST). This data set targets a small active area near disk center which was tracked simultaneously for approximately four hours. Within this region, many transient brightenings were detected through multiple layers of the solar atmosphere. In this study, we combine the imaging data and use the spectra from EIS and IRIS to track flows from the photosphere (HMI, SOT) through the chromosphere and transition region (AIA, IBIS, IRIS, ALMA) into the corona

  19. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  20. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  1. How does market power affect the impact of large scale wind investment in 'energy only' wholesale electricity markets?

    International Nuclear Information System (INIS)

    Browne, Oliver; Poletti, Stephen; Young, David

    2015-01-01

    In the short run, it is well known that increasing wind penetration is likely to reduce spot market electricity prices due to the merit order effect. The long run effect is less clear, because new capacity investment will change in response to the wind penetration. In this paper we examine the interaction between capacity investment, wind penetration and market power by first using a least-cost generation expansion model to simulate capacity investment with increasing amounts of wind generation, and then using a computational agent-based model to predict electricity prices in the presence of market power. We find that the degree to which firms are able to exercise market power depends critically on the ratio of capacity to peak demand. For our preferred long run generation scenario we show that market power increases in some periods as wind penetration increases; however, the merit order effect counteracts this, with the result that prices overall remain flat. Returns to peakers increase significantly as wind penetration increases. The market power in turn leads to inefficient dispatch, which is exacerbated with large amounts of wind generation. - Highlights: • Increasing investment in wind generation is analyzed using an agent based model. • In an energy only market, increased total capacity reduces market power. • Increasing wind penetration results in more market power in some periods. • Market power causes dispatch inefficiencies, which grow as wind capacity increases.
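
    The merit order effect referred to above can be illustrated with a toy uniform-price market clearing: stack offers by price, clear against demand, and observe the price drop when zero-marginal-cost wind is added. The offer data are invented and competitive (no strategic bidding), so this does not reproduce the paper's agent-based model.

```python
# Toy uniform-price market clearing illustrating the merit-order effect
# (invented, competitive offers; not the agent-based model of the paper).
offers = [  # (technology, quantity_MW, offer_$per_MWh)
    ("hydro", 300, 5.0), ("coal", 400, 35.0), ("gas_ccgt", 300, 60.0),
    ("gas_peaker", 200, 150.0),
]
demand = 900  # MW

def clearing_price(offer_stack, demand_mw):
    """Clear offers in price order and return the marginal (price-setting) offer."""
    served = 0
    for _, qty, price in sorted(offer_stack, key=lambda o: o[2]):
        served += qty
        if served >= demand_mw:
            return price
    raise ValueError("insufficient capacity")

print("price without wind:", clearing_price(offers, demand), "$/MWh")
wind = [("wind", 250, 0.0)]
print("price with 250 MW wind:", clearing_price(offers + wind, demand), "$/MWh")
```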

  2. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT & T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  3. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  4. A large electrically excited synchronous generator

    DEFF Research Database (Denmark)

    2014-01-01

    This invention relates to a large electrically excited synchronous generator (100), comprising a stator (101), and a rotor or rotor coreback (102) comprising an excitation coil (103) generating a magnetic field during use, wherein the rotor or rotor coreback (102) further comprises a plurality...... adjacent neighbouring poles. In this way, a large electrically excited synchronous generator (EESG) is provided that readily enables a relatively large number of poles, compared to a traditional EESG, since the excitation coil in this design provides MMF for all the poles, whereas in a traditional EESG...

  5. Large-scale deployment of electric vehicles in Germany by 2030: An analysis of grid-to-vehicle and vehicle-to-grid concepts

    International Nuclear Information System (INIS)

    Loisel, Rodica; Pasaoglu, Guzay; Thiel, Christian

    2014-01-01

    This study analyses battery electric vehicles (BEVs) in the future German power system and makes projections of the BEVs' hourly load profile by car size (‘mini’, ‘small’, ‘compact’ and ‘large’). By means of a power plant dispatching optimisation model, the study assesses the optimal BEV charging/discharging strategies in grid-to-vehicle (G2V) and vehicle-to-grid (V2G) schemes. The results show that the 2% rise in power demand required to power these BEVs does not hamper system stability, provided an optimal G2V scheme is applied. Moreover, such BEV deployment can contribute to further integrating wind and solar power generation. Applying a V2G scheme would increase the capacity factors of base and mid-load power plants, leading to a higher integration of intermittent renewables and resulting in a decrease in system costs. However, the evaluation of the profitability of BEVs shows that applying a V2G scheme is not a viable economic option due to the high cost of investing in batteries. Some BEV owners would make modest profits (€6 a year), but a higher number would sustain losses, for reasons of scale. For BEVs to become part of the power system, further incentives are necessary to make the business model attractive to car owners. - Highlights: • Optimal strategies for charging/discharging battery electric vehicles are assessed. • G2V scheme improves the stability of the future German power system. • V2G scheme would increase the capacity factors of base and mid-load power plants. • V2G scheme is not a viable economic option due to high batteries investment cost. • Further incentives are necessary to make the business model attractive to car owners
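
    A minimal sketch of the G2V idea: charge a required energy volume in the cheapest hours subject to a charger power limit. The prices and vehicle parameters are assumptions, not outputs of the dispatch model used in the study.

```python
# Greedy G2V charging: meet the required energy in the cheapest available hours.
# Prices and vehicle parameters are illustrative assumptions.
prices = {0: 42, 1: 38, 2: 35, 3: 33, 4: 34, 5: 40,   # EUR/MWh by hour
          18: 80, 19: 95, 20: 90, 21: 70, 22: 55, 23: 48}
energy_needed_kwh = 14.0      # assumed overnight charging need of a 'compact' BEV
charger_kw        = 3.7       # assumed single-phase home charger limit

schedule = {}
remaining = energy_needed_kwh
for hour in sorted(prices, key=prices.get):      # cheapest hours first
    if remaining <= 0:
        break
    charge = min(charger_kw, remaining)          # kWh delivered in this hour
    schedule[hour] = charge
    remaining -= charge

cost = sum(prices[h] / 1000 * kwh for h, kwh in schedule.items())  # EUR
print("charging plan (hour -> kWh):", dict(sorted(schedule.items())))
print(f"energy cost: {cost:.2f} EUR")
```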

  6. Large-scale energy consumers pay less

    International Nuclear Information System (INIS)

    Denneman, A.

    2012-01-01

    The price of electricity in the Netherlands rose by 6 percent in the first quarter of 2012, whereas large business consumers are paying less. The natural gas price has risen by about 10 percent over the last year, both for households and for large business consumers. Meanwhile, households are paying twice as much for electricity and gas as large business consumers. [nl]

  7. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  8. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  9. Large-Scale Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  10. Large scale homing in honeybees.

    Directory of Open Access Journals (Sweden)

    Mario Pahl

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama.

  11. Adopting small-scale production of electricity

    Energy Technology Data Exchange (ETDEWEB)

    Tengvard, Maria; Palm, Jenny (Linkoeping Univ., Dept. of Technology and Social Change, Linkoeping (Sweden)). e-mail: maria.tengvard@liu.se

    2009-07-01

    In Sweden in 2008, a 'new' concept for small-scale electricity production attracted massive media attention. This was mainly due to the efforts of Swedish company Egen El, which is marketing small-scale photovoltaics (PVs) and wind turbines to households, both homeowners and tenants. Their main selling point is simplicity: their products are so easy to install that everyone can do it. Autumn 2008 also saw IKEA announce that within three years it would market solar panels. How, then, do households perceive these products? Why would households choose to buy them? How do households think about producing their own electricity? Analysis of material based on in-depth interviews with members of 20 households reveals that environmental concerns provide the main motive for adopting PVs or micro wind power generation. In some cases, the adopting households have an extensively ecological lifestyle and such adoption represents a way to take action in the energy area. For some, this investment is symbolic: a way of displaying environmental consciousness or setting an example to others. For still others, the adoption is a protest against 'the system' with its large dominant actors or is a way to become self-sufficient. Households that reject these microgeneration installations do so mainly on economic grounds; other reasons are consideration for neighbours and the difficulty of finding a suitable place to install a wind turbine.

  12. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  13. Construction works of large scale impervious wall in construction of No.2 plant in Onagawa Nuclear Power Station, Tohoku Electric Power Co., Inc

    International Nuclear Information System (INIS)

    Ueda, Kozaburo; Sugeno, Yoshisada; Takahashi, Hitoshi

    1991-01-01

    The main buildings for No. 2 plant in Onagawa Nuclear Power Station are constructed on the bedrocks about 14 m below the sea surface. Therefore, for the purpose of shutting seawater off and executing the works dry, a large scale impervious wall about 500 m in length was installed underground. The feature of this impervious wall is its embedment about 3 m deep into the hard bedrocks, which have a uniaxial compressive strength of up to 2,000 kg/cm², carried out with a newly developed hard rock excavator. The outline of these construction works is reported. No. 2 plant in Onagawa Nuclear Power Station is a BWR plant with 825 MWe output. The construction works of the power station began in August 1989, and the rate of progress in civil engineering works as of the end of September 1990 was 21.3%. The planning of the impervious wall, the geological features at the site, the method of shutting seawater off, the selection of wall materials, the design of the wall body, the investigation of the quantity of spring water, the execution of the construction and execution management, and the confirmation of the effect of the wall are reported. (K.I.)

  14. Large-scale depositional characteristics of the Ulleung Basin and its impact on electrical resistivity and Archie-parameters for gas hydrate saturation estimates

    Science.gov (United States)

    Riedel, Michael; Collett, Timothy S.; Kim, H.-S.; Bahk, J.-J.; Kim, J.-H.; Ryu, B.-J.; Kim, G.-Y.

    2013-01-01

    Gas hydrate saturation estimates were obtained from an Archie analysis of the Logging-While-Drilling (LWD) electrical resistivity logs under consideration of the regional geological framework of sediment deposition in the Ulleung Basin, East Sea, of Korea. Porosity was determined from the LWD bulk density log and core-derived values of grain density. In situ measurements of pore-fluid salinity as well as formation temperature define a background trend for pore-fluid resistivity at each drill site. The LWD data were used to define sets of empirical Archie constants for different depth-intervals of the logged borehole at all sites drilled during the second Ulleung Basin Gas Hydrate Drilling Expedition (UBGH2). A clustering of data with distinctly different trend-lines is evident in the cross-plot of porosity and formation factor for all sites drilled during UBGH2. The reason for the clustering is related to the difference between hemipelagic sediments (mostly covering the top ∼100 mbsf) and mass-transport deposits (MTD) and/or the occurrence of biogenic opal. For sites located in the north-eastern portion of the Ulleung Basin, a set of individual Archie parameters was derived for a shallow (hemipelagic) depth interval and a deeper MTD zone. The deeper zone typically shows higher resistivities for the same range of porosities seen in the upper zone, reflecting a shift in sediment properties. The presence of large amounts of biogenic opal (up to and often over 50% as defined by XRD data) was especially observed at Sites UBGH2-2_1 and UBGH2-2_2 (as well as UBGH1-9 from a previous drilling expedition in 2007). The boundary between these two zones can also easily be identified in gamma-ray logs, which also show unusually low readings in the opal-rich interval. Only by incorporating different Archie parameters for the different zones was a reasonable estimate of gas hydrate saturation achieved that also matches results from other techniques such as pore-fluid freshening
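
    To make the resistivity-based workflow above concrete, the sketch below shows the standard Archie-type calculation that such an analysis rests on: water saturation from the ratio of the water-saturated to the measured formation resistivity, and hydrate saturation as its complement. It is only an illustration; the function name, the Archie constants and the log samples are invented and are not values from the UBGH2 study.

```python
import numpy as np

def gas_hydrate_saturation(rt, phi, rw, a=1.0, m=2.0, n=2.0):
    """Estimate gas hydrate saturation from an Archie-type relation.

    rt  : measured formation resistivity (ohm-m), e.g. from LWD logs
    phi : porosity (fraction), e.g. from the LWD density log
    rw  : pore-water resistivity (ohm-m) from salinity and temperature
    a, m, n : empirical Archie constants (zone-dependent)
    """
    r0 = a * rw / phi**m                           # water-saturated resistivity
    sw = np.clip((r0 / rt)**(1.0 / n), 0.0, 1.0)   # Archie water saturation
    return 1.0 - sw                                # hydrate fills the remaining pores

# Hypothetical log samples: two depth zones with different Archie constants
rt_upper = np.array([1.2, 2.5, 4.0])               # ohm-m, hemipelagic interval
rt_lower = np.array([3.0, 6.0, 9.0])               # ohm-m, MTD / opal-rich interval
phi = np.array([0.65, 0.60, 0.55])                 # porosity from the density log
rw = 0.25                                          # ohm-m, background pore fluid

print(gas_hydrate_saturation(rt_upper, phi, rw, a=1.0, m=2.2, n=1.9))
print(gas_hydrate_saturation(rt_lower, phi, rw, a=1.3, m=2.6, n=1.9))
```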

  15. Potential large scale generation of electricity with wind energy in Mexico; Potencial de generacion electrica en gran escala con energia eolica en Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Caldera Munoz, Enrique

    1997-12-31

    The technology of wind-electric power stations, which convert wind energy into electricity generated on a large scale and interconnected to the national electric systems, is nowadays a mature technology. By the end of 1995 the world had an installed capacity in operation of 5,000 MW, and the plan for the year 2000 was to reach an installed capacity in operation of 13,800 MW, according to a 1995 survey by Riso National Laboratory of Denmark. This document describes the current status of this technology in Mexico, as well as the historic evolution of the electric sector.

  16. Large-scale solar heating

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Advanced Energy Systems

    1998-10-01

    The solar heating market is growing in many European countries, and the annually installed collector area has exceeded one million square meters. There are dozens of collector manufacturers and hundreds of firms making solar heating installations in Europe. One tendency in solar heating is towards larger systems. These can be roof integrated, consisting of some tens or hundreds of square meters of collectors, or they can be larger centralized solar district heating plants consisting of a few thousand square meters of collectors. The increase in size can reduce the specific investments of solar heating systems, because the costs of some components (controllers, pumps, and pipes), planning and installation can be smaller in larger systems. The solar heat output can also be higher in large systems, because more advanced technology is economically viable.

  17. A modeling and control framework for operating large-scale electric power systems under present and newly evolving competitive industry structures

    Directory of Open Access Journals (Sweden)

    Marija D. Ilić

    1995-01-01

    This paper introduces a systematic, structure-based modeling framework for analysis and control of electric power systems for processes evolving over the mid-term and long-term time horizons. By applying both temporal and spatial separation, models much simpler than the detailed dynamics are obtained specifically for control design at different hierarchical levels. These simple models, or the aggregate models, represent the net effect of interactions among interconnected regions on specific hierarchical levels. They are exact, since no assumptions on weak interconnections among the subsystems are made. Moreover, they are easily understood in terms of power flows among the regions. The approach is essential for improving present performance of the system. It is also potentially useful in a competitive utility environment in which it is critical to study the interplay between technical and economic processes.

  18. Parallel time domain solvers for electrically large transient scattering problems

    KAUST Repository

    Liu, Yang

    2014-09-26

    Marching on in time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time-advancing electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary to their finite difference and finite element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in the parallel implementations of time domain integral equation solvers, specifically those that leverage the multilevel plane-wave time-domain (PWTD) algorithm on modern manycore computer architectures including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedup compared to serial implementations, while the distributed parallel implementation is highly scalable to thousands of compute nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.
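
    At its core, such a solver repeatedly solves one linear system per time step, with a right-hand side that gathers the retarded contributions of earlier current samples. The toy recursion below illustrates only that marching-on-in-time structure; the random, diagonally dominant matrices are stand-ins for the interaction matrices a real solver assembles from the surface mesh, and no PWTD acceleration or parallelism is shown.

```python
import numpy as np

rng = np.random.default_rng(5)

# Schematic MOT recursion:  Z0 @ I[j] = V[j] - sum_{k>=1} Zk[k-1] @ I[j-k]
n_unknowns, n_history, n_steps = 200, 8, 50
Z0 = np.eye(n_unknowns) * 10.0 + rng.normal(0.0, 0.1, (n_unknowns, n_unknowns))
Zk = [rng.normal(0.0, 0.02, (n_unknowns, n_unknowns)) for _ in range(n_history - 1)]
V = rng.normal(0.0, 1.0, (n_steps, n_unknowns))   # tested incident field
I = np.zeros((n_steps, n_unknowns))               # surface-current history

for j in range(n_steps):
    rhs = V[j].copy()
    for k in range(1, min(j + 1, n_history)):
        rhs -= Zk[k - 1] @ I[j - k]               # retarded contributions
    I[j] = np.linalg.solve(Z0, rhs)               # implicit time advance

print(I.shape, float(np.abs(I).max()))
```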

  19. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  20. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  1. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  2. Tsuwalhkalh Ti Tmicwa (The Land is Ours): Stat'imc Self-Determination in the Face of Large-Scale Hydro-electric Development

    Science.gov (United States)

    Moritz, Sarah Carmen

    In Canada, First Nations asserting authority over their lands are developing diverse strategies to overcome the state's dogmatic insistence on jurisdictional sovereignty. This movement corresponds to the wider context of the challenges faced by indigenous people to use their own ways of knowing to resist or reformulate legal doctrines and political tenets based on colonial power. Interior Salish Stat'imc people identify themselves through a strong and ongoing social relationship with Sataqwa7, the Fraser River, and the "Valley of Plenty"---now known as the flooded Bridge River Valley---maintained through Stat'imc knowledge and cultural practice and demonstrated by talk of "the Stat'imc right to fish" and Tsuwalhkalh Ti Tmicwa (The Land is Ours). Stat'imc fishers are prepared to contest and resist any regulatory system that is understood to impact this right to fish while they advocate their own ways of sustainable fishing and water management. Based on ethnographic research in collaboration with Stat'imc people, this thesis explores some of these often successful contestations, especially in the context of increasing territorial governance and by example of the rapidly transforming relationship between the Stat'imc, BC Hydro and the Province of BC. Interior Salish Stat'imc people are currently navigating through a significant phase of increasing jurisdiction and authority and recognition of (unsettled) territorial property relationships. This very dynamic process is marked by strategic collaborations, compensation for 'infringements' on Stat'imc Title and Rights, and conservation efforts to protect their home. An important example is the changing relationship between Stat'imc people and BC Hydro---a relationship between two groups with radically different cultures and agendas: Stat'imc people in a struggle for self-determination, social justice and cultural survival and BC Hydro, a corporate culture, with the agenda to provide hydro-electric power to BC, maintain

  3. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  4. Large-scale management of electric grids - Power flux control in an electric network; Gestion des reseaux electriques a grande echelle. Controle des flux de puissance d'un reseau electrique

    Energy Technology Data Exchange (ETDEWEB)

    Lalou, M. J. [EIA-FR, Fribourg (Switzerland); Affolter, J.-F. [HEIG-VD, Yverdon-les-Bains (Switzerland)

    2010-07-01

    Security of supply in electricity networks is a major industrial challenge for the future. It involves power generation, transmission and distribution alike. Networks are gradually approaching saturation. This article presents a method for optimal real-time management of the power flows in an electricity network equipped with FACTS and phase measuring devices. FACTS devices will have to be introduced gradually, though probably not earlier than in about 10 years' time, as the reliability of the data transmission needed for their control is currently not high enough, considering the order of magnitude of the electric power involved.

  5. SDI Large-Scale System Technology Study

    National Research Council Canada - National Science Library

    1986-01-01

    .... This coordination is addressed by the Battle Management function. The algorithms and technologies required to support Battle Management are the subject of the SDC Large Scale Systems Technology Study...

  6. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, show how such ...

  7. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  8. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact the control of power system balance and thus deviations in the power system...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where a power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...
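
    As a rough illustration of the balancing problem described above (not of the paper's model), the toy calculation below compares an hour-ahead dispatch based on load and wind forecasts with a realization that includes forecast errors; the resulting imbalance is what regulating reserves would have to cover. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hour-ahead schedule for one day (MW): conventional units cover the
# forecast load net of the forecast wind production.
load_forecast = 3000 + 500 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))
wind_forecast = rng.uniform(200, 1200, 24)
conventional_dispatch = load_forecast - wind_forecast

# Real time: forecast errors create an imbalance that reserves must cover.
load_actual = load_forecast * rng.normal(1.0, 0.02, 24)
wind_actual = np.clip(wind_forecast * rng.normal(1.0, 0.15, 24), 0.0, None)
imbalance = load_actual - (conventional_dispatch + wind_actual)

print("max up-regulation needed   (MW):", round(imbalance.max(), 1))
print("max down-regulation needed (MW):", round(-imbalance.min(), 1))
```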

  9. Large-scale nanophotonic phased array.

    Science.gov (United States)

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
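
    The beam steering described above follows from the usual phased-array superposition of per-element fields. The sketch below evaluates the far-field array factor of an idealized 64 × 64 array of isotropic emitters on a 9 μm pitch (roughly 576 μm / 64) at an assumed 1.55 μm wavelength; it is a textbook array-factor model, not the published NPA design.

```python
import numpy as np

def array_factor(phases, pitch, wavelength, theta, phi):
    """Far-field intensity of an N x N array of isotropic emitters."""
    n = phases.shape[0]
    k = 2.0 * np.pi / wavelength
    idx = np.arange(n)
    # Direction cosines of the observation direction
    ux = np.sin(theta) * np.cos(phi)
    uy = np.sin(theta) * np.sin(phi)
    # Propagation phase of each element towards that direction
    px = k * pitch * idx[:, None] * ux[..., None, None]
    py = k * pitch * idx[None, :] * uy[..., None, None]
    field = np.exp(1j * (px + py + phases))
    return np.abs(field.sum(axis=(-2, -1)))**2

# Uniform element phases steer the main beam to broadside (theta = 0)
n = 64
phases = np.zeros((n, n))
theta = np.linspace(-0.2, 0.2, 201)                     # radians
pattern = array_factor(phases, pitch=9.0, wavelength=1.55, theta=theta, phi=0.0)
print(theta[np.argmax(pattern)])                        # ~0: beam at broadside
```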

  10. Large scale processing of dielectric electroactive polymers

    DEFF Research Database (Denmark)

    Vudayagiri, Sindhu

    Efficient processing techniques are vital to the success of any manufacturing industry. The processing techniques determine the quality of the products and thus to a large extent the performance and reliability of the products that are manufactured. The dielectric electroactive polymer (DEAP......) technology is relatively new and is in the initial stages of development with no established large scale manufacturing techniques. Danfoss Polypower A/S has set up a large scale manufacture process to make thin film DEAP transducers. The DEAP transducers developed by Danfoss Polypower consist...... of microstructured elastomer surfaces on which the compliant metallic electrodes are sputtered thus enabling large strains of non-stretchable metal electrode. Thin microstructured polydimethylsiloxane (PDMS) films are quintessential in DEAP technology due to scaling of their actuation strain with the reciprocal

  11. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is an observed phenomenon that galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  12. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation with science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  13. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  14. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  15. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  16. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  17. Primordial large-scale electromagnetic fields from gravitoelectromagnetic inflation

    International Nuclear Information System (INIS)

    Membiela, Federico Agustin; Bellini, Mauricio

    2009-01-01

    We investigate the origin and evolution of primordial electric and magnetic fields in the early universe, when the expansion is governed by a cosmological constant Λ_0. Using the gravitoelectromagnetic inflationary formalism with A_0 = 0, we obtain the power spectra of large-scale magnetic fields and the inflaton field fluctuations during inflation. A very important fact is that our formalism is naturally non-conformally invariant.

  18. Large Scale Demand Response of Thermostatic Loads

    DEFF Research Database (Denmark)

    Totu, Luminita Cristiana

    This study is concerned with large populations of residential thermostatic loads (e.g. refrigerators, air conditioning or heat pumps). The purpose is to gain control over the aggregate power consumption in order to provide balancing services for the electrical grid. Without affecting the temperature limits and other operational constraints, and by using only limited communication, it is possible to make use of the individual thermostat deadband flexibility to step up or step down the power consumption of the population as if it were a power plant. The individual thermostatic loads experience no loss of service or quality, and the electrical grid gains a fast power resource of hundreds of MW or more. This study proposes and analyses a mechanism that introduces random on/off and off/on switches in the normal thermostat operation of the units. This mechanism is called Switching Actuation
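
    A minimal Monte Carlo sketch of the randomized switching idea is given below: a population of identical first-order 'refrigerators' receives a switching probability while each unit's own thermostat still enforces its temperature deadband, so the aggregate consumption can be stepped up on request. All parameter values are invented and the model is only loosely inspired by the mechanism analysed in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population of refrigerators: first-order thermal model with a deadband
N = 10_000
T_MIN, T_MAX = 2.0, 5.0                   # deg C thermostat deadband
P_RATED = 0.15                            # kW drawn by a running unit
ALPHA, T_AMB, COOL = 0.02, 20.0, 1.0      # illustrative thermal parameters
DT = 1.0 / 60.0                           # one-minute time step (hours)

temp = rng.uniform(T_MIN, T_MAX, N)
on = rng.random(N) < 0.3                  # initial compressor states

def step(temp, on, switch_prob=0.0, direction=+1):
    """Advance one step; optionally toggle a random fraction of units."""
    toggle = rng.random(N) < switch_prob
    if direction > 0:
        on = on | (toggle & ~on)          # request extra consumption
    else:
        on = on & ~(toggle & on)          # request a reduction
    temp = temp + DT * (ALPHA * (T_AMB - temp) - COOL * on)
    # The local thermostat always keeps every unit inside its deadband
    on = np.where(temp <= T_MIN, False, np.where(temp >= T_MAX, True, on))
    return temp, on

for minute in range(120):
    # Between minutes 30 and 60, nudge 2% of the off units per step to turn on
    prob = 0.02 if 30 <= minute < 60 else 0.0
    temp, on = step(temp, on, switch_prob=prob, direction=+1)
    if minute % 30 == 0:
        print(minute, round(P_RATED * on.sum(), 1), "kW aggregate demand")
```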

  19. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other side, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the neutral gas is done via electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from

  20. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165. Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöehe Solar Observatory of the University of Graz, A-9521 Treffen,. Austria. e-mail: schroll@solobskh.ac.at.

  1. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects such as the construction of roads, railways and other civil engineering (water)works are tendered differently today than a decade ago. Traditional workflow requested quotes from construction companies for construction works where the works to be

  2. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  3. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  4. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S', with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S' in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter

  5. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationship between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives are overlapped within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)

  6. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  7. Fractals and cosmological large-scale structure

    Science.gov (United States)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.
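
    In practice, a 'limited' fractal dimension of this kind is read off from the slope of the correlation integral, C(r) ∝ r^D. The sketch below estimates D for a synthetic point set by brute-force pair counting; a uniform random sample (for which D should come out close to 3) stands in for a galaxy catalogue, purely to illustrate the definition.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Estimate the correlation dimension D from the scaling C(r) ~ r**D."""
    diff = points[:, None, :] - points[None, :, :]
    r = np.sqrt((diff**2).sum(axis=-1))               # all pairwise separations
    n = len(points)
    counts = np.array([(r < s).sum() - n for s in radii], dtype=float)
    c = counts / (n * (n - 1))                        # correlation integral C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

rng = np.random.default_rng(1)
pts = rng.random((1500, 3))                           # uniform points in a unit cube
print(correlation_dimension(pts, radii=np.logspace(-1.5, -0.8, 8)))   # ~3
```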

  8. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    ... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its main focus. Here the general perception of the nature and role in society of large scale networks as a fundamental infrastructure is analysed. This analysis focuses on the effects of the technical DDN projects and on the perception of network infrastructure as expressed by key decision makers

  9. Analysis using large-scale ringing data

    OpenAIRE

    Baillie, S. R.; Doherty, P. F.

    2004-01-01

    Birds are highly mobile organisms and there is increasing evidence that studies at large spatial scales are needed if we are to properly understand their population dynamics. While classical metapopulation models have rarely proved useful for birds, more general metapopulation ideas involving collections of populations interacting within spatially structured landscapes are highly relevant (Harrison, 1994). There is increasing interest in understanding patterns of synchrony, or lack of synchro...

  10. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  11. Large-scale computer-aided design

    OpenAIRE

    Adeli, Hojjat

    1997-01-01

    The author and his associates have been working on creating novel design theories and computational models with two broad objectives: automation and optimization. This paper is a summary of the author's Keynote Lecture based on the research done by the author and his associates recently. Novel neurocomputing algorithms are presented for large-scale computer-aided design and optimization. This research demonstrates how a new level is achieved in design automation through the ingenious use and...

  12. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylemethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10μ) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  13. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

    The compatibility of cosmological principles and possible large scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle which is still compatible with reasonable inhomogeneities is a full conformal symmetry in the 3-space defined by the cosmological velocity field; but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  14. The consistency problems of large scale structure

    International Nuclear Information System (INIS)

    Schramm, D.N.

    1986-01-01

    Studies of the early universe are reviewed, with emphasis on galaxy formation, dark matter and the generation of large scale structure. The paper was presented at the conference on 'The early universe and its evolution', Erice, Italy, 1986. Dark matter, Big Bang nucleosynthesis, baryonic halos, flatness arguments, the cosmological constant, galaxy formation, neutrinos plus strings or explosions, and string models are all discussed. (U.K.)

  15. Large-Scale Visual Data Analysis

    Science.gov (United States)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  16. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  17. Large scale structure statistics: Finite volume effects

    Science.gov (United States)

    Colombi, S.; Bouchet, F. R.; Schaeffer, R.

    1994-01-01

    We study finite volume effects on the count probability distribution function P_N(l) and the averaged Q-body correlations ξ̄_Q (2 ≤ Q ≤ 5). These statistics are computed for cubic cells of size l. We use as an example the case of the matter distribution of a cold dark matter (CDM) universe involving approximately 3 × 10^5 particles. The main effect of the finiteness of the sampled volume is to induce an abrupt cut-off on the function P_N(l) at large N. This clear signature makes an analysis of the consequences easy, and one can envisage a correction procedure. As a matter of fact, we demonstrate how an unfair sample can strongly affect the estimates of the functions ξ̄_Q for Q ≥ 3 (and decrease the measured zero of the two-body correlation function). We propose a method to correct for this artifact, or at least to evaluate the corresponding errors. We show that the correlations are systematically underestimated by direct measurements. We find that, once corrected, the statistical properties of the CDM universe appear compatible with the scaling relation S_Q ≡ ξ̄_Q/ξ̄_2^(Q-1) = constant with respect to scale, in the non-linear regime; it was not the case with direct measurements. However, we note a deviation from scaling at scales close to the correlation length. It is probably due to the transition between the highly non-linear regime and the weakly correlated regime, where the functions S_Q also seem to present a plateau. We apply the same procedure to simulations with hot dark matter (HDM) and white noise initial conditions, with similar results. Our method thus provides the first accurate measurement of the normalized skewness, S_3, and the normalized kurtosis, S_4, for three typical models of large scale structure formation in an expanding universe.
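
    For orientation, the sketch below shows the bare counts-in-cells machinery behind these statistics: bin a point catalogue into cubic cells, apply the usual shot-noise corrections to the moments of the counts, and form S_3 = ξ̄_3/ξ̄_2². The clustered toy catalogue is invented; this is a schematic of the estimators themselves, not of the finite-volume correction proposed in the paper.

```python
import numpy as np

def counts_in_cells(points, box, cell):
    """Histogram a periodic point set into cubic cells of side `cell`."""
    n_side = int(box // cell)
    idx = np.floor(points / cell).astype(int) % n_side
    flat = np.ravel_multi_index(tuple(idx.T), (n_side,) * 3)
    return np.bincount(flat, minlength=n_side**3)

def xi2_and_s3(counts):
    """Shot-noise corrected xi_bar_2 and S3 = xi_bar_3 / xi_bar_2**2."""
    nbar = counts.mean()
    delta = counts / nbar - 1.0
    xi2 = delta.var() - 1.0 / nbar
    xi3 = (delta**3).mean() - 3.0 * xi2 / nbar - 1.0 / nbar**2
    return xi2, xi3 / xi2**2

rng = np.random.default_rng(2)
# Toy clustered catalogue: points scattered around random cluster centres
centres = rng.random((500, 3)) * 100.0
pts = (centres[rng.integers(0, 500, 100_000)]
       + rng.normal(scale=2.0, size=(100_000, 3))) % 100.0
xi2, s3 = xi2_and_s3(counts_in_cells(pts, box=100.0, cell=10.0))
print(f"xi_bar_2 = {xi2:.2f}, S3 = {s3:.2f}")
```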

  18. Abnormally large magnetospheric electric field on 9 November 2004 ...

    Indian Academy of Sciences (India)

    There was a solar event around 1850 UT on 9th November 2004, associated with an abnormally large solar wind flow pressure and large southward interplanetary magnetic field, causing an abnormally large prompt penetration electric field between 1850 and 2100 UT. Abnormally large vertical F-region drifts by Jicamarca ...

  19. The Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; et al.

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  20. The Cosmology Large Angular Scale Surveyor (CLASS)

    Science.gov (United States)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; et al.

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  1. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5KW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  2. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). This observational evidence has led to a broad consensus on the cosmological model, the so-called LambdaCDM model, and to tight constraints on the cosmological parameters of the model. On the other hand, the advancement of cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, which is quantified by its direction and amplitude. We measured a huge dipolar modulation in the CMB, which mainly originated from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a large number of well-identified objects. In this thesis, we explore the measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated requirements for the detection of the kinematic dipole in future surveys.
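
    A schematic version of the estimator discussed in the thesis: given number counts in sky pixels, fit a monopole plus dipole by least squares and read off the modulation amplitude and direction. The toy sky below uses random pixel directions and an injected dipole of amplitude ≈ 0.022; it is an illustration of the idea, not the analysis applied to WISE/2MASS.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy sky: random pixel directions with counts modulated by a known dipole
npix = 5000
n_hat = rng.normal(size=(npix, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)     # unit vectors
true_dipole = np.array([0.02, 0.0, 0.01])
counts = rng.poisson(200.0 * (1.0 + n_hat @ true_dipole))

# Least-squares fit of counts ~ a0 * (1 + d . n_hat), linearized as a0 + a . n_hat
design = np.hstack([np.ones((npix, 1)), n_hat])
coeff, *_ = np.linalg.lstsq(design, counts, rcond=None)
fitted_dipole = coeff[1:] / coeff[0]
print(fitted_dipole, np.linalg.norm(fitted_dipole))        # close to the input
```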

  3. Large-scale impacts of hydroelectric development

    International Nuclear Information System (INIS)

    Rosenberg, D.M.; Bodaly, R.A.; Hecky, R.E.; Rudd, J.W.M.; Berkes, F.; Kelly, C.A.

    1997-01-01

    A study was conducted in which the cumulative environmental effects of mega-hydroelectric development projects such as the James Bay development in Canada, the Sardar Sarovar development in India and the Three Gorges development in China were examined. The extent of flooding as a result of these projects and of many others around the world was presented. The study showed that several factors are responsible for methyl mercury (MeHg) bioaccumulation in reservoirs. The study also revealed that reservoirs can be a significant source of greenhouse gas emissions. Boreal forests in particular, when flooded, become a strong source of greenhouse gases to the atmosphere. This results from the fact that after flooding a boreal forest changes from being a small carbon sink to a large source of carbon to the atmosphere, due to stimulated microbial production of CO2 and CH4 by decomposition of plant tissues and peat. This increased decomposition also results in an increase of another microbial activity, namely the methylation of inorganic mercury to the much more toxic MeHg. Selected examples of the downstream effects of altered flows caused by large-scale hydroelectric developments world-wide were summarized. A similar tabulation provided examples of social impacts of relocation of people necessitated by large-scale hydroelectric development. 209 refs., 10 tabs., 3 figs

  4. Large Scale Landform Mapping Using Lidar DEM

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-08-01

    Full Text Available In this study, LIDAR DEM data was used to obtain a primary landform map in accordance with a well-known methodology. This primary landform map was generalized using the Focal Statistics tool (Majority, considering the minimum area condition in cartographic generalization in order to obtain landform maps at 1:1000 and 1:5000 scales. Both the primary and the generalized landform maps were verified visually with hillshaded DEM and an orthophoto. As a result, these maps provide satisfactory visuals of the landforms. In order to show the effect of generalization, the area of each landform in both the primary and the generalized maps was computed. Consequently, landform maps at large scales could be obtained with the proposed methodology, including generalization using LIDAR DEM.
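
    The generalization step described above can be illustrated with a minimal sketch: a majority (mode) filter over a moving window, analogous to the Focal Statistics (Majority) tool. The raster, class codes and window sizes below are placeholders, not the study's actual data or parameters.

    ```python
    # Majority-filter generalization of an integer-coded landform raster.
    # Assumptions: classes are small non-negative integers; window sizes are illustrative.
    import numpy as np
    from scipy import ndimage

    def majority(values):
        # Most frequent class code within the moving window
        return np.bincount(values.astype(int)).argmax()

    def generalize(landform, window=5):
        """Majority (mode) filter over a square window, as in Focal Statistics (Majority)."""
        return ndimage.generic_filter(landform, majority, size=window, mode='nearest')

    # Example: a random 4-class raster stands in for the LIDAR-derived primary landform map
    primary = np.random.randint(0, 4, size=(200, 200))
    generalized_1k = generalize(primary, window=3)   # lighter smoothing, e.g. a 1:1000-type output
    generalized_5k = generalize(primary, window=7)   # heavier smoothing, e.g. a 1:5000-type output
    ```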

  5. Modelling large-scale hydrogen infrastructure development

    International Nuclear Information System (INIS)

    De Groot, A.; Smit, R.; Weeda, M.

    2005-08-01

    In modelling a possible H2 infrastructure development the following questions are answered in this presentation: How could the future demand for H2 develop in the Netherlands?; and In which year and where would it be economically viable to construct a H2 infrastructure in the Netherlands? Conclusions are that: A model for describing a possible future H2 infrastructure is successfully developed; The model is strongly regional and time dependent; Decrease of fuel cell cost appears to be a sensitive parameter for development of H2 demand; Cost-margin between large-scale and small-scale H2 production is a main driver for development of a H2 infrastructure; A H2 infrastructure seems economically viable in the Netherlands starting from the year 2022
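
    The year-of-viability logic sketched in these conclusions can be illustrated as follows. This is an illustrative toy only, not the model presented; all cost and demand figures are made up.

    ```python
    # Illustrative sketch of the "first viable year" decision: centralized production plus a
    # pipeline becomes worthwhile once the cost margin over small-scale on-site production,
    # multiplied by regional demand, exceeds the pipeline's annual cost. Numbers are hypothetical.
    def first_viable_year(demand_by_year, large_scale_cost=1.5, small_scale_cost=3.0,
                          pipeline_annual_cost=2.0e6):
        """Costs in EUR/kg; pipeline cost in EUR/year; demand in kg H2/year."""
        for year, demand in sorted(demand_by_year.items()):
            margin = (small_scale_cost - large_scale_cost) * demand
            if margin > pipeline_annual_cost:
                return year
        return None

    # Hypothetical regional H2 demand growth (kg/year)
    demand = {2015: 2e5, 2018: 6e5, 2020: 1.0e6, 2022: 1.5e6, 2025: 2.5e6}
    print(first_viable_year(demand))  # -> 2022 with these made-up numbers
    ```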

  6. Large scale PV plants - also in Denmark. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Ahm, P. (PA Energy, Malling (Denmark)); Vedde, J. (SiCon. Silicon and PV consulting, Birkeroed (Denmark))

    2011-04-15

    Large scale PV (LPV) plants, plants with a capacity of more than 200 kW, have since 2007 constituted an increasing share of the global PV installations. In 2009 large scale PV plants with a cumulative power of more than 1.3 GWp were connected to the grid. The necessary design data for LPV plants in Denmark are available or can be found, although irradiance data could be improved. There seem to be very few institutional barriers for LPV projects, but as no real LPV projects have been processed so far, these findings have to be regarded as preliminary. The fast growing number of very large scale solar thermal plants for district heating applications supports these findings. It has further been investigated how to optimize the lay-out of LPV plants. Under the Danish irradiance conditions, with several winter months with very low solar height, PV installations on flat surfaces will have to balance the requirements of physical space - and cost - against the loss of electricity production due to shadowing effects. The potential for LPV plants in Denmark is found in three main categories: PV installations on flat roofs of large commercial buildings, PV installations on other large scale infrastructure such as noise barriers, and ground mounted PV installations. The technical potential for all three categories is found to be significant and in the range of 50-250 km2. In terms of energy harvest, PV plants will under Danish conditions exhibit an overall efficiency of about 10% in conversion of the energy content of the light, compared to about 0.3% for biomass. The theoretical ground area needed to produce the present annual electricity consumption of Denmark of 33-35 TWh is about 300 km2. The Danish grid codes and the electricity safety regulations mention very little about PV and nothing about LPV plants. It is expected that LPV plants will be treated similarly to big wind turbines. A number of LPV plant scenarios have been investigated in detail based on real commercial offers and
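
    As a rough plausibility check of the ~300 km2 figure (not part of the report), the arithmetic can be written out as follows; the irradiation value is an assumed round number for Denmark, and the 10% overall efficiency is taken from the abstract.

    ```python
    # Back-of-the-envelope check of the ground-area estimate quoted above.
    annual_consumption_kwh = 34e9          # midpoint of the 33-35 TWh annual consumption
    irradiation_kwh_per_m2 = 1000.0        # assumed Danish horizontal irradiation per year
    overall_efficiency = 0.10              # overall light-to-electricity conversion, from the abstract

    area_m2 = annual_consumption_kwh / (irradiation_kwh_per_m2 * overall_efficiency)
    print(area_m2 / 1e6, "km2")            # ~340 km2, consistent with the ~300 km2 quoted
    ```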

  7. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value that is rarely if ever one (square), with the majority of images having a proportion larger than one, but less than e.g. the golden ratio. Furthermore, more images have the inverse proportion, meaning that portrait paintings are more common than landscape paintings. The inverse is true for photographs, i.e. more landscape than portrait format.

  8. Large scale phononic metamaterials for seismic isolation

    Energy Technology Data Exchange (ETDEWEB)

    Aravantinos-Zafiris, N. [Department of Sound and Musical Instruments Technology, Ionian Islands Technological Educational Institute, Stylianou Typaldou ave., Lixouri 28200 (Greece); Sigalas, M. M. [Department of Materials Science, University of Patras, Patras 26504 (Greece)

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.

  9. Large scale phononic metamaterials for seismic isolation

    International Nuclear Information System (INIS)

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-01-01

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials

  10. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis is in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)

  11. Large-scale ATLAS production on EGEE

    CERN Document Server

    Espinal, X; Walker, R

    2008-01-01

    In preparation for first data at the LHC, a series of Data Challenges, of increasing scale and complexity, has been performed. Large quantities of simulated data have been produced on three different Grids, integrated into the ATLAS production system. During 2006, the emphasis moved towards providing stable continuous production, as is required in the immediate run-up to first data, and thereafter. Here, we discuss the experience of the production done on EGEE resources, using submission based on the gLite WMS, CondorG and a system using Condor Glide-ins. The overall wall time efficiency of around 90% is largely independent of the submission method, and the dominant source of wasted CPU time comes from data handling issues. The efficiency of grid job submission is significantly worse than this, and the glide-in method benefits greatly from factorising this out.

  12. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    Full Text Available Birds are highly mobile organisms and there is increasing evidence that studies at large spatial scales are needed if we are to properly understand their population dynamics. While classical metapopulation models have rarely proved useful for birds, more general metapopulation ideas involving collections of populations interacting within spatially structured landscapes are highly relevant (Harrison, 1994). There is increasing interest in understanding patterns of synchrony, or lack of synchrony, between populations and the environmental and dispersal mechanisms that bring about these patterns (Paradis et al., 2000). To investigate these processes we need to measure abundance, demographic rates and dispersal at large spatial scales, in addition to gathering data on relevant environmental variables. There is an increasing realisation that conservation needs to address rapid declines of common and widespread species (they will not remain so if such trends continue) as well as the management of small populations that are at risk of extinction. While the knowledge needed to support the management of small populations can often be obtained from intensive studies in a few restricted areas, conservation of widespread species often requires information on population trends and processes measured at regional, national and continental scales (Baillie, 2001). While management prescriptions for widespread populations may initially be developed from a small number of local studies or experiments, there is an increasing need to understand how such results will scale up when applied across wider areas. There is also a vital role for monitoring at large spatial scales both in identifying such population declines and in assessing population recovery. Gathering data on avian abundance and demography at large spatial scales usually relies on the efforts of large numbers of skilled volunteers. Volunteer studies based on ringing (for example Constant Effort Sites [CES

  13. Arduino-Based Small Scale Electric Brewing System

    OpenAIRE

    Farineau, Matthew

    2015-01-01

    The goal of this project is to create a small-scale, low cost, electric home brewing system that allows a user to more easily brew large (5 gallon) batches of beer in an enclosed space. This is accomplished by using an Arduino microcontroller in conjunction with a Yun WiFi shield to host a local website which allows a user to enter a temperature into the system via their phone, tablet, or computer. This data is then passed from a website running on the Yun shield to the Arduino sketch which r...
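
    As a sketch of the kind of control logic such a system runs (not the project's actual Arduino sketch), a simple on/off controller with a deadband around the user's setpoint could look like the following; all thresholds and readings are illustrative.

    ```python
    # Illustrative hysteresis (bang-bang) temperature control loop for an electric brew kettle.
    def heater_command(current_temp_f, setpoint_f, heater_on, deadband_f=1.0):
        """Turn the heating element on below the deadband and off above it."""
        if current_temp_f < setpoint_f - deadband_f:
            return True
        if current_temp_f > setpoint_f + deadband_f:
            return False
        return heater_on  # inside the deadband: keep the previous state

    # Example: the user submits a 152 F mash setpoint from the web interface
    setpoint = 152.0
    state = False
    for reading in [148.0, 150.5, 151.5, 153.5, 152.5]:
        state = heater_command(reading, setpoint, state)
        print(reading, "->", "ON" if state else "OFF")
    ```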

  14. The Cosmology Large Angular Scale Surveyor (CLASS)

    Science.gov (United States)

    Cleary, Joseph

    2018-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an array of four telescopes designed to measure the polarization of the Cosmic Microwave Background. CLASS aims to detect the B-mode polarization from primordial gravitational waves predicted by cosmic inflation theory, as well as the imprint left by reionization upon the CMB E-mode polarization. This will be achieved through a combination of observing strategy and state-of-the-art instrumentation. CLASS is observing 70% of the sky to characterize the CMB at large angular scales, which will allow it to measure the CMB power spectrum from the reionization peak to the recombination peak. The four telescopes operate at frequencies of 38, 93, 145, and 217 GHz, in order to estimate Galactic synchrotron and dust foregrounds while avoiding atmospheric absorption. CLASS employs rapid polarization modulation to overcome atmospheric and instrumental noise. Polarization sensitive cryogenic detectors with low noise levels provide CLASS with the sensitivity required to constrain the tensor-to-scalar ratio down to levels of r ~ 0.01 while also measuring the optical depth to reionization to sample-variance limits. These improved constraints on the optical depth to reionization are required to pin down the mass of neutrinos from complementary cosmological data. CLASS has completed a year of observations at 38 GHz and is in the process of deploying the rest of the telescope array. This poster provides an overview and update on the CLASS science, hardware and survey operations.

  15. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience of how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to supporting universities in international recruitment. These include e.g. institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to be able to recruit, support and keep the brightest heads for your project.

  16. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
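
    The design idea of vehicles as autonomous agents that perform independent route selection and react to published link times can be sketched as follows; the road network, link times and TMC advisory below are invented for illustration and are not part of the prototype.

    ```python
    # Minimal agent-based sketch: each vehicle re-plans its route from the latest link travel times.
    import networkx as nx

    G = nx.DiGraph()
    G.add_weighted_edges_from(
        [("A", "B", 5), ("B", "D", 5), ("A", "C", 4), ("C", "D", 8), ("B", "C", 1)],
        weight="time")

    class Vehicle:
        def __init__(self, vid, origin, destination):
            self.vid, self.pos, self.dest = vid, origin, destination

        def replan(self, network):
            # Independent route selection using the currently advertised link times
            return nx.shortest_path(network, self.pos, self.dest, weight="time")

    vehicles = [Vehicle(i, "A", "D") for i in range(3)]
    print([v.replan(G) for v in vehicles])   # all choose A-B-D (total time 10)

    # TMC advisory: congestion doubles the B-D link time, and the agents react
    G["B"]["D"]["time"] = 20
    print([v.replan(G) for v in vehicles])   # now A-C-D (total time 12) is preferred
    ```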

  17. Electric field scales at quasi-perpendicular shocks

    Directory of Open Access Journals (Sweden)

    S. N. Walker

    2004-07-01

    Full Text Available This paper investigates the short scale structures that are observed in the electric field during crossings of the quasi-perpendicular bow shock using data from the Cluster satellites. These structures exhibit large amplitudes, as high as 70 mV m-1, and so make a significant contribution to the overall change in potential at the shock front. It is shown that the scale size of these short-lived electric field structures is of the order of a few c/ωpe. The relationships between the scale size and the upstream Mach number and θBn are studied. It is found that the scale size of these structures decreases with increasing plasma β and as θBn→90°. The amplitude of the spikes remains fairly constant with increasing MA and appears to increase as θBn→90°.
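
    For orientation, the scale c/ωpe (the electron inertial length) can be evaluated numerically; the upstream density used below is a typical solar-wind value assumed for illustration, not a figure from the paper.

    ```python
    # Electron inertial length c / omega_pe for an assumed upstream density of 10 cm^-3.
    import math

    c = 2.998e8            # speed of light, m/s
    e = 1.602e-19          # elementary charge, C
    m_e = 9.109e-31        # electron mass, kg
    eps0 = 8.854e-12       # vacuum permittivity, F/m

    n_e = 10e6             # electrons per m^3 (10 cm^-3, assumed)
    omega_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))   # electron plasma frequency, rad/s
    print(c / omega_pe)    # ~1.7 km, so "a few c/omega_pe" is of order a few kilometres
    ```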

  18. Introducing Large-Scale Innovation in Schools

    Science.gov (United States)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  19. Tidal power plant may develop into large-scale industry

    International Nuclear Information System (INIS)

    2001-01-01

    Hammerfest was the first city in Norway with hydroelectric power production and the first city in Northern Europe to have electric street lights. Recently, technologists within the city's electricity supply industry have suggested that Hammerfest should pioneer the field of tidal energy. The idea is to create a new Norwegian large-scale industry. The technology is being developed by the company Hammerfest Stroem. A complete plant is planned to be installed in Kvalsundet. It will include turbine, generator, converters, transmission to land and delivery to the network. Once fully developed, in 2004, the plant will be sold. The company expects to install similar plants elsewhere in Norway and abroad. It is calculated that for a tidewater current of 2.5 m/s, the worldwide potential is about 450 TWh
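
    A back-of-the-envelope estimate (not from the article) of the kinetic power density of such a current uses P/A = 0.5 * rho * v^3 with an assumed seawater density:

    ```python
    # Kinetic power density of the 2.5 m/s tidal current mentioned above.
    rho = 1025.0      # kg/m^3, assumed seawater density
    v = 2.5           # m/s, current speed from the article
    power_density = 0.5 * rho * v**3        # W per m^2 of swept rotor area
    print(power_density / 1e3, "kW/m2")     # ~8 kW/m2, before turbine efficiency losses
    ```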

  20. Optimal Selection of AC Cables for Large Scale Offshore Wind Farms

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Chen, Zhe

    2014-01-01

    The investment of large scale offshore wind farms is high in which the electrical system has a significant contribution to the total cost. As one of the key components, the cost of the connection cables affects the initial investment a lot. The development of cable manufacturing provides a vast...... and systematical way for the optimal selection of cables in large scale offshore wind farms....
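
    The basic ampacity screen behind such a selection can be sketched as follows; the string layout, cable table and costs are entirely hypothetical and are not taken from the paper's catalogue or optimization method.

    ```python
    # Illustrative only: pick the cheapest cable whose rated current covers a string's loading.
    def string_current(n_turbines, turbine_mw, voltage_kv, power_factor=0.95):
        """Approximate current (A) carried by a collection-grid string."""
        s_mva = n_turbines * turbine_mw / power_factor
        return s_mva * 1e6 / (3**0.5 * voltage_kv * 1e3)

    # Hypothetical cable table: (cross-section mm2, rated current A, cost per km in arbitrary units)
    cable_table = [(95, 300, 60), (150, 375, 80), (240, 480, 110), (400, 590, 150)]

    def cheapest_adequate_cable(current_a, table):
        candidates = [c for c in table if c[1] >= current_a]
        return min(candidates, key=lambda c: c[2]) if candidates else None

    i_string = string_current(n_turbines=5, turbine_mw=3.6, voltage_kv=33)
    print(round(i_string), "A ->", cheapest_adequate_cable(i_string, cable_table))
    ```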

  1. Radiations: large scale monitoring in Japan

    International Nuclear Information System (INIS)

    Linton, M.; Khalatbari, A.

    2011-01-01

    As the consequences of radioactive leaks on their health are a matter of concern for Japanese people, a large scale epidemiological study has been launched by the Fukushima medical university. It concerns the two million inhabitants of the Fukushima Prefecture. On the national level and with the support of public funds, medical care and follow-up, as well as systematic controls, are foreseen, notably to check the thyroids of 360,000 young people under 18 years of age and of 20,000 pregnant women in the Fukushima Prefecture. Some measurements have already been performed on young children. Despite the sometimes rather low measured values, and because they know that some parts of the area are at least as contaminated as was the case around Chernobyl, some people are reluctant to go back home

  2. [Stress management in large-scale establishments].

    Science.gov (United States)

    Fukasawa, Kenji

    2002-07-01

    Due to a recent dramatic change in industrial structures in Japan, the role of large-scale enterprises is changing. Mass production used to be the major income source of companies, but nowadays the emphasis has shifted to high value-added products, including software development. As a consequence of highly competitive inter-corporate development, there are various sources of job stress which induce health problems in employees, especially those concerned with development or management. Simply obeying the law or offering medical care is not enough to manage these problems. Occupational health staff need to act according to the disease type and provide care with support from the supervisor and the Personnel Division. For the training, development and consultation system, occupational health staff must work with the Personnel Division and the Safety Division, and be approved by management supervisors.

  3. Large-scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive, the fixture and the wheels. The control system design covers hardware and software: the hardware is mainly a single-chip microcontroller system, and the software handles the photoelectric autocollimator readout and the automatic data acquisition process. The device can acquire verticality measurement data automatically. The reliability of the device is verified by experimental comparison. The results meet the requirements of the right-angle verification procedure.

  4. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  5. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  6. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
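
    For readers unfamiliar with the problem class, the sketch below solves a small instance of the general nonlinear program (nonlinear objective with equality and inequality constraints) using SciPy's SLSQP routine; this merely illustrates what an SQP method does and is not the large-scale reduced-Hessian algorithm described above.

    ```python
    # A small constrained NLP solved with a sequential quadratic programming method (SLSQP).
    import numpy as np
    from scipy.optimize import minimize

    objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

    constraints = [
        {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},   # x0 + x1 = 3
        {"type": "ineq", "fun": lambda x: x[0] - 0.5},          # x0 >= 0.5
    ]

    result = minimize(objective, x0=np.array([0.0, 0.0]), method="SLSQP",
                      constraints=constraints)
    print(result.x, result.fun)   # roughly [0.75, 2.25], objective ~0.125
    ```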

  7. HTS cables open the window for large-scale renewables

    International Nuclear Information System (INIS)

    Geschiere, A; Willen, D; Piga, E; Barendregt, P

    2008-01-01

    In a realistic approach to future energy consumption, the effects of sustainable power sources and the effects of growing welfare with increased use of electricity need to be considered. These factors lead to an increased transfer of electric energy over the networks. A dominant part of the energy need will come from expanded large-scale renewable sources. To use them efficiently over Europe, large energy transits between different countries are required. Bottlenecks in the existing infrastructure will be avoided by strengthening the network. For environmental reasons more infrastructure will be built underground. Nuon is studying the HTS technology as a component to solve these challenges. This technology offers a tremendously large power transport capacity as well as the possibility to reduce short circuit currents, making integration of renewables easier. Furthermore, power transport will be possible at lower voltage levels, giving the opportunity to upgrade the existing network while re-using it. This will result in large cost savings while reaching the future energy challenges. In a 6 km backbone structure in Amsterdam Nuon wants to install a 50 kV HTS Triax cable for a significant increase of the transport capacity, while developing its capabilities. Nevertheless several barriers have to be overcome

  8. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v^2/2 - M cos x - P cos k(x-t) which describes the motion of one particle in two electrostatic waves. A renormalization transformation Tsub(r) is described which acts as a microscope that focusses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, Tsub(r) yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of Cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation for describing the stability of a discrete family of cycles is derived. When combined with Tsub(r), it allows to prove the link between KAM tori and nearby cycles, conjectured by J. Greene and, in particular, to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)
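
    Chirikov's standard map, mentioned above, is easy to iterate directly; the sketch below shows the map and two illustrative parameter values (the threshold for large-scale stochasticity lies near K ≈ 1).

    ```python
    # Chirikov standard map: p' = p + K sin(theta), theta' = theta + p' (both mod 2*pi).
    import math

    def standard_map(theta, p, K, n_steps):
        orbit = []
        for _ in range(n_steps):
            p = (p + K * math.sin(theta)) % (2 * math.pi)
            theta = (theta + p) % (2 * math.pi)
            orbit.append((theta, p))
        return orbit

    # Below threshold (K = 0.5) orbits stay close to invariant curves; above it (K = 1.5)
    # a single orbit wanders over a large fraction of phase space.
    for K in (0.5, 1.5):
        print(K, standard_map(theta=1.0, p=0.5, K=K, n_steps=5))
    ```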

  9. CLASS: The Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; hide

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low ℓ. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, tau. (c) (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.

  10. Large scale molecular dynamics simulations of nuclear pasta

    Science.gov (United States)

    Horowitz, C. J.; Berry, D.; Briggs, C.; Chapman, M.; Clark, E.; Schneider, A.

    2014-09-01

    We report large-scale molecular dynamics simulations of nuclear pasta using from 50,000 to more than 3,000,000 nucleons. We use a simple phenomenological two-nucleon potential that reproduces nuclear saturation. We find a complex "nuclear waffle" phase in addition to more conventional rod, plate, and sphere phases. We also find long-lived topological defects involving screw-like dislocations that may reduce the electrical conductivity and thermal conductivity of lasagna phases. From MD trajectories we calculate a variety of quantities including static structure factor, dynamical response function, shear modulus and breaking strain. Supported in part by DOE Grants No. DE-FG02-87ER40365 (Indiana University) and No. DE-SC0008808 (NUCLEI SciDAC Collaboration).
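
    As an illustration of the molecular-dynamics machinery only (not the authors' two-nucleon potential, units or particle counts), a velocity-Verlet loop with a generic screened pair potential might look like this:

    ```python
    # Toy MD sketch: velocity-Verlet integration with a Yukawa-like pair potential
    # V(r) = a*exp(-r/lam)/r. All parameters are illustrative placeholders.
    import numpy as np

    def forces(pos, a=1.0, lam=2.0):
        n = len(pos)
        f = np.zeros_like(pos)
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[i] - pos[j]
                r = np.linalg.norm(d)
                mag = a * np.exp(-r / lam) * (1.0 / r**2 + 1.0 / (lam * r))  # -dV/dr
                f[i] += mag * d / r
                f[j] -= mag * d / r
        return f

    def velocity_verlet(pos, vel, dt=0.01, steps=100, mass=1.0):
        f = forces(pos)
        for _ in range(steps):
            pos = pos + vel * dt + 0.5 * f / mass * dt**2
            f_new = forces(pos)
            vel = vel + 0.5 * (f + f_new) / mass * dt
            f = f_new
        return pos, vel

    rng = np.random.default_rng(0)
    pos, vel = velocity_verlet(rng.uniform(0, 10, (20, 3)), np.zeros((20, 3)))
    ```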

  11. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  12. Future hydrogen markets for large-scale hydrogen production systems

    International Nuclear Information System (INIS)

    Forsberg, Charles W.

    2007-01-01

    The cost of delivered hydrogen includes production, storage, and distribution. For equal production costs, large users (>10^6 m^3/day) will favor high-volume centralized hydrogen production technologies to avoid collection costs for hydrogen from widely distributed sources. Potential hydrogen markets were examined to identify and characterize those markets that will favor large-scale hydrogen production technologies. The two high-volume centralized hydrogen production technologies are nuclear energy and fossil energy with carbon dioxide sequestration. The potential markets for these technologies are: (1) production of liquid fuels (gasoline, diesel and jet) including liquid fuels with no net greenhouse gas emissions and (2) peak electricity production. The development of high-volume centralized hydrogen production technologies requires an understanding of the markets to (1) define hydrogen production requirements (purity, pressure, volumes, need for co-product oxygen, etc.); (2) define and develop technologies to use the hydrogen, and (3) create the industrial partnerships to commercialize such technologies. (author)

  13. Large-scale fuel cycle centres

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The US Nuclear Regulatory Commission (NRC) has considered the nuclear energy centre concept for fuel cycle plants in the Nuclear Energy Centre Site Survey 1975 (NECSS-75) Rep. No. NUREG-0001, an important study mandated by the US Congress in the Energy Reorganization Act of 1974 which created the NRC. For this study, the NRC defined fuel cycle centres as consisting of fuel reprocessing and mixed-oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle centre sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000-300,000 MW(e). The types of fuel cycle facilities located at the fuel cycle centre permit the assessment of the role of fuel cycle centres in enhancing the safeguard of strategic special nuclear materials - plutonium and mixed oxides. Siting fuel cycle centres presents a smaller problem than siting reactors. A single reprocessing plant of the scale projected for use in the USA (1500-2000 t/a) can reprocess fuel from reactors producing 50,000-65,000 MW(e). Only two or three fuel cycle centres of the upper limit size considered in the NECSS-75 would be required in the USA by the year 2000. The NECSS-75 fuel cycle centre evaluation showed that large-scale fuel cycle centres present no real technical siting difficulties from a radiological effluent and safety standpoint. Some construction economies may be achievable with fuel cycle centres, which offer opportunities to improve waste-management systems. Combined centres consisting of reactors and fuel reprocessing and mixed-oxide fuel fabrication plants were also studied in the NECSS. Such centres can eliminate shipment not only of Pu but also mixed-oxide fuel. Increased fuel cycle costs result from implementation of combined centres unless the fuel reprocessing plants are commercial-sized. Development of Pu-burning reactors could reduce any economic penalties of combined centres. The need for effective fissile

  14. Large scale molecular simulations of nanotoxicity.

    Science.gov (United States)

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, on how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there are also growing evidences showing that nanotoxicity can have implications in de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting enzyme MMP-9, and graphene is illustrated to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells. © 2014 Wiley Periodicals, Inc.

  15. Food appropriation through large scale land acquisitions

    International Nuclear Information System (INIS)

    Cristina Rulli, Maria; D’Odorico, Paolo

    2014-01-01

    The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crops yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300–550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190–370 million people could be supported by this land without closing of the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations. (letter)

  16. Large-scale tides in general relativity

    Energy Technology Data Exchange (ETDEWEB)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  17. Large scale digital atlases in neuroscience

    Science.gov (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  18. Food appropriation through large scale land acquisitions

    Science.gov (United States)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crops yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing of the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.

  19. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention in large area and low cost color reflective displays. This invention is inspired by the heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffractive of visible light form the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  20. State-scale evaluation of renewable electricity policy. The role of renewable electricity credits and carbon taxes

    International Nuclear Information System (INIS)

    Levin, Todd; Thomas, Valerie M.; Lee, Audrey J.

    2011-01-01

    We have developed a state-scale version of the MARKAL energy optimization model, commonly used to model energy policy at the US national scale and internationally. We apply the model to address state-scale impacts of a renewable electricity standard (RES) and a carbon tax in one southeastern state, Georgia. Biomass is the lowest cost option for large-scale renewable generation in Georgia; we find that electricity can be generated from biomass co-firing at existing coal plants for a marginal cost above baseline of 0.2-2.2 cents/kWh and from dedicated biomass facilities for 3.0-5.5 cents/kWh above baseline. We evaluate the cost and amount of renewable electricity that would be produced in-state and the amount of out-of-state renewable electricity credits (RECs) that would be purchased as a function of the REC price. We find that in Georgia, a constant carbon tax to 2030 primarily promotes a shift from coal to natural gas and does not result in substantial renewable electricity generation. We also find that the option to offset a RES with renewable electricity credits would push renewable investment out-of-state. The tradeoff for keeping renewable investment in-state by not offering RECs is an approximately 1% additional increase in the levelized cost of electricity. (author)
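
    The compliance trade-off described above (in-state renewable generation versus out-of-state RECs) can be sketched with a simple merit-order rule; the cost figures are loosely converted from the ranges in the abstract, while the demand level and REC prices are illustrative.

    ```python
    # Sketch of the RES compliance choice: fill the requirement with in-state options that are
    # cheaper than the REC price, and buy RECs for the remainder.
    def compliance_mix(required_mwh, rec_price, in_state_options):
        """in_state_options: list of (label, marginal cost $/MWh above baseline, available MWh)."""
        in_state, recs = [], required_mwh
        for label, cost, avail in sorted(in_state_options, key=lambda o: o[1]):
            if cost < rec_price and recs > 0:
                take = min(avail, recs)
                in_state.append((label, take))
                recs -= take
        return in_state, recs

    options = [("co-firing", 12.0, 2.0e6),          # ~0.2-2.2 c/kWh above baseline -> ~$2-22/MWh
               ("dedicated biomass", 42.0, 4.0e6)]   # ~3.0-5.5 c/kWh above baseline -> ~$30-55/MWh
    for rec_price in (10.0, 25.0, 60.0):             # $/MWh, i.e. 1.0, 2.5 and 6.0 cents/kWh
        print(rec_price, compliance_mix(5.0e6, rec_price, options))
    ```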

  1. Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Chen, P.C.

    1992-01-01

    The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff soil site in Hualien, Taiwan, that historically has had slightly more destructive earthquakes than Lotung. The LSST is a joint effort among many interested parties. Electric Power Research Institute (EPRI) and Taipower are the organizers of the program and have the lead in planning and managing the program. Other organizations participating in the LSST program are the US Nuclear Regulatory Commission, the Central Research Institute of Electric Power Industry, the Tokyo Electric Power Company, the Commissariat A L'Energie Atomique, Electricite de France and Framatome. The LSST was initiated in January 1990, and is envisioned to be five years in duration. Based on the assumption of stiff soil, confirmed by soil boring and geophysical results, the test model was designed to provide data needed for SSI studies covering: free-field input, nonlinear soil response, non-rigid body SSI, torsional response, kinematic interaction, spatial incoherency and other effects. Taipower had the lead in design of the test model and received significant input from other LSST members. Questions raised by LSST members were on embedment effects, model stiffness, base shear, and openings for equipment. This paper describes progress in site preparation, design and construction of the model and development of an instrumentation plan

  2. Nonideal ultrathin mantle cloak for electrically large conducting cylinders.

    Science.gov (United States)

    Liu, Shuo; Zhang, Hao Chi; Xu, He-Xiu; Cui, Tie Jun

    2014-09-01

    Based on the concept of the scattering cancellation technique, we propose a nonideal ultrathin mantle cloak that can efficiently suppress the total scattering cross sections of an electrically large conducting cylinder (over one free-space wavelength). The cloaking mechanism is investigated in depth based on the Mie scattering theory and is simultaneously interpreted from the perspective of far-field bistatic scattering and near-field distributions. We remark that, unlike the perfect transformation-optics-based cloak, this nonideal cloaking technique is mainly designed to minimize simultaneously several scattering multipoles of a relatively large geometry over a considerably broad bandwidth. Numerical simulations and experimental results show that the antiscattering ability of the metasurface gives rise to excellent total scattering reduction of the electrically large cylinder and remarkable electric-field restoration around the cloak. The outstanding cloaking performance, together with the features of an ultralow profile, flexibility, and easy fabrication, predicts promising applications at microwave frequencies.

  3. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology. Enables practitioners to study distributed large scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  4. Developing Large-Scale Bayesian Networks by Composition

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...

  5. Aging assessment of large electric motors in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Villaran, M.; Subudhi, M. [Brookhaven National Lab., Upton, NY (United States)]

    1996-03-01

    Large electric motors serve as the prime movers to drive high capacity pumps, fans, compressors, and generators in a variety of nuclear plant systems. This study examined the stressors that cause degradation and aging in large electric motors operating in various plant locations and environments. The operating history of these machines in nuclear plant service was studied by review and analysis of failure reports in the NPRDS and LER databases. This was supplemented by a review of motor designs, and their nuclear and balance of plant applications, in order to characterize the failure mechanisms that cause degradation, aging, and failure in large electric motors. A generic failure modes and effects analysis for large squirrel cage induction motors was performed to identify the degradation and aging mechanisms affecting various components of these large motors, the failure modes that result, and their effects upon the function of the motor. The effects of large motor failures upon the systems in which they are operating, and on the plant as a whole, were analyzed from failure reports in the databases. The effectiveness of the industry's large motor maintenance programs was assessed based upon the failure reports in the databases and reviews of plant maintenance procedures and programs.

  6. Aging assessment of large electric motors in nuclear power plants

    International Nuclear Information System (INIS)

    Villaran, M.; Subudhi, M.

    1996-03-01

    Large electric motors serve as the prime movers to drive high capacity pumps, fans, compressors, and generators in a variety of nuclear plant systems. This study examined the stressors that cause degradation and aging in large electric motors operating in various plant locations and environments. The operating history of these machines in nuclear plant service was studied by review and analysis of failure reports in the NPRDS and LER databases. This was supplemented by a review of motor designs, and their nuclear and balance of plant applications, in order to characterize the failure mechanisms that cause degradation, aging, and failure in large electric motors. A generic failure modes and effects analysis for large squirrel cage induction motors was performed to identify the degradation and aging mechanisms affecting various components of these large motors, the failure modes that result, and their effects upon the function of the motor. The effects of large motor failures upon the systems in which they are operating, and on the plant as a whole, were analyzed from failure reports in the databases. The effectiveness of the industry's large motor maintenance programs was assessed based upon the failure reports in the databases and reviews of plant maintenance procedures and programs
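
    The generic failure modes and effects analysis mentioned in both records can be illustrated with a small, entirely hypothetical table of induction-motor failure modes ranked by a conventional risk priority number (severity × occurrence × detection); the components and scores below are textbook placeholders and are not taken from the Brookhaven study.

```python
# Hypothetical FMEA-style ranking for a squirrel cage induction motor.
# The failure modes are typical examples; the 1-10 severity/occurrence/
# detection scores are placeholders, not values from the cited study.
failure_modes = [
    # (component, failure mode, severity, occurrence, detection)
    ("stator winding", "insulation breakdown",  9, 4, 6),
    ("rotor",          "broken rotor bar",      7, 3, 7),
    ("bearings",       "lubricant degradation", 6, 7, 4),
    ("shaft/coupling", "misalignment",          5, 5, 5),
]

def rpn(severity, occurrence, detection):
    """Risk priority number used in conventional FMEA."""
    return severity * occurrence * detection

ranked = sorted(failure_modes, key=lambda m: rpn(*m[2:]), reverse=True)
for component, mode, s, o, d in ranked:
    print(f"{component:15s} {mode:22s} RPN = {rpn(s, o, d)}")
```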

  7. Design Performance Standards for Large Scale Wind Farms

    DEFF Research Database (Denmark)

    Gordon, Mark

    2009-01-01

    This document presents, discusses and provides a general guide on electrical performance standard requirements for connection of large scale onshore wind farms into HV transmission networks. Experiences presented here refer mainly to technical requirements and issues encountered during the process...... of connection into the Eastern Australian power system under the Rules and guidelines set out by AEMC and NEMMCO (AEMO). Where applicable some international practices are also mentioned. Standards are designed to serve as a technical envelope under which wind farm proponents design the plant and maintain...... ongoing technical compliance of the plant during its operational lifetime. This report is designed to provide general technical information for the wind farm connection engineer to be aware of during the process of connection, registration and operation of wind power plants interconnected into the HV TSO...

  8. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  9. Large scale road network generalization for vario-scale map

    NARCIS (Netherlands)

    Suba, R.; Meijers, B.M.; Van Oosterom, P.J.M.

    2015-01-01

    The classical approach for road network generalization consists of producing multiple maps, for a different scale or purpose, from a single detailed data source and quite often roads are represented by line objects. Our target is the generalization of a road network for the whole scale range from

  10. Analysis of Electrically Large Antennas using Fast Physical Optics

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Viskum, Hans-Henrik; Meincke, Peter

    2015-01-01

    The design of electrically large antennas can be a significant challenge for computational electromagnetics (CEM) tools, particularly during the final stages of the design process where there are strict requirements for the accuracy. In the present paper, we consider the use of a newly developed ...

  11. Power quality load management for large spacecraft electrical power systems

    Science.gov (United States)

    Lollar, Louis F.

    1988-01-01

    In December, 1986, a Center Director's Discretionary Fund (CDDF) proposal was granted to study power system control techniques in large space electrical power systems. Presented are the accomplishments in the area of power system control by power quality load management. In addition, information concerning the distortion problems in a 20 kHz ac power system is presented.

  12. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009
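
    The record refers to the authors' own LSA Fortran routines; as a rough stand-in, the sketch below applies a generic limited-memory quasi-Newton solver (SciPy's L-BFGS-B) to a 10,000-variable extended Rosenbrock problem, to illustrate the kind of large-scale smooth minimization these algorithms target.

```python
# Large-scale smooth minimization with a limited-memory quasi-Newton method.
# This uses SciPy's generic L-BFGS-B solver, not the LSA routines of the
# record; the extended Rosenbrock function is a standard large-scale test case.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

n = 10_000
x0 = np.full(n, -1.2)
res = minimize(rosenbrock, x0, jac=rosenbrock_grad, method="L-BFGS-B",
               options={"maxiter": 2000})
print(res.fun, res.nit)   # objective value and iteration count at termination
```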

  13. Node localization algorithm of wireless sensor network for large electrical equipment monitoring application

    DEFF Research Database (Denmark)

    Chen, Qinyin; Hu, Y.; Chen, Zhe

    2016-01-01

    Node localization is an important technology for Wireless Sensor Network (WSN) applications. An improved 3D node localization algorithm is proposed in this paper, which is based on a Multi-dimensional Scaling (MDS) node localization algorithm for large electrical equipment...... is based on the combination of each node's Time of Arrival (TOA) and Angle of Arrival (AOA) measurement information. Simulation results show the proposed algorithm outperforms the traditional Multidimensional Scaling (MDS-MAP) algorithm....
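
    The classical multidimensional scaling step at the core of MDS-MAP-style localization can be sketched as follows: pairwise range estimates (e.g. from TOA) are double-centred and factorised to recover relative 3D coordinates up to a rigid transformation. This is only the generic building block, not the improved TOA/AOA algorithm proposed in the record.

```python
# Classical multidimensional scaling (the core of MDS-MAP-style localization):
# recover relative node coordinates from a matrix of pairwise distances.
# True positions here are random stand-ins used to synthesise the ranges.
import numpy as np

rng = np.random.default_rng(0)
true_pos = rng.uniform(0, 10, size=(20, 3))          # 20 nodes in a 10 m cube
D = np.linalg.norm(true_pos[:, None, :] - true_pos[None, :, :], axis=-1)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                  # centring matrix
B = -0.5 * J @ (D ** 2) @ J                          # double-centred Gram matrix
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:3]                        # three largest eigenvalues
coords = V[:, idx] * np.sqrt(w[idx])                 # relative 3D coordinates

# The coordinates match the true layout up to rotation/translation/reflection;
# compare inter-node distances to verify (essentially zero for noise-free ranges).
D_hat = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
print(np.max(np.abs(D_hat - D)))
```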

  14. State-of-the-art of large scale biogas plants

    International Nuclear Information System (INIS)

    Prisum, J.M.; Noergaard, P.

    1992-01-01

    A survey of the technological state of large scale biogas plants in Europe treating manure is given. 83 plants are in operation at present. Of these, 16 are centralised digestion plants. Transport costs at centralised digestion plants amount to between 25 and 40 percent of the total operational costs. Various transport equipment is used. Most large scale digesters are CSTRs, but serial, contact, 2-step, and plug-flow digesters are also found. Construction materials are mostly steel and concrete. Mesophilic digestion is most common (56%), thermophilic digestion is used in 17% of the plants, and combined mesophilic and thermophilic digestion is used in 28% of the centralised plants. Mixing of digester content is performed with gas injection, propellers, and gas-liquid displacement. Heating is carried out using external or internal heat exchangers. Heat recovery is only used in Denmark. Gas purification equipment is commonplace, but not often needed. Several plants use separation of the digested manure, often as part of a post-treatment/-purification process or for the production of 'compost'. Screens, sieve belt separators, centrifuges and filter presses are employed. The use of biogas varies considerably. In some cases, combined heat and power stations are supplying the grid and district heating systems. Other plants use only the electricity or only the heat. (au)

  15. Fast large-scale reionization simulations

    NARCIS (Netherlands)

    Thomas, Rajat M.; Zaroubi, Saleem; Ciardi, Benedetta; Pawlik, Andreas H.; Labropoulos, Panagiotis; Jelic, Vibor; Bernardi, Gianni; Brentjens, Michiel A.; de Bruyn, A. G.; Harker, Geraint J. A.; Koopmans, Leon V. E.; Pandey, V. N.; Schaye, Joop; Yatawatta, Sarod; Mellema, G.

    2009-01-01

    We present an efficient method to generate large simulations of the epoch of reionization without the need for a full three-dimensional radiative transfer code. Large dark-matter-only simulations are post-processed to produce maps of the redshifted 21-cm emission from neutral hydrogen. Dark matter

  16. Large scale structure from viscous dark matter

    CERN Document Server

    Blas, Diego; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-01-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale $k_m$ for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale $k_m$, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with $N$-body simulations up to scales $k = 0.2\,h/$Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to varia...

  17. Large scale wind power penetration in Denmark

    DEFF Research Database (Denmark)

    Karnøe, Peter

    2013-01-01

    The Danish electricity generating system prepared to adopt nuclear power in the 1970s, yet has become the world's front runner in wind power with a national plan for 50% wind power penetration by 2020. This paper deploys a sociotechnical perspective to explain the historical transformation...... of "networks of power" via the interactions of politics, the techno-physics of electrons, and the market setting. The Danish case is about how an assemblage of new agencies has reorganized and reshaped society by building a new sociotechnical network. This has rendered developments highly unpredictable...

  18. Mapping the electrical properties of large-area graphene

    DEFF Research Database (Denmark)

    Bøggild, Peter; Mackenzie, David; Whelan, Patrick Rebsdorf

    2017-01-01

    a more accurate analysis of the graphene film. We review and compare three different, but complementary approaches that rely either on fixed contacts (dry laser lithography), movable contacts (micro four point probes) and non-contact (terahertz time-domain spectroscopy) between the probe and the graphene......, and a high measurement effort per device. In this topical review, we provide a comprehensive overview of the issues that need to be addressed by any large-area characterisation method for electrical key performance indicators, with emphasis on electrical uniformity and on how this can be used to provide...

  19. Scale interaction in a mixing layer. The role of the large-scale gradients

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales already observed in other works depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and with a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has been additionally investigated.

  20. Large-scale lateral nanowire arrays nanogenerators

    Science.gov (United States)

    Wang, Zhong L; Xu, Chen; Qin, Yong; Zhu, Guang; Yang, Rusen; Hu, Youfan; Zhang, Yan

    2014-01-07

    In a method of making a generating device, a plurality of spaced apart elongated seed members are deposited onto a surface of a flexible non-conductive substrate. An elongated conductive layer is applied to a top surface and a first side of each seed member, thereby leaving an exposed second side opposite the first side. A plurality of elongated piezoelectric nanostructures is grown laterally from the second side of each seed layer. A second conductive material is deposited onto the substrate adjacent each elongated first conductive layer so as to be coupled to the distal end of each of the plurality of elongated piezoelectric nanostructures. The second conductive material is selected so as to form a Schottky barrier between the second conductive material and the distal end of each of the plurality of elongated piezoelectric nanostructures and so as to form an electrical contact with the first conductive layer.

  1. Thermal power generation projects "Large Scale Solar Heating"; EU-Thermie-Projekte "Large Scale Solar Heating"

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large-Scale Solar Heating" programme for a Europe-wide development of this technology. The resulting demonstration programme was judged favourably by the experts but was not immediately (1996) accepted for financial subsidies. In November 1997 the EU Commission provided 1.5 million ECU, which allowed an updated project proposal to be realised. By mid-1997 a smaller project had already been approved, which had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer. (orig.)

  2. Modeling Human Behavior at a Large Scale

    Science.gov (United States)

    2012-01-01

    ... online messages, along with text analysis of those messages, enables us to predict the progress of a contagion from person to person at a population scale ... we represent probabilities and likelihoods with their log-counterparts to avoid arithmetic underflow. At testing time, we are interested in ... patterns of people taking taxis, rating movies, choosing a cell phone provider, or sharing music are best explained and predicted by the habits of ...
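
    One concrete detail that survives the fragmentary abstract is the use of log-probabilities to avoid arithmetic underflow; a minimal illustration of that standard trick is given below, with invented numbers that are not from the report.

```python
# Working with log-probabilities to avoid underflow, as mentioned in the
# abstract. Multiplying many small likelihoods underflows to 0.0 in floating
# point; summing their logs (and combining with logsumexp) does not.
import numpy as np
from scipy.special import logsumexp

likelihoods = np.full(2000, 1e-5)          # 2000 tiny per-observation likelihoods

naive_product = np.prod(likelihoods)       # underflows to exactly 0.0
log_product = np.sum(np.log(likelihoods))  # stays finite: -2000 * ln(1e5)

# Normalising over several hypotheses in log space:
log_joint = np.array([log_product, log_product - 3.0, log_product - 10.0])
log_posterior = log_joint - logsumexp(log_joint)
print(naive_product, log_product, np.exp(log_posterior))
```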

  3. Large-Scale Atmosphere-Ocean Coupling.

    Science.gov (United States)

    1984-05-01

    ... a stronger tropical teleconnection linking the reference region and the Caribbean Sea and vicinities is ... large-scale coupling between the tropical atmosphere and ocean in relation to the El Nino/Southern Oscillation (ENSO) phenomenon is studied using both ... connection between Pacific tropical diabatic heating anomalies and the extratropical circulation system over the North Pacific from East Asia to the ...

  4. Large scale features and assessment of spatial scale ...

    Indian Academy of Sciences (India)

    We have proposed here a new analysis procedure to assess the minimum spatial scale at which the two datasets ... Xie et al. (2007) studied the performance of five ...... The financial support received from ISRO RESPOND Programme is gratefully acknowledged. TMPA data were obtained from NASA website.

  5. Active Removal of Large Debris: Electrical Propulsion Capabilities

    Science.gov (United States)

    Billot Soccodato, Carole; Lorand, Anthony; Perrin, Veronique; Couzin, Patrice; FontdecabaBaig, Jordi

    2013-08-01

    The risk that large space debris, dead satellites or rocket bodies in Low Earth Orbit pose to current operational spacecraft and to future markets was identified several years ago. Many potential solutions and architectures have been traded, with the main objective of reducing the cost per debris. Based on cost considerations, driven especially by launch cost, solutions built on multi-debris capture capacities appear much more affordable. Recent technological advances in electric propulsion and solar power generation can be combined into high-potential vehicles for debris removal. The present paper reports the first results of a study funded by CNES that addresses all-electric solutions for large debris removal. Some analyses are still in progress, as the study will end in August. The study compares the efficiency of in-orbit active removal of typical debris using electric propulsion. The electric engine performance used in this analysis is demonstrated through a 2012/2013 PPS 5000 on-ground test campaign. The traded missions are based on a launch into LEO, the possible vehicle architectures with capture means or contactless operation, and the selection of a deorbiting or reorbiting strategy. For the contactless strategy, the ion-beam shepherd effect on the debris will be addressed. The vehicle architecture and the performance of the overall system will be presented, showing the adequacy and the limits of each solution.
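
    To give a feel for why electric propulsion is attractive for multi-debris missions, the sketch below applies the rocket equation with an assumed Hall-thruster specific impulse and an assumed per-debris delta-v budget; all numbers are round placeholders, not results from the CNES study or the PPS 5000 test campaign.

```python
# Rough propellant budget for an electric debris-removal tug using the
# Tsiolkovsky rocket equation. Isp, delta-v per debris and vehicle mass are
# assumed round numbers for illustration, not values from the study.
import math

g0 = 9.80665            # m/s^2
isp_electric = 1700.0   # s, assumed Hall-effect thruster
isp_chemical = 320.0    # s, assumed bipropellant, for comparison
dv_per_debris = 400.0   # m/s, assumed tug delta-v budget per deorbit maneuver
m_dry = 1500.0          # kg, assumed tug dry mass

def propellant_mass(m_final, dv, isp):
    """Propellant needed to apply dv, ending the burn at mass m_final."""
    return m_final * (math.exp(dv / (g0 * isp)) - 1.0)

for label, isp in (("electric", isp_electric), ("chemical", isp_chemical)):
    mp = propellant_mass(m_dry, dv_per_debris, isp)
    print(f"{label:9s} Isp={isp:6.0f} s -> {mp:6.1f} kg propellant per debris")
```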

  6. Recent Progress in Large-Scale Structure

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    I will discuss recent progress in the understanding of how to model galaxy clustering. While recent analyses have focussed on the baryon acoustic oscillations as a probe of cosmology, galaxy redshift surveys contain a lot more information than the acoustic scale. In extracting this additional information three main issues need to be well understood: nonlinear evolution of matter fluctuations, galaxy bias and redshift-space distortions. I will present recent progress in modeling these three effects that pave the way to constraining cosmology and galaxy formation with increased precision.

  7. Large-scale cryopumping for controlled fusion

    International Nuclear Information System (INIS)

    Pittenger, L.C.

    1977-01-01

    Vacuum pumping by freezing out or otherwise immobilizing the pumped gas is an old concept. In several plasma physics experiments for controlled fusion research, cryopumping has been used to provide clean, ultrahigh vacua. Present day fusion research devices, which rely almost universally upon neutral beams for heating, are high gas throughput systems, the pumping of which is best accomplished by cryopumping in the high mass-flow, moderate-to-high vacuum regime. Cryopumping systems have been developed for neutral beam injection systems on several fusion experiments (HVTS, TFTR) and are being developed for the overall pumping of a large, high-throughput mirror containment experiment (MFTF). In operation, these large cryopumps will require periodic defrosting, some schemes for which are discussed, along with other operational considerations. The development of cryopumps for fusion reactors is begun with the TFTR and MFTF systems. Likely paths for necessary further development for power-producing reactors are also discussed

  8. Towards Large-scale Inconsistency Measurement

    OpenAIRE

    Thimm, Matthias

    2015-01-01

    We investigate the problem of inconsistency measurement on large knowledge bases by considering stream-based inconsistency measurement, i.e., we investigate inconsistency measures that cannot consider a knowledge base as a whole but process it within a stream. For that, we present, first, a novel inconsistency measure that is apt to be applied to the streaming case and, second, stream-based approximations for the new and some existing inconsistency measures. We conduct an extensive empirical ...

  9. Large scale fuel oil production experiments

    Energy Technology Data Exchange (ETDEWEB)

    1943-08-04

    The effect of the coal throughput and the composition of the pasting oil, in particular the effect of different middle oil contents in the pasting oil, was previously tested in small-scale coal hydrogenation experiments. Possibilities of increasing the throughput through the converter when producing heavy oil together with middle oil are shown in this work. The proper industrial detail for the production of heavy oil had to be developed first on a semi-commercial plant. The Upper Silesian coal was used to study the production of gasoline, middle oil, and heavy oil at 700 atm in a 1.6 m/sup 3/ converter and to relate the results with the small scale experiments (10-liter converter). Paste heat exchange was carried out successfully. The following experiments, among others, were carried out: mixed coals were hydrogenated to 100% gasoline plus middle oil, to 65% gasoline and middle oil and 35% heavy oil, as well as 50% gasoline and middle oil plus 50% heavy oil, in part with the usual iron catalyst combination and in part with the sulfurated Bayer mass together with the iron sulfate and sulfigran. The Heinity coal had been hydrogenated with the usual iron catalyst to 65% gasoline and middle oil plus 35% heavy oil. The important results were summarized in a table. Details of the experiments and processes used were given in 3 graphs and 42 tables.

  10. Mapping the electrical properties of large-area graphene

    Science.gov (United States)

    Bøggild, Peter; Mackenzie, David M. A.; Whelan, Patrick R.; Petersen, Dirch H.; Due Buron, Jonas; Zurutuza, Amaia; Gallop, John; Hao, Ling; Jepsen, Peter U.

    2017-12-01

    The significant progress in terms of fabricating large-area graphene films for transparent electrodes, barriers, electronics, telecommunication and other applications has not yet been accompanied by efficient methods for characterizing the electrical properties of large-area graphene. While in the early prototyping as well as research and development phases, electrical test devices created by conventional lithography have provided adequate insights, this approach is becoming increasingly problematic due to complications such as irreversible damage to the original graphene film, contamination, and a high measurement effort per device. In this topical review, we provide a comprehensive overview of the issues that need to be addressed by any large-area characterisation method for electrical key performance indicators, with emphasis on electrical uniformity and on how this can be used to provide a more accurate analysis of the graphene film. We review and compare three different, but complementary approaches that rely either on fixed contacts (dry laser lithography), movable contacts (micro four point probes) and non-contact (terahertz time-domain spectroscopy) between the probe and the graphene film, all of which have been optimized for maximal throughput and accuracy, and minimal damage to the graphene film. Of these three, the main emphasis is on THz time-domain spectroscopy, which is non-destructive, highly accurate and allows both conductivity, carrier density and carrier mobility to be mapped across arbitrarily large areas at rates that by far exceed any other known method. We also detail how the THz conductivity spectra give insights on the scattering mechanisms, and through that, the microstructure of graphene films subject to different growth and transfer processes. The perspectives for upscaling to realistic production environments are discussed.
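
    As a minimal illustration of how a sheet conductance is commonly extracted from terahertz transmission data, the sketch below inverts the thin-film (Tinkham) formula for an assumed substrate index and an assumed complex transmission ratio at a single frequency; real processing uses the full frequency-dependent spectra and the calibration described in the review.

```python
# Sheet conductance of a conducting film (e.g. graphene) from the ratio of THz
# fields transmitted through film-on-substrate and bare substrate, using the
# thin-film (Tinkham) formula  T = (1 + n_sub) / (1 + n_sub + Z0 * sigma_s).
# Values below are illustrative placeholders, not measured data.
Z0 = 376.730            # ohm, impedance of free space
n_sub = 1.95            # assumed substrate refractive index (e.g. fused silica)

def sheet_conductance(t_ratio, n_substrate=n_sub):
    """Invert the Tinkham formula for the (possibly complex) sheet conductance [S]."""
    return (1.0 + n_substrate) * (1.0 / t_ratio - 1.0) / Z0

t_measured = 0.62 + 0.03j    # assumed complex transmission ratio at one frequency
sigma_s = sheet_conductance(t_measured)
print(f"sheet conductance ~ {sigma_s.real*1e3:.2f} mS  "
      f"(sheet resistance ~ {1/sigma_s.real:.0f} ohm/sq)")
```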

  11. Networking in a Large-Scale Distributed Agile Project

    OpenAIRE

    Moe, Nils Brede; Šmite, Darja; Šāblis, Aivars; Börjesson, Anne-Lie; Andréasson, Pia

    2014-01-01

    Context: In large-scale distributed software projects the expertise may be scattered across multiple locations. Goal: We describe and discuss a large-scale distributed agile project at Ericsson, a multinational telecommunications company headquartered in Sweden. The project is distributed across four development locations (one in Sweden, one in Korea and two in China) and employs 17 teams. In such a large scale environment the challenge is to have as few dependences between teams as possible,...

  12. Variability in large-scale wind power generation: Variability in large-scale wind power generation

    Energy Technology Data Exchange (ETDEWEB)

    Kiviluoma, Juha [VTT Technical Research Centre of Finland, Espoo Finland; Holttinen, Hannele [VTT Technical Research Centre of Finland, Espoo Finland; Weir, David [Energy Department, Norwegian Water Resources and Energy Directorate, Oslo Norway; Scharff, Richard [KTH Royal Institute of Technology, Electric Power Systems, Stockholm Sweden; Söder, Lennart [Royal Institute of Technology, Electric Power Systems, Stockholm Sweden; Menemenlis, Nickie [Institut de recherche Hydro-Québec, Montreal Canada; Cutululis, Nicolaos A. [DTU, Wind Energy, Roskilde Denmark; Danti Lopez, Irene [Electricity Research Centre, University College Dublin, Dublin Ireland; Lannoye, Eamonn [Electric Power Research Institute, Palo Alto California USA; Estanqueiro, Ana [LNEG, Laboratorio Nacional de Energia e Geologia, UESEO, Lisbon Spain; Gomez-Lazaro, Emilio [Renewable Energy Research Institute and DIEEAC/EDII-AB, Castilla-La Mancha University, Albacete Spain; Zhang, Qin [State Grid Corporation of China, Beijing China; Bai, Jianhua [State Grid Energy Research Institute Beijing, Beijing China; Wan, Yih-Huei [National Renewable Energy Laboratory, Transmission and Grid Integration Group, Golden Colorado USA; Milligan, Michael [National Renewable Energy Laboratory, Transmission and Grid Integration Group, Golden Colorado USA

    2015-10-25

    The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems based on real data from multiple years. Demonstrated characteristics include probability distributions for different ramp durations, seasonal and diurnal variability and low net load events. The comparison shows regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and regions with higher variability (Quebec, Bonneville Power Administration and Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, and for regions with high variability, they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but a higher capacity factor also causes higher variability. It was also shown how wind power ramps are autocorrelated and dependent on the operating output level. When wind power was concentrated in a smaller area, there were outliers with high changes in wind output, which were not present in large areas with well-dispersed wind power.
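
    Ramp statistics of the kind quoted above (e.g. maximum 1 h ramps between roughly 10% and 30% of nominal capacity) can be computed from any hourly generation record with a few lines; the synthetic series below is only a placeholder for real per-region data.

```python
# Hourly wind-power ramp statistics as a percentage of nominal capacity.
# A smoothed random walk stands in for a real regional generation record.
import numpy as np

rng = np.random.default_rng(1)
capacity_mw = 1000.0
raw = rng.normal(0, 1, 8760).cumsum()
gen_mw = capacity_mw * (raw - raw.min()) / (raw.max() - raw.min())  # synthetic series

ramps_pct = np.diff(gen_mw) / capacity_mw * 100.0    # 1-hour ramps, % of capacity

print(f"max |1 h ramp|      : {np.max(np.abs(ramps_pct)):.1f} % of capacity")
print(f"99th pct |1 h ramp| : {np.percentile(np.abs(ramps_pct), 99):.1f} %")
print(f"std of 1 h ramps    : {np.std(ramps_pct):.2f} %")

# 4-hour ramps, for comparison with longer ramp durations in the paper
ramps_4h = (gen_mw[4:] - gen_mw[:-4]) / capacity_mw * 100.0
print(f"max |4 h ramp|      : {np.max(np.abs(ramps_4h)):.1f} %")
```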

  13. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    Since the 1990s, the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. A prominent example of a contemporary spatial development approach...... is the IBA Emscher Park in the Ruhr area in Germany. Over a 10 years period (1988-1998), more than a 100 local transformation projects contributed to the transformation from an industrial to a post-industrial region. The current paradigm of planning by projects reinforces the role of the design disciplines...... within the development of our urban landscapes. At the same time, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective, the formulation of the design problem or brief becomes an integrated part of the design process. This paper...

  14. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    is the IBA Emscher Park in the Ruhr area in Germany. Over a 10 years period (1988-1998), more than a 100 local transformation projects contributed to the transformation from an industrial to a post-industrial region. The current paradigm of planning by projects reinforces the role of the design disciplines......Since the 1990s, the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. A prominent example of a contemporary spatial development approach......-construction designers simultaneously unfold local design issues, i.e. the design brief and possible design interventions. These non-linear explorative proceedings are similar to what researchers in science and technology studies have described as translation. Translation is a central concept within actor...

  15. Large Scale Experiments on Spacecraft Fire Safety

    DEFF Research Database (Denmark)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier

    2012-01-01

    structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1...... to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model...... validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being developed by an international topical team that is collaboratively defining the experiment requirements and performing supporting analysis, experimentation...

  16. Responses in large-scale structure

    Science.gov (United States)

    Barreira, Alexandre; Schmidt, Fabian

    2017-06-01

    We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and n soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a "bias" expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber k. Further, the responses up to n-th order completely describe the (n+2)-point function in the squeezed limit, i.e. with two hard and n soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance Cov^NG_{l=0}(k_1, k_2), in the limit where one of the modes, say k_2, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for k_2 ≲ 0.06 h Mpc^{-1}, and for any k_1 ≳ 2 k_2. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.
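
    In the spirit of the abstract, the angle-averaged response expansion and its first-order squeezed-limit consequence can be written schematically as below; this is a generic leading-order form for orientation only, not the paper's full kinematic decomposition.

```latex
% Schematic response expansion of the local small-scale power spectrum in the
% presence of a long-wavelength overdensity \delta_L, and the angle-averaged,
% leading-order squeezed-limit relation it implies for the bispectrum.
P(k \mid \delta_L) \;=\; P(k)\Big[\,1 \;+\; \sum_{n\ge 1} \frac{1}{n!}\, R_n(k)\, \delta_L^{\,n}\Big],
\qquad
\lim_{q\to 0}\ \big\langle B(k,\,q) \big\rangle_{\hat q} \;\simeq\; R_1(k)\, P(k)\, P_L(q).
```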

  17. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  18. Grid Support in Large Scale PV Power Plants using Active Power Reserves

    OpenAIRE

    Craciun, Bogdan-Ionut

    2014-01-01

    Photovoltaic (PV) systems are in the 3rd place in the renewable energy market, after hydro and wind power. The increased penetration of PV within the electrical power system has led to stability issues of the entire grid in terms of its reliability, availability and security of the supply. As a consequence, Large scale PV Power Plants (LPVPPs) operating in Maximum Power Point (MPP) are not supporting the electrical network, since several grid triggering events or the increased number of downw...

  19. Optimization of Large-Scale Structural Systems

    DEFF Research Database (Denmark)

    Jensen, F. M.

    solutions to small problems with one or two variables to the optimization of large structures such as bridges, ships and offshore structures. The methods used for solving these problems have evolved from being classical differential calculus and calculus of variation to very advanced numerical techniques...... In the same period of time the problems have grown in size and the ongoing research in the various engineering fields has introduced new areas to complicate the optimization task further. These are e.g. structural reliability theory (including new, more complex constraints), discrete optimization (introducing...... new narrow bounds on the optimization variables), stochastic FEM, vibration theory or multiobjective optimization. At the same time researchers always try to solve problems ahead of today's capabilities, thereby utilising current mathematical programming (MP) methods to the limit. However, when...

  20. Large-Scale Structures of Planetary Systems

    Science.gov (United States)

    Murray-Clay, Ruth; Rogers, Leslie A.

    2015-12-01

    A class of solar system analogs has yet to be identified among the large crop of planetary systems now observed. However, since most observed worlds are more easily detectable than direct analogs of the Sun's planets, the frequency of systems with structures similar to our own remains unknown. Identifying the range of possible planetary system architectures is complicated by the large number of physical processes that affect the formation and dynamical evolution of planets. I will present two ways of organizing planetary system structures. First, I will suggest that relatively few physical parameters are likely to differentiate the qualitative architectures of different systems. Solid mass in a protoplanetary disk is perhaps the most obvious possible controlling parameter, and I will give predictions for correlations between planetary system properties that we would expect to be present if this is the case. In particular, I will suggest that the solar system's structure is representative of low-metallicity systems that nevertheless host giant planets. Second, the disk structures produced as young stars are fed by their host clouds may play a crucial role. Using the observed distribution of RV giant planets as a function of stellar mass, I will demonstrate that invoking ice lines to determine where gas giants can form requires fine tuning. I will suggest that instead, disk structures built during early accretion have lasting impacts on giant planet distributions, and disk clean-up differentially affects the orbital distributions of giant and lower-mass planets. These two organizational hypotheses have different implications for the solar system's context, and I will suggest observational tests that may allow them to be validated or falsified.

  1. Goethite Bench-scale and Large-scale Preparation Tests

    Energy Technology Data Exchange (ETDEWEB)

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the

  2. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  3. Superconducting materials for large scale applications

    Energy Technology Data Exchange (ETDEWEB)

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  4. Reviving large-scale projects; La relance des grands chantiers

    Energy Technology Data Exchange (ETDEWEB)

    Desiront, A.

    2003-06-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to

  5. The place of the large electric power consumers in the electric power liberalised market

    International Nuclear Information System (INIS)

    Pavlov, Risto; Chogelja, Goran

    2001-01-01

    In this paper the basic rules of the EC Directive 96/92 of the EU are given. The implementation of the Directive into the Macedonian legislation is analysed. Also, the Directive's influence on both the large electric power consumers and the Macedonian Power System itself is presented

  6. Assessing the role of large hydro in Canada's electricity future

    International Nuclear Information System (INIS)

    Lee Pochih

    1992-01-01

    Electric power in Canada was first generated by steam in the 1880s. The use of hydroelectricity spread rapidly due to abundant water resources and the nationalization of power companies by the provinces; by 1920, 97% of Canadian electricity production came from hydroelectric plants. Thermal generation became competitive by the 1960s, when most of the best hydro sites had been developed, and nuclear generation also started gaining a share of the market. By 1991, hydroelectricity's share of Canadian power production had declined to around 60%. Hydroelectric power has long been used as an instrument of Canadian industrial policy. Given the amount and importance of utility capital expenditures, it was recognized that hydropower development could serve such policy objectives as job creation, industrial development, and macroeconomic stabilization. Creation of provincially owned utilities led to construction of large hydroelectric projects, notably in Quebec, British Columbia, Manitoba, and Newfoundland. The 20 largest hydroelectric power plants in Canada have a total installed capacity of 35,704 MW, representing ca 59% of Canada's total 1991 hydro capacity. The construction of such large projects is not expected to proceed as quickly as in the past because of environmental concerns. However, a number of factors favor continuation of development of hydro resources: a remaining potential estimated at ca 44,000 MW; simplification of electricity export regulations; more stringent air pollution standards that favor non-polluting energy sources; and a moratorium on nuclear power plants in Ontario. 4 tabs

  7. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    Science.gov (United States)

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  8. Converter applications and their influence on large electrical machines

    CERN Document Server

    Drubel, Oliver

    2013-01-01

    Converter-driven applications are used in more and more processes. Almost any installed wind farm, as well as ship drives, steel mills, several boiler feed water pumps, extruders and many other applications, operates much more efficiently and economically with variable-speed solutions. The boundary conditions for a motor or generator change if it is supplied by a converter. An electrical machine that is operated by a converter can no longer be regarded as an independent component, but is embedded in a system consisting of converter and machine. This book gives an overview of existing converter designs for large electrical machines. Methods for the appropriate calculation of machine phenomena implied by converters are derived for the power range above 500 kVA. It is shown how the converter-inherent higher voltage harmonics and pulse frequencies cause special phenomena inside the machine which can be the reason for malfunction. It is demonstrated that additional losses create additional tempe...

  9. Large-scale turbulence structures in shallow separating flows

    NARCIS (Netherlands)

    Talstra, H.

    2011-01-01

    The Ph.D. thesis “Large-scale turbulence structures in shallow separating flows” by Harmen Talstra is the result of a Ph.D. research project on large-scale shallow-flow turbulence, which has been performed in the Environmental Fluid Mechanics Laboratory at Delft University of Technology. The

  10. Advances in Modelling of Large Scale Coastal Evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.

    1995-01-01

    The attention for climate change impact on the world's coastlines has established large scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large scale coastal evolution, are discussed.

  11. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    were the main reasons for the failure of large-scale jatropha plantations in Ethiopia. The findings in Chapter 3 show that when the use of family labour is combined with easy access to credit and technology, outgrowers on average achieve higher productivity than that obtained on large-scale plantations...

  12. The Large Scale Magnetic Field and Sunspot Cycles

    Indian Academy of Sciences (India)

    tribpo

    We report on the correlation between the large scale magnetic field and sunspot cycles during the last 80 years that was found by Makarov et al. (1999) and Makarov. & Tlatov (2000) in H α spherical harmonics of the large scale magnetic field for. 1915 1999. The sum of intensities of the low modes l = 1 and 3, A(t), was used ...

  13. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    structure and chemical purity of 99⋅1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords. Sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. 1. Introduction. Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...

  14. ACTIVE DIMENSIONAL CONTROL OF LARGE-SCALED STEEL STRUCTURES

    Directory of Open Access Journals (Sweden)

    Radosław Rutkowski

    2013-09-01

    Full Text Available The article discusses the issues of dimensional control in the construction process of large-scaled steel structures. The main focus is on the analysis of manufacturing tolerances. The article presents the procedure of tolerance analysis usage in process of design and manufacturing of large-scaled steel structures. The proposed solution could significantly improve the manufacturing process.

  15. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of large-scale magnetic field. For this purpose, we perform nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to the energy transfers from the velocity field at the forcing scales.

  16. Wind power impacts and electricity storage - a time scale perspective

    DEFF Research Database (Denmark)

    Hedegaard, Karsten; Meibom, Peter

    2012-01-01

    technologies – batteries, flow batteries, compressed air energy storage, electrolysis combined with fuel cells, and electric vehicles – are moreover categorised with respect to the time scales at which they are suited to support wind power integration. While all of these technologies are assessed suitable...

  17. Mastering Uncertainty and Risk at Multiple Time Scales in the Future Electrical Grid

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Bent, Russell W. [Los Alamos National Laboratory; Backhaus, Scott N. [Los Alamos National Laboratory

    2012-07-10

    Today's electrical grids enjoy a relatively clean separation of spatio-temporal scales yielding a compartmentalization of grid design, optimization, control and risk assessment allowing for the use of conventional mathematical tools within each area. In contrast, the future grid will incorporate time-intermittent renewable generation, operate via faster electrical markets, and tap the latent control capability at finer grid modeling scales; creating a fundamentally new set of couplings across spatiotemporal scales and requiring revolutionary advances in mathematics techniques to bridge these scales. One example is found in decade-scale grid expansion planning in which today's algorithms assume accurate load forecasts and well-controlled generation. Incorporating intermittent renewable generation creates fluctuating network flows at the hourly time scale, inherently linking the ability of a transmission line to deliver electrical power to hourly operational decisions. New operations-based planning algorithms are required, creating new mathematical challenges. Spatio-temporal scales are also crossed when the future grid's minute-scale fluctuations in network flows (due to intermittent generation) create a disordered state upon which second-scale transient grid dynamics propagate effectively invalidating today's on-line dynamic stability analyses. Addressing this challenge requires new on-line algorithms that use large data streams from new grid sensing technologies to physically aggregate across many spatial scales to create responsive, data-driven dynamic models. Here, we sketch the mathematical foundations of these problems and potential solutions.

  18. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, and numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed at large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  19. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small-scale) and from low-pass filtered (large-scale) velocity fields tends to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  20. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  1. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have described the large-scale communication architecture of the IoT in detail. In fact, the lack of uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IoT.

  2. Large-scale multi-configuration electromagnetic induction: a promising tool to improve hydrological models

    Science.gov (United States)

    von Hebel, Christian; Rudolph, Sebastian; Mester, Achim; Huisman, Johan A.; Montzka, Carsten; Weihermüller, Lutz; Vereecken, Harry; van der Kruk, Jan

    2015-04-01

    Large-scale multi-configuration electromagnetic induction (EMI) measurements use different coil configurations, i.e., coil offsets and coil orientations, to sense coil-specific depth volumes. The obtained apparent electrical conductivity (ECa) maps can be related to soil properties such as clay content, soil water content, and pore water conductivity, which are important characteristics that influence hydrological processes. Here, we use large-scale EMI measurements to investigate changes in soil texture that drive the available water supply, causing the crop development patterns that were observed in leaf area index (LAI) maps obtained from RapidEye satellite images taken after a drought period. The 20 ha test site is situated within the Ellebach catchment (Germany) and consists of a sand-and-gravel dominated upper terrace (UT) and a loamy lower terrace (LT). The large-scale multi-configuration EMI measurements were calibrated using electrical resistivity tomography (ERT) measurements at selected transects, and soil samples were taken at representative locations where changes in the electrical conductivity were observed and therefore changing soil properties were expected. By analyzing all the data, the observed LAI patterns could be attributed to buried paleo-river channel systems that contained a higher silt and clay content and provided a higher water holding capacity than the surrounding coarser material. Moreover, the measured EMI data showed the highest correlation with LAI for the deepest sensing coil offset (up to 1.9 m), which indicates that the deeper subsoil is responsible for root water uptake, especially under drought conditions. To obtain a layered subsurface electrical conductivity model that shows the subsurface structures more clearly, a novel EMI inversion scheme was applied to the field data. The obtained electrical conductivity distributions were validated with soil probes and ERT transects that confirmed the inverted lateral and vertical large-scale electrical

  3. Characteristics of large scale ionic source for JT-60

    Energy Technology Data Exchange (ETDEWEB)

    Fujiwara, Yukio; Honda, Atsushi; Inoue, Takashi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment] [and others

    1997-02-01

    Neutral beam injection (NBI) systems are expected to play an important role in plasma current drive and plasma control, not only for plasma heating but also in next-generation tokamak fusion devices such as JT-60 and ITER. The Japan Atomic Energy Research Institute has been developing high-energy, high-current ion sources for about 10 years. Commissioning tests of the first large negative ion source for JT-60 were carried out from June to October 1995. In these tests, a 400 keV, 13.5 A deuterium negative ion beam was successfully accelerated for 0.12 s at a low gas pressure of 0.22 Pa. It was also shown that the electron current could be controlled efficiently even for deuterium negative ion beams. The test results are described in detail. (G.K.)

  4. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10^5 and 10^8 and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied

  5. Robust scene stitching in large scale mobile mapping

    OpenAIRE

    Schouwenaars, Filip; Timofte, Radu; Van Gool, Luc

    2013-01-01

    Schouwenaars F., Timofte R., Van Gool L., ''Robust scene stitching in large scale mobile mapping'', 24th British machine vision conference - BMVC 2013, 11 pp., September 9-13, 2013, Bristol, United Kingdom.

  6. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  7. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  8. Electric and magnetic spectra from MHD to electron scales in the magnetosheath

    Science.gov (United States)

    Matteini, L.; Alexandrova, O.; Chen, C. H. K.; Lacombe, C.

    2017-04-01

    We investigate the transition of the turbulence from large to kinetic scales using Cluster observations. Simultaneous spectra of magnetic and electric fields in the Earth's magnetosheath from magnetohydrodynamic (MHD) to electron scales are presented for the first time. While the two spectra have approximately similar behaviour in the fluid-MHD regime, they show different trends in the kinetic range. As the magnetic field spectrum steepens at ion scales, the electric field spectrum is characterized by a shallower power law continuing down to electron scales. Such an evolution is consistent with theoretical expectations, assuming that the turbulence is dominated by highly oblique k-vectors and that between ion and electron scales the electric field is governed by the non-ideal terms in the generalized Ohm's law. This leads to an expected linear increase of the electric-to-magnetic ratio of fluctuations, consistent with the observations presented here. The influence of local whistler wave activity on electron-scale spectra is also discussed.

  9. USAGE OF DISSIMILARITY MEASURES AND MULTIDIMENSIONAL SCALING FOR LARGE SCALE SOLAR DATA ANALYSIS

    Data.gov (United States)

    National Aeronautics and Space Administration — Juan M. Banda, Rafal Angryk. ABSTRACT: This work describes the...

  10. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses on ...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....

  11. Achieving Agility and Stability in Large-Scale Software Development

    Science.gov (United States)

    2013-01-16

    Fragments from the presentation slides (© 2013 Carnegie Mellon University, SEI virtual event "Architecting in a Complex World"): the talk, given by SEI staff in the Research, Technology, and System Solutions Program, addresses achieving agility and stability in large-scale software development, with audience polling on the development processes in use (e.g., agile development using Scrum, XP practices, and test-driven development).

  12. Temporal Variation of Large Scale Flows in the Solar Interior ...

    Indian Academy of Sciences (India)

    tribpo

    Abstract. We attempt to detect short term temporal variations in the rotation rate and other large scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations by about 10 m/s on the scale of ...

  13. The Large Scale Structure: Polarization Aspects R. F. Pizzo

    Indian Academy of Sciences (India)

    © Indian Academy of Sciences. The Large Scale Structure: Polarization Aspects. R. F. Pizzo. ASTRON, Postbus 2, 7990 AA Dwingeloo, The Netherlands e-mail: pizzo@astron.nl. Abstract. Polarized radio emission is detected at various scales in the Universe. In this document, I will briefly review our knowledge on polar-.

  14. Large scale particle image velocimetry with helium filled soap bubbles

    Science.gov (United States)

    Bosbach, Johannes; Kühn, Matthias; Wagner, Claus

    2009-03-01

    The application of Particle Image Velocimetry (PIV) to the measurement of flows on large scales is a challenging necessity, especially for the investigation of convective air flows. Combining helium-filled soap bubbles as tracer particles with high-power, quality-switched solid-state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full-scale double-aisle aircraft cabin mock-up for the validation of Computational Fluid Dynamics simulations.

  15. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size, e.g., in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  16. Inflationary susceptibilities, duality and large-scale magnetic fields generation

    CERN Document Server

    Giovannini, Massimo

    2013-01-01

    We investigate what can be said about the interaction of scalar fields with Abelian gauge fields during a quasi-de Sitter phase of expansion and under the assumption that the electric and the magnetic susceptibilities do not coincide. The duality symmetry, transforming the magnetic susceptibility into the inverse of the electric susceptibility, exchanges the magnetic and electric power spectra. The mismatch between the two susceptibilities determines an effective refractive index affecting the evolution of the canonical fields. The constraints imposed by the duration of the inflationary phase and by the magnetogenesis requirements pin down the rate of variation of the susceptibilities that is consistent with the observations of the magnetic field strength over astrophysical and cosmological scales but avoids back-reaction problems. The parameter space of this magnetogenesis scenario is wider than in the case when the susceptibilities are equal, as it happens when the inflaton or some other spectator field is ...

  17. Challenges with Scaling Scrum to Large-Scale Software Development: A Case Study

    OpenAIRE

    Jensen, Simen

    2017-01-01

    Agile software development methods have become popular since the introduction of the Agile Manifesto in 2001. Agile methods such as Scrum were originally created for small co-located teams but have been adopted by large-scale development organizations. The accompanying challenges of using Scrum in large-scale development are not fully explored and understood. This thesis aims to explore and identify challenges regarding large-scale agile development in a global software development organizati...

  18. Electric dipole moments in the MSSM at large tan β

    International Nuclear Information System (INIS)

    Demir, D.; Olive, K.A.; Pospelov, M.; Ritz, A.

    2003-11-01

    Within the minimal supersymmetric standard model (MSSM), the large tan β regime can lead to important modifications in the pattern of CP-violating sources contributing to low energy electric dipole moments (EDMs). In particular, four-fermion CP-violating interactions induced by Higgs exchange should be accounted for alongside the constituent EDMs of quarks and electrons. To this end, we present a comprehensive analysis of three low energy EDM observables - namely the EDMs of thallium, mercury and the neutron - at large tan β, in terms of one- and two-loop contributions to the constituent EDMs and four-fermion interactions. We concentrate on the constrained MSSM as well as the MSSM with non-universal Higgs masses, and include the CP-violating phases of μ and A. Our results indicate that the atomic EDMs receive significant corrections from four-fermion operators, especially when Im(A) is the only CP-violating source, whereas the neutron EDM remains relatively insensitive to these effects. As a consequence, in a large portion of the parameter space, one cannot infer a separate bound on the electron EDM via the experimental constraint on the thallium EDM. Furthermore, we find that the electron EDM can be greatly reduced due to the destructive interference of one- and two-loop contributions with the latter being dominated by virtual staus. (orig.)

  19. Nucleon electric dipole moments in high-scale supersymmetric models

    International Nuclear Information System (INIS)

    Hisano, Junji; Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi

    2015-01-01

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of anomaly and gauge mediation, the gluino gives an additional contribution to the nucleon EDMs. In this paper, we study the effect of the CP-violating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluate the nucleon and electron EDMs in these scenarios. We find that in generic high-scale SUSY models, the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron EDM in order to discriminate among high-scale SUSY models.

  20. Algorithms for Electromagnetic Scattering Analysis of Electrically Large Structures

    DEFF Research Database (Denmark)

    Borries, Oscar Peter

    Accurate analysis of electrically large antennas is often done using either Physical Optics (PO) or Method of Moments (MoM), where the former typically requires fewer computational resources but has a limited application regime. This study has focused on fast variants of these two methods......, by several authors, been dismissed as being too memory intensive. In the present work, we demonstrate for the first time that by including a range of both novel and previously presented modifications to the standard MLFMM implementation, HO MLFMM can achieve both memory reduction and significant speed...... band. Accelerating PO is an entirely different matter. A few authors have discussed applying the Fast-PO technique to far fields, achieving relative errors of 0.1%−1% for moderately sized scatterers. For near-fields, the state-of-the-art implementation of Fast-PO has several difficulties, in particular...

  1. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    Science.gov (United States)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  2. Cost Overruns in Large-scale Transportation Infrastructure Projects

    DEFF Research Database (Denmark)

    Cantarelli, Chantal C; Flyvbjerg, Bent; Molin, Eric J. E

    2010-01-01

    Managing large-scale transportation infrastructure projects is difficult due to frequent misinformation about the costs which results in large cost overruns that often threaten the overall project viability. This paper investigates the explanations for cost overruns that are given in the literature...

  3. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Recent increases in commodity prices have led some governments and private investors to purchase or lease large tracts of land in foreign countries for producing their own food and biofuel. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  4. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  5. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured...

  6. A large-scale industrial CT's data transfer system

    International Nuclear Information System (INIS)

    Chen Xuesong

    2004-01-01

    The large-scale industrial CT generates a large amount of data when it operates. To guarantee reliable real-time transfer of these data, the author designs a scheme based on WLAN technology, and the bottleneck caused by the data-rate limitation is resolved using multi-threading. (author)

  7. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar

    2010-01-01

    Large-scale inverse problems and associated uncertainty quantification has become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca

  8. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
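
    As a toy illustration of the grouping idea (not the paper's scalable, grid-based algorithms), the following Python sketch greedily groups requests whose pickup points lie within an assumed distance threshold until an assumed cab capacity is reached; all parameter values and data are invented for the example.

      # Toy greedy grouping of "close-by" trip requests. This is only an
      # illustration of the problem setting, not the scalable algorithms
      # described in the paper.
      import numpy as np

      rng = np.random.default_rng(4)
      pickups = rng.uniform(0, 10, size=(30, 2))   # (x, y) pickup locations in km

      def greedy_group(points, capacity=4, max_dist=1.5):
          """Group requests around a seed request until the cab is full."""
          unassigned = list(range(len(points)))
          groups = []
          while unassigned:
              seed = unassigned.pop(0)
              group = [seed]
              for r in list(unassigned):
                  if len(group) >= capacity:
                      break
                  if np.linalg.norm(points[r] - points[seed]) <= max_dist:
                      group.append(r)
                      unassigned.remove(r)
              groups.append(group)
          return groups

      groups = greedy_group(pickups)
      print(f"{len(pickups)} requests grouped into {len(groups)} cab trips")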

  9. Enhancing microelectronics education with large-scale student projects

    OpenAIRE

    Rumpf, Clemens; Lidtke, Aleksander; Weddell, Alex; Maunder, Rob

    2016-01-01

    This paper discusses the benefits of using large-scale projects, involving many groups of students with different backgrounds, in the education of undergraduate microelectronics engineering students. The benefits of involving students in large, industry-like projects are first briefly reviewed. The organisation of undergraduate programmes is presented, and it is described how students can be involved in such large projects, while maintaining compatibility with undergraduate programmes. The ge...

  10. Acoustic Studies of the Large Scale Ocean Circulation

    Science.gov (United States)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  11. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
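
    The following Python sketch illustrates the general idea of prototype-based low-rank kernel approximation (a Nystroem-style construction). It is not the authors' PVM code; the RBF kernel, the number of prototypes and the random data are placeholder assumptions.

      # Nystroem-style low-rank approximation of a kernel matrix using a small
      # set of prototype points: K(X, X) ~ K(X, P) K(P, P)^{-1} K(P, X).
      import numpy as np

      def rbf_kernel(A, B, gamma=1.0):
          """Pairwise RBF kernel between the rows of A and the rows of B."""
          sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      def low_rank_kernel(X, prototypes, gamma=1.0, reg=1e-8):
          C = rbf_kernel(X, prototypes, gamma)           # n x m cross-kernel
          W = rbf_kernel(prototypes, prototypes, gamma)  # m x m prototype kernel
          W_inv = np.linalg.pinv(W + reg * np.eye(len(prototypes)))
          return C @ W_inv @ C.T                         # n x n approximation

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 10))
      P = X[rng.choice(len(X), size=20, replace=False)]  # 20 prototype vectors
      K_approx = low_rank_kernel(X, P)
      print(K_approx.shape)  # (500, 500), built from O(n*m) kernel evaluations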

  12. Solar Total Energy System: Large Scale Experiment, Shenandoah, Georgia. Final technical progress report. Volume I. Section 1. Conclusions and recommendations. Section 2. Systems requirements. [1.72-MW thermal and 383.6-kW electric power for 42,000 ft² knitwear plant]

    Energy Technology Data Exchange (ETDEWEB)

    None,

    1977-10-17

    The Stearns-Roger Engineering Company conceptual design of ERDA's Large Scale Experiment No. 2 (LSE No. 2) is described. The various LSE's are part of ERDA's Solar Total Energy Program (STES) and a separate activity of the National Solar Thermal Power Systems Program. The object of this LSE is to design, construct, test, evaluate and operate a STES for the purpose of obtaining experience with large scale hardware systems and to establish engineering capability for subsequent demonstration projects. This particular LSE is to be located at Shenandoah, Georgia and will provide power to the Bleyle knitwear factory. The Solar Total Energy system is sized to supply 1.720 MW thermal power (both space heating and process heat) and 383.6 kW electrical power. The STES is sized for the extended knitwear plant of 3902 m² (42,000 sq ft) which will eventually employ 300 people. The section on conclusions and recommendations described the baseline design recommendation, facility requirements, the solar system, power conversion system, schedules and cost, and additional candidate systems. The systems requirements analysis includes detailed descriptions and analyses of the following subtasks: load analysis, energy displacement, local laws and ordinances, life cycle cost, health and safety, environmental assessment, reliability assessment, and utility interface. (WHK)

  13. EIGENANALYSIS OF LARGE ELECTRIC POWER SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Elwood, D. M.

    1991-02-01

    Modern electric power systems are large and complicated, and, in many regions, the generation and transmission systems are operating near their limits. Eigenanalysis is one of the tools used to analyze the behavior of these systems. Standard eigenvalue methods require that simplified models be used for these analyses; however, these simplified models do not adequately capture all of the characteristics of large power systems. Thus, new eigenanalysis methods that can analyze detailed power system models are required. The primary objectives of the work described in this report were 1) to determine the availability of eigenanalysis algorithms that are better than the methods currently being applied and that could be used on large power systems, and 2) to determine whether vector supercomputers could be used to significantly increase the size of power systems that can be analyzed by a standard power system eigenanalysis code. At the request of the Bonneville Power Administration, the Pacific Northwest Laboratory (PNL) conducted a literature review of methods currently used for the eigenanalysis of large electric power systems, as well as of general eigenanalysis algorithms that are applicable to large power systems. PNL found that a number of methods are currently being used for this purpose, and all seem to work fairly well. Furthermore, most of the general eigenanalysis techniques that are applicable to power systems have been tried on these systems, and most seem to work fairly well. One of these techniques, a variation of the Arnoldi method, has been incorporated into a standard power system eigenanalysis package. Overall, it appears that the general-purpose eigenanalysis methods are more versatile than most of the other methods that have been used for power system eigenanalysis. In addition, they are generally easier to use. For some problems, however, it appears that some of the other eigenanalysis methods may be better. Power systems eigenanalysis requires the
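
    As a hedged illustration of the Arnoldi approach mentioned in the abstract, the sketch below uses SciPy's ARPACK-based eigs routine to extract a few rightmost eigenvalues of a large sparse matrix without forming a dense model. The matrix here is random, not a real power system state matrix, and the parameter choices are arbitrary.

      # Arnoldi (ARPACK) eigenanalysis of a large sparse matrix: compute the
      # eigenvalues of largest real part, the candidates for unstable or
      # poorly damped modes in small-signal stability studies.
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import eigs

      n = 2000
      A = sp.random(n, n, density=1e-3, format="csr", random_state=0)
      A = A - 0.5 * sp.eye(n)          # shift so most eigenvalues have Re < 0

      vals, vecs = eigs(A, k=10, which="LR", tol=1e-8, maxiter=10000)
      for lam in sorted(vals, key=lambda z: -z.real):
          print(f"lambda = {lam.real:+.4f} {lam.imag:+.4f}j")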

  14. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationary based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activities. We investigate for individual sites the exceedance probability in which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).

  15. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background Large-scale molecular evolutionary analyses of protein-coding sequences require a number of preparatory inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein-coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  16. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  17. Modeling large scale cohesive sediment transport affected by small scale biological activity

    NARCIS (Netherlands)

    Borsje, Bastiaan Wijnand; de Vries, Mindert; Hulscher, Suzanne J.M.H.; de Boer, Gerben J.

    2008-01-01

    Biological activity on the bottom of the seabed is known to have significant influence on the dynamics of cohesive sediment on a small spatial and temporal scale. In this study, we aim to understand the large-scale effects of small-scale biological activity. Hereto, effects of biology are

  18. Energy System Analysis of Large-Scale Integration of Wind Power

    International Nuclear Information System (INIS)

    Lund, Henrik

    2003-11-01

    The paper presents the results of two research projects conducted by Aalborg University and financed by the Danish Energy Research Programme. Both projects include the development of models and system analysis with focus on large-scale integration of wind power into different energy systems. Market reactions and the ability to exploit exchange on the international market for electricity by locating exports in hours of high prices are included in the analyses. This paper focuses on results which are valid for energy systems in general. The paper presents the ability of different energy systems and regulation strategies to integrate wind power. The ability is expressed by three factors: the first is the degree of electricity excess production caused by fluctuations in wind and CHP heat demands; the second is the ability to utilise wind power to reduce CO2 emissions in the system; and the third is the ability to benefit from exchange of electricity on the market. Energy systems and regulation strategies are analysed in the range of a wind power input from 0 to 100% of the electricity demand. Based on the Danish energy system, in which 50 per cent of the electricity demand is produced in CHP, a number of future energy systems with CO2 reduction potentials are analysed, i.e. systems with more CHP, systems using electricity for transportation (battery or hydrogen vehicles) and systems with fuel-cell technologies. For the present and such potential future energy systems different regulation strategies have been analysed, i.e. the inclusion of small CHP plants into the regulation task of electricity balancing and grid stability and investments in electric heating, heat pumps and heat storage capacity. Also the potential of energy management has been analysed. The results of the analyses make it possible to compare short-term and long-term potentials of different strategies of large-scale integration of wind power
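
    A toy sketch of the excess-production factor discussed above (not the energy system model used in the paper): for an invented demand profile, a must-run CHP block and a synthetic wind profile, it estimates the share of demand that would appear as excess electricity at different wind capacities. All numbers are illustrative assumptions.

      # Toy estimate of excess electricity production versus wind capacity.
      import numpy as np

      rng = np.random.default_rng(1)
      hours = 8760
      demand = 4.0 + 1.0 * np.sin(np.linspace(0, 2 * np.pi * 365, hours))  # GW
      chp_must_run = 2.0                                 # GW, heat-driven output
      wind_profile = rng.beta(2, 5, size=hours)          # normalised wind output

      def excess_fraction(wind_capacity_gw):
          wind = wind_capacity_gw * wind_profile
          excess = np.maximum(wind + chp_must_run - demand, 0.0)
          return excess.sum() / demand.sum()

      for cap in [0, 2, 4, 6, 8]:
          print(f"wind capacity {cap} GW -> excess {excess_fraction(cap):.1%} of demand")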

  19. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
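
    The following CPU-side Python sketch shows the per-pixel linked list idea in a minimal form (the thesis targets a GPU implementation); the image size and segment payload fields are assumptions made for the example.

      # Per-pixel linked lists: each screen pixel stores the head index of a
      # singly linked list of projected pathline segments, so segments can be
      # filtered and re-shaded per pixel without re-reading the original data.
      import numpy as np

      W, H = 64, 64
      head = -np.ones((H, W), dtype=np.int64)   # -1 means "empty list"
      next_idx, seg_data = [], []               # node pool: next pointer + payload

      def insert_segment(x, y, payload):
          """Push a segment onto the linked list of pixel (x, y)."""
          node = len(seg_data)
          seg_data.append(payload)
          next_idx.append(head[y, x])           # new node points to the old head
          head[y, x] = node                     # pixel now points to the new node

      def segments_at(x, y):
          """Walk the list stored at one pixel."""
          node, out = head[y, x], []
          while node != -1:
              out.append(seg_data[node])
              node = next_idx[node]
          return out

      insert_segment(10, 20, {"pathline_id": 3, "depth": 0.7})
      insert_segment(10, 20, {"pathline_id": 8, "depth": 0.2})
      print(segments_at(10, 20))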

  20. Dynamic scaling and large scale effects in turbulence in compressible stratified fluid

    International Nuclear Information System (INIS)

    Pharasi, Hirdesh K.; Bhattacharjee, Jayanta K.

    2016-01-01

    We consider the propagation of sound in a turbulent fluid which is confined between two horizontal parallel plates, maintained at different temperatures. In the homogeneous fluid, Staroselsky et al. had predicted a divergent sound speed at large length scales. Here we find a divergent sound speed and a vanishing expansion coefficient at large length scales. Dispersion relation and the question of scale invariance at large distance scales lead to these results. - Highlights: • Turbulence in a stratified fluid has been studied in the Boussinesq approximation. • We extend this study to include density fluctuations due to pressure fluctuations. • For a homogeneous weakly compressible fluid the sound speed is known to become scale dependent. • For the stratified fluid we show that the expansion coefficient is also scale dependent. • Our results are based on general dynamic scaling arguments rather than detailed calculation.

  1. Heterogeneous grain-scale response in ferroic polycrystals under electric field

    DEFF Research Database (Denmark)

    Daniels, John E.; Majkut, Marta; Cao, Qingua

    2016-01-01

    -ray diffraction (3D-XRD) is used to resolve the non-180° ferroelectric domain switching strain components of 191 grains from the bulk of a polycrystalline electro-ceramic that has undergone an electric-field-induced phase transformation. It is found that while the orientation of a given grain relative...... to the field direction has a significant influence on the phase and resultant domain texture, there are large deviations from the average behaviour at the grain scale. It is suggested that these deviations arise from local strain and electric field neighbourhoods being highly heterogeneous within the bulk...

  2. Scaling Law between Urban Electrical Consumption and Population in China

    Science.gov (United States)

    Zhu, Xiaowu; Xiong, Aimin; Li, Liangsheng; Liu, Maoxin; Chen, X. S.

    The relation between the household electrical consumption Y and population N for Chinese cities in 2006 has been investigated with the power-law scaling form Y = A_0 N^β. It is found that Chinese cities can be divided into three categories characterized by different scaling exponents β. The first category, which includes the biggest and coastal cities of China, has a scaling exponent β > 1. The second category, which includes mostly the cities of central China, has a scaling exponent β ≈ 1. The third category, which consists of the cities in northwestern China, has a scaling exponent β < 1. For cities with β > 1, there is also a fixed-point population N_f. If the initial population N(0) > N_f, the population increases very fast with time and diverges within a finite time. If the initial population N(0) < N_f, the population decreases with time and finally collapses. The pattern of population evolution in a city is determined by its scaling exponent and initial population.
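
    A minimal sketch of estimating the exponent β in Y = A_0 N^β by least squares in log-log space; the data below are synthetic, not the Chinese city data analysed in the study.

      # Fit a power law Y = A0 * N**beta by ordinary least squares on log-log data.
      import numpy as np

      rng = np.random.default_rng(2)
      N = rng.uniform(1e5, 1e7, size=200)                    # city populations
      Y = 0.5 * N**1.15 * rng.lognormal(0.0, 0.1, size=200)  # consumption with noise

      beta, logA0 = np.polyfit(np.log(N), np.log(Y), 1)      # slope, intercept
      print(f"beta = {beta:.3f}")                            # ~1.15 (superlinear)
      print(f"A0   = {np.exp(logA0):.3f}")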

  3. Impacts of Large Scale Wind Penetration on Energy Supply Industry

    Directory of Open Access Journals (Sweden)

    John Kabouris

    2009-11-01

    Full Text Available Large penetration of Renewable Energy Sources (RES) impacts the Energy Supply Industry (ESI) in many ways, leading to a fundamental change in electric power systems. It raises a number of technical challenges for Transmission System Operators (TSOs), Distribution System Operators (DSOs) and Wind Turbine Generator (WTG) manufacturers. This paper aims to present in a thorough and coherent way the redrawn picture for energy systems under these conditions. Topics related to emergent technical challenges, the technical solutions required and, finally, the impact on the ESI due to large wind power penetration are analyzed. Finally, general conclusions are drawn about the current and future state of the ESI and general directions are recommended.

  4. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension of the parameter vector. A new design matrix free algorithm is proposed for computing the penalized maximum likelihood estimate for GLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implemented and available via the R package glamlasso. It combines several ideas...
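
    The sketch below illustrates the design-matrix-free idea for a two-way array model: the Kronecker-structured product X vec(B), with X = kron(X2, X1), is computed as vec(X1 B X2^T) without ever forming X. This is a generic illustration, not the glamlasso implementation, and the dimensions are arbitrary.

      # Design-matrix-free linear predictor for a 2D generalized linear array model.
      import numpy as np

      rng = np.random.default_rng(3)
      n1, p1, n2, p2 = 200, 10, 150, 8
      X1 = rng.normal(size=(n1, p1))
      X2 = rng.normal(size=(n2, p2))
      B = rng.normal(size=(p1, p2))              # coefficient array

      # Matrix-free product: no (n1*n2) x (p1*p2) design matrix is ever built.
      eta_array = X1 @ B @ X2.T                  # n1 x n2 linear predictor array

      # Check against the explicit (and much larger) Kronecker formulation.
      X = np.kron(X2, X1)
      eta_vec = X @ B.flatten(order="F")         # column-major vec(B)
      print(np.allclose(eta_vec, eta_array.flatten(order="F")))  # True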

  5. Control Algorithms for Large-scale Single-axis Photovoltaic Trackers

    Directory of Open Access Journals (Sweden)

    Dorian Schneider

    2012-01-01

    Full Text Available The electrical yield of large-scale photovoltaic power plants can be greatly improved by employing solar trackers. While fixed-tilt superstructures are stationary and immobile, trackers move the PV-module plane in order to optimize its alignment to the sun. This paper introduces control algorithms for single-axis trackers (SAT), including a discussion of optimal alignment and backtracking. The results are used to simulate and compare the electrical yield of fixed-tilt and SAT systems. The proposed algorithms have been field tested and are in operation in solar parks worldwide.
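
    A hedged sketch of a common single-axis tracking law with backtracking for a horizontal north-south axis; this is a textbook-style formulation, not necessarily the exact algorithm of the paper, and the ground coverage ratio and sun positions below are assumptions.

      # Single-axis tracker rotation angle with backtracking. Azimuth is
      # measured clockwise from north; 0 degrees rotation means horizontal.
      import numpy as np

      def tracker_angle(sun_elevation, sun_azimuth, gcr=0.35):
          el, az = np.radians(sun_elevation), np.radians(sun_azimuth)
          x_east, z_up = np.cos(el) * np.sin(az), np.sin(el)
          ideal = np.arctan2(x_east, z_up)               # true-tracking angle
          # Backtracking: reduce the rotation when neighbouring rows would
          # shade each other, i.e. when cos(ideal) < ground coverage ratio.
          correction = np.arccos(np.clip(np.cos(ideal) / gcr, -1.0, 1.0))
          backtracked = ideal - np.sign(ideal) * correction
          return np.degrees(np.where(np.cos(ideal) < gcr, backtracked, ideal))

      for elev, azim in [(5, 95), (30, 120), (60, 180), (20, 260)]:
          print(elev, azim, round(float(tracker_angle(elev, azim)), 1))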

  6. Line Capacity Expansion and Transmission Switching in Power Systems With Large-Scale Wind Power

    DEFF Research Database (Denmark)

    Villumsen, Jonas Christoffer; Bronmo, Geir; Philpott, Andy B.

    2013-01-01

    By 2020, electricity production from wind power should constitute nearly 50% of electricity demand in Denmark. In this paper we look at optimal expansion of the transmission network in order to integrate 50% wind power in the system, while minimizing total fixed investment cost and expected cost...... of power generation. We allow for active switching of transmission elements to reduce congestion effects caused by Kirchhoff's voltage law. Results show that actively switching transmission lines may yield a better utilization of transmission networks with large-scale wind power and increase wind power...... penetration. Furthermore, it is shown that transmission switching is likely to affect the optimal line capacity expansion plan....

  7. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.

  8. Privacy Preserving Large-Scale Rating Data Publishing

    Directory of Open Access Journals (Sweden)

    Xiaoxun Sun

    2013-02-01

    Full Text Available Large scale rating data usually contain both ratings of sensitive and non-sensitive issues, and the ratings of sensitive issues belong to personal privacy. Even when survey participants do not reveal any of their ratings, their survey records are potentially identifiable by using information from other public sources. In order to protect privacy in large-scale rating data, it is important to propose new privacy principles which consider the properties of the rating data. Moreover, given a privacy principle, how to efficiently determine whether the rating data satisfy the required privacy principle is crucial as well. Furthermore, if the privacy principle is not satisfied, an efficient method is needed to securely publish the large-scale rating data. In this paper, all these problems will be addressed.

  9. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-digital converter recording to a laptop. The results of recording surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases exceeding the permissible values, recommendations were developed to reduce the level of seismic impact.

  10. Partitioning Large Scale Deep Belief Networks Using Dropout

    OpenAIRE

    Huang, Yanping; Zhang, Sai

    2015-01-01

    Deep learning methods have shown great promise in many practical applications, ranging from speech recognition, visual object recognition, to text processing. However, most of the current deep learning methods suffer from scalability problems for large-scale applications, forcing researchers or users to focus on small-scale problems with fewer parameters. In this paper, we consider a well-known machine learning model, deep belief networks (DBNs) that have yielded impressive classification per...

  11. Spatiotemporal property and predictability of large-scale human mobility

    Science.gov (United States)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on individual scale have been extensively studied due to the application potential on human behavior prediction and recommendation, and control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both websites browse and mobile towers visit. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of scale-free random walks known as Lévy flight. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high possibility for prediction. Furthermore, a scale-free featured mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
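
    The following Python sketch simulates an exploration / preferential-return mechanism of the kind referred to in the abstract; the parameter values and the omission of the Gaussian exploration-tendency assumption make it a simplified illustration only, not the authors' fitted model.

      # Exploration / preferential-return mobility model: with probability
      # rho * S**(-gamma) (S = number of visited locations) the agent explores
      # a new location, otherwise it returns to a known location with
      # probability proportional to its visit count.
      import numpy as np

      def simulate(steps=10_000, rho=0.6, gamma=0.21, seed=0):
          rng = np.random.default_rng(seed)
          visits = {0: 1}                              # location id -> visit count
          next_new = 1
          for _ in range(steps):
              S = len(visits)
              if rng.random() < rho * S ** (-gamma):   # exploration
                  visits[next_new] = 1
                  next_new += 1
              else:                                    # preferential return
                  locs = list(visits)
                  counts = np.array([visits[l] for l in locs], dtype=float)
                  choice = rng.choice(locs, p=counts / counts.sum())
                  visits[int(choice)] += 1
          return visits

      visits = simulate()
      freq = np.sort(np.array(list(visits.values())))[::-1]
      print(f"distinct locations: {len(freq)}, top-5 visit counts: {freq[:5]}")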

  12. Large-scale liquid scintillation detectors for solar neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Benziger, Jay B.; Calaprice, Frank P. [Princeton University Princeton, Princeton, NJ (United States)

    2016-04-15

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed. (orig.)

  13. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data.... Here we find evidence that the Planck CMB maps contain residual radiation in the loop areas, which can be linked to some of the large scale CMB anomalies: the point-parity asymmetry, the alignment of quadrupole and octupole and the dipole modulation.

  14. Large-scale structure in the universe: Theory vs observations

    International Nuclear Information System (INIS)

    Kashlinsky, A.; Jones, B.J.T.

    1990-01-01

    A variety of observations constrain models of the origin of large scale cosmic structures. We review here the elements of current theories and comment in detail on which of the current observational data provide the principal constraints. We point out that enough observational data have accumulated to constrain (and perhaps determine) the power spectrum of primordial density fluctuations over a very large range of scales. We discuss the theories in the light of observational data and focus on the potential of future observations in providing even (and ever) tighter constraints. (orig.)

  15. The survey of large-scale query classification

    Science.gov (United States)

    Zhou, Sanduo; Cheng, Kefei; Men, Lijun

    2017-04-01

    In recent years, much research has been done on query classification. This paper reviews the recent research on query classification in detail, covering the sources of query logs, category systems, feature extraction methods, classification methods and evaluation methodology. It then discusses the issues of large-scale query classification and the solution approaches that combine big-data analysis systems. The survey shows that several problems and challenges remain, such as the lack of an authoritative category system and evaluation methodology, the efficiency of feature extraction methods, uncertain performance on large-scale query logs, and further query classification on big-data platforms.

  16. The CLASSgal code for Relativistic Cosmological Large Scale Structure

    CERN Document Server

    Di Dio, Enea; Lesgourgues, Julien; Durrer, Ruth

    2013-01-01

    We present some accurate and efficient computations of large scale structure observables, obtained with a modified version of the CLASS code which is made publicly available. This code includes all relativistic corrections and computes both the power spectrum Cl(z1,z2) and the corresponding correlation function xi(theta,z1,z2) in linear perturbation theory. For Gaussian initial perturbations, these quantities contain the full information encoded in the large scale matter distribution at the level of linear perturbation theory. We illustrate the usefulness of our code for cosmological parameter estimation through a few simple examples.

  17. [Issues of large scale tissue culture of medicinal plant].

    Science.gov (United States)

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai

    2014-09-01

    In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzes the status, problems and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of medicinal plant production, problems remain, such as the stability of the material, the safety of transgenic medicinal plants and the optimization of culture conditions. Establishing a sound evaluation system tailored to the characteristics of each medicinal plant is the key measure to assure the sustainable development of large-scale tissue culture of medicinal plants.

  18. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach...... into a PD process model that (1) emphasizes PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporates improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extends initial...... and discuss three challenges to address when dealing with large-scale systems development....

  19. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    The paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. A finite element model of the top flange connection system is established with the finite element analysis software MSC Marc/Mentat and its fatigue strain is analyzed; the flange fatigue load cases are simulated with the Bladed software; the flange fatigue load spectrum is obtained with the rain-flow counting method; and, finally, the fatigue analysis of the top flange is carried out with the fatigue analysis software MSC Fatigue and the Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis of large-scale wind turbine generators and have practical engineering value.
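    The rain-flow counting plus Palmgren-Miner workflow mentioned above reduces, at its core, to summing cycle ratios against an S-N curve. The following sketch assumes a Basquin-type S-N curve N(S) = C * S**(-m) and an already rain-flow-counted load spectrum; the constants and the example spectrum are placeholders, not the flange material data or loads used in the paper.

    ```python
    def palmgren_miner_damage(spectrum, C=2.0e12, m=3.0):
        """Cumulative fatigue damage by the Palmgren-Miner linear rule.

        spectrum : iterable of (stress_range_MPa, n_cycles) pairs, e.g. the
                   output of rain-flow counting a simulated load history.
        C, m     : Basquin S-N curve parameters, N_allowed = C * S**(-m)
                   (placeholder values, not real flange data).
        Returns the damage sum D; D >= 1.0 indicates predicted fatigue failure.
        """
        damage = 0.0
        for stress_range, n_cycles in spectrum:
            n_allowed = C * stress_range ** (-m)
            damage += n_cycles / n_allowed
        return damage

    if __name__ == "__main__":
        # Hypothetical rain-flow counted spectrum: (stress range in MPa, cycle count)
        spectrum = [(40.0, 2.0e6), (80.0, 2.0e5), (120.0, 1.0e4)]
        print(f"Miner damage sum: {palmgren_miner_damage(spectrum):.3f}")
    ```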

  20. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    Recent explosion of biological data brings a great challenge for the traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. A memory-shared architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme for data partition and reduction is designed in our method in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
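    As a small single-machine illustration of the pipeline described above (similarity-matrix construction split across workers, followed by affinity propagation on the precomputed similarities), the sketch below uses Python's multiprocessing for the matrix blocks and scikit-learn's AffinityPropagation with affinity='precomputed'. The data are synthetic and the setup is a stand-in for, not a reproduction of, the shared-memory/distributed architecture in the paper.

    ```python
    import numpy as np
    from multiprocessing import Pool
    from sklearn.cluster import AffinityPropagation

    def _similarity_rows(args):
        """Negative squared Euclidean distances from a block of rows to all points."""
        block, data = args
        diff = data[block, None, :] - data[None, :, :]
        return -np.einsum("ijk,ijk->ij", diff, diff)

    def parallel_similarity(data, n_workers=4):
        """Build the full similarity matrix block-by-block across processes."""
        blocks = np.array_split(np.arange(len(data)), n_workers)
        with Pool(n_workers) as pool:
            parts = pool.map(_similarity_rows, [(b, data) for b in blocks])
        return np.vstack(parts)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy "expression profiles": three well-separated groups of samples
        data = np.vstack([rng.normal(c, 0.3, size=(30, 5)) for c in (0.0, 3.0, 6.0)])
        S = parallel_similarity(data)
        labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(S)
        print("clusters found:", len(set(labels)))
    ```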

  1. A new international role for large electric utilities

    International Nuclear Information System (INIS)

    Johnson, P. M.

    1993-01-01

    According to a former premier of the province of Quebec, population pressures driving change in India, China, and South America during the next twenty-five years, and the resulting revolutionary shifts in the world's major economic axes (growth in population, in demand for consumer goods, in production capacity, and in energy demand), will demand greater international cooperation. He stressed, in particular, the role that large electrical utilities can play in this world-wide transformation. He predicted the possibility of privatization and an extended role in international energy activities for Hydro-Quebec as a result of these major demographic and economic changes in Asia and South America, and the consequent decline in the economies of the G7 countries. Major capital investments abroad, and the formation of networks of domestic and foreign partnerships in the developing world, were predicted to be the key to the survival and continuing success not only of Hydro-Quebec, but of all major utility companies

  2. Visualizing large-scale uncertainty in astrophysical data.

    Science.gov (United States)

    Li, Hongwei; Fu, Chi-Wing; Li, Yinggang; Hanson, Andrew

    2007-01-01

    Visualization of uncertainty or error in astrophysical data is seldom available in simulations of astronomical phenomena, and yet almost all rendered attributes possess some degree of uncertainty due to observational error. Uncertainties associated with spatial location typically vary significantly with scale and thus introduce further complexity in the interpretation of a given visualization. This paper introduces effective techniques for visualizing uncertainty in large-scale virtual astrophysical environments. Building upon our previous transparently scalable visualization architecture, we develop tools that enhance the perception and comprehension of uncertainty across wide scale ranges. Our methods include a unified color-coding scheme for representing log-scale distances and percentage errors, an ellipsoid model to represent positional uncertainty, an ellipsoid envelope model to expose trajectory uncertainty, and a magic-glass design supporting the selection of ranges of log-scale distance and uncertainty parameters, as well as an overview mode and a scalable WIM tool for exposing the magnitudes of spatial context and uncertainty.

  3. Entirely renewable energy-based electricity supply system (small scale)

    Energy Technology Data Exchange (ETDEWEB)

    Zahedi, A. [Monash Univ., Caulfield (Australia). Div. of Electrical and Computer Systems Engineering]

    1997-12-31

    This paper presents a system comprising a renewable source of energy and an energy storage device to smooth the power fluctuations. In order to investigate the performance of the system, an exact mathematical model of the system has been developed. Because of the non-linearity of the mathematical model, a computational method is used to investigate the performance of the system. The objective of the paper is to present an entirely renewable energy based electricity supply system (small scale), to suggest the mathematical model of the system, and to present a computational method to analyze its performance.

  4. Geometric algorithms for electromagnetic modeling of large scale structures

    Science.gov (United States)

    Pingenot, James

    With the rapid increase in the speed and complexity of integrated circuit designs, 3D full wave and time domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multi-pole Methods (MLFMM), and low-rank matrix compression techniques. These advances have been augmented with an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. First, circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and

  5. The effective field theory of cosmological large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, John Joseph M. [Stanford Univ., Stanford, CA (United States)]; Hertzberg, Mark P. [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States)]; Senatore, Leonardo [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States)]

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc^-1.

  6. Vibration amplitude rule study for rotor under large time scale

    International Nuclear Information System (INIS)

    Yang Xuan; Zuo Jianli; Duan Changcheng

    2014-01-01

    The rotor is an important part of the rotating machinery; its vibration performance is one of the important factors affecting the service life. This paper presents both theoretical analyses and experimental demonstrations of the vibration rule of the rotor under large time scales. The rule can be used for the service life estimation of the rotor. (authors)

  7. Success Factors of Large Scale ERP Implementation in Thailand

    OpenAIRE

    Rotchanakitumnuai; Siriluck

    2010-01-01

    The objective of the study is to examine the determinants of success in large-scale ERP implementation. The results indicate that large-scale ERP implementation success consists of eight factors: project management competence, knowledge sharing, ERP system quality, understanding, user involvement, business process re-engineering, top management support, and organizational readiness.

  8. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 7, Issue 2. Fractals and the Large-Scale Structure in the Universe - Introduction and Basic Concepts. A K Mittal, T R Seshadri. General Article, February 2002, pp. 6-19.

  9. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshops objectives were...

  10. Chain Analysis for large-scale Communication systems

    NARCIS (Netherlands)

    Grijpink, Jan

    2010-01-01

    The chain concept is introduced to explain how large-scale information infrastructures so often fail and sometimes even backfire. Next, the assessment framework of the doctrine of Chain-computerisation and its chain analysis procedure are outlined. In this procedure chain description precedes

  11. Water Implications of Large-Scale Land Acquisitions in Ghana

    Directory of Open Access Journals (Sweden)

    Timothy Olalekan Williams

    2012-06-01

    The paper offers recommendations which can help the government to achieve its stated objective of developing a "policy framework and guidelines for large-scale land acquisitions by both local and foreign investors for biofuels that will protect the interests of investors and the welfare of Ghanaian farmers and landowners".

  12. Origin of large-scale cell structure in the universe

    International Nuclear Information System (INIS)

    Zel'dovich, Y.B.

    1982-01-01

    A qualitative explanation is offered for the characteristic global structure of the universe, wherein 'black' regions devoid of galaxies are surrounded on all sides by closed, comparatively thin, 'bright' layers populated by galaxies. The interpretation rests on some very general arguments regarding the growth of large-scale perturbations in a cold gas

  13. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  14. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  15. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 7, Issue 4. Fractals and the Large-Scale Structure in the Universe - Is the Cosmological Principle Valid? A K Mittal, T R Seshadri. General Article, April 2002, pp. 39-47.

  16. Evaluating Large-scale National Public Management Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Morten Balle

    This article explores differences and similarities between two evaluations of large-scale administrative reforms which were carried out in the 2000s: The evaluation of the Norwegian NAV reform (EVANAV) and the evaluation of the Danish Local Government Reform (LGR). We provide a comparative analysis...

  17. Invertebrates or iron: does large-scale opencast mining impact ...

    African Journals Online (AJOL)

    The results were, however, confounded by the fact that the resting eggs of pan inhabitants can remain dormant in the sediment for decades, suggesting that ... Similarly, the preservation of conservation areas and a landscape-wide management system were proposed to ensure that large-scale ecological processes are not ...

  18. Reconsidering Replication: New Perspectives on Large-Scale School Improvement

    Science.gov (United States)

    Peurach, Donald J.; Glazer, Joshua L.

    2012-01-01

    The purpose of this analysis is to reconsider organizational replication as a strategy for large-scale school improvement: a strategy that features a "hub" organization collaborating with "outlet" schools to enact school-wide designs for improvement. To do so, we synthesize a leading line of research on commercial replication to construct a…

  19. The large scale microwave background anisotropy in decaying particle cosmology

    International Nuclear Information System (INIS)

    Panek, M.

    1987-06-01

    We investigate the large-scale anisotropy of the microwave background radiation in cosmological models with decaying particles. The observed value of the quadrupole moment combined with other constraints gives an upper limit on the redshift of the decay z_d < 3-5. 12 refs., 2 figs

  20. Resolute large scale mining company contribution to health services of

    African Journals Online (AJOL)

    Introduction: In 1995 the Tanzanian Government reformed the mining industry, and the new policy allowed the involvement of multinational companies, but the communities living near the new large-scale gold mines were expected to benefit from the industry in terms of socio-economics, health, education, employment, safe drinking ...

  1. Large Scale Magnetic Fields: Density Power Spectrum in Redshift ...

    Indian Academy of Sciences (India)


    ...magnetic fields to have a significant impact on the large-scale structure at present. Magnetic fields of a more recent ... are produced at the time of inflation in the very early universe. Larger surveys like the on-going ... fields and their impact on the redshift-space power spectrum and give our main results. In section 4 we summarize our ...

  2. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...... procedure with the dual variables....
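    The negotiation procedure can be illustrated generically: each unit solves a small local problem for a given price (dual variable), and a coordinator adjusts the price by subgradient steps until the aggregate consumption meets the balance target. The sketch below assumes quadratic local costs and box constraints purely for illustration; it is not the authors' exact formulation, and all parameters are invented.

    ```python
    import numpy as np

    def local_response(price, a, x_ref, cap):
        """Each unit minimises 0.5*a*(x - x_ref)**2 + price*x over 0 <= x <= cap."""
        return np.clip(x_ref - price / a, 0.0, cap)

    def dual_balance(target, a, x_ref, cap, step=0.05, iters=200):
        """Subgradient ascent on the price (dual variable) until demand balances."""
        price = 0.0
        for _ in range(iters):
            x = local_response(price, a, x_ref, cap)
            imbalance = x.sum() - target      # violation of the coupling constraint
            price += step * imbalance         # raise the price if consuming too much
        return price, x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n = 50                                # number of flexible storage units
        a = rng.uniform(1.0, 3.0, n)          # local cost curvatures
        x_ref = rng.uniform(0.5, 1.5, n)      # preferred consumption per unit
        cap = np.full(n, 2.0)
        price, x = dual_balance(target=40.0, a=a, x_ref=x_ref, cap=cap)
        print(f"price: {price:.3f}, total consumption: {x.sum():.2f} (target 40)")
    ```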

  3. Temporal Variation of Large Scale Flows in the Solar Interior ...

    Indian Academy of Sciences (India)


    [Figure caption] Figure 2: Zonal and meridional components of the time-dependent residual velocity at a few selected depths, as marked above each panel, plotted as contours of constant velocity in the longitude-latitude plane. The left panels show the zonal component, ...

  4. The Cosmology Large Angular Scale Surveyor (CLASS) Telescope Architecture

    Science.gov (United States)

    Chuss, David T.; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Colazo, Felipe; et al.

    2014-01-01

    We describe the instrument architecture of the Johns Hopkins University-led CLASS instrument, a groundbased cosmic microwave background (CMB) polarimeter that will measure the large-scale polarization of the CMB in several frequency bands to search for evidence of inflation.

  5. Description of a Large-Scale Micro-Teaching Program.

    Science.gov (United States)

    Webb, Clark; And Others

    This report describes the implementation of a large-scale program at Brigham Young University to provide for at least one microteaching experience for each of 730 students enrolled in a beginning education course. A definition of microteaching (the creation of a miniature teaching situation under controlled conditions) and the elements which make…

  6. Small and large scale genomic DNA isolation protocol for chickpea ...

    African Journals Online (AJOL)

    Small and large scale genomic DNA isolation protocol for chickpea ( Cicer arietinum L.), suitable for molecular marker and transgenic analyses. ... Chickpea is an important food legume crop with high nutritional value. Lack of appropriate DNA isolation protocol is a limiting factor for any molecular studies of this crop.

  7. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  8. The interaction of large scale and mesoscale environment leading to ...

    Indian Academy of Sciences (India)

    Journal of Earth System Science, Volume 118, Issue 5. The interaction of large scale and mesoscale environment leading to formation of intense thunderstorms over Kolkata. Part I: Doppler radar and satellite observations. P Mukhopadhyay, M Mahakur, H A K Singh. October 2009, pp. ...

  9. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The objective of this project is to test whether the Food and Agriculture Organization's Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security can help increase accountability for large-scale land acquisitions in Mali, Nigeria, Uganda, and South ...

  10. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...

  11. A Large-Scale Earth and Ocean Phenomenon

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 10, Issue 2. Tsunamis - A Large-Scale Earth and Ocean Phenomenon. Satish R Shetye. General Article, February 2005, pp. 8-19.

  12. Large-Scale Innovation and Change in UK Higher Education

    Science.gov (United States)

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  13. Factors Influencing Uptake of a Large Scale Curriculum Innovation.

    Science.gov (United States)

    Adey, Philip S.

    Educational research has all too often failed to be implemented on a large-scale basis. This paper describes the multiplier effect of a professional development program for teachers and for trainers in the United Kingdom, and how that program was developed, monitored, and evaluated. Cognitive Acceleration through Science Education (CASE) is a…

  14. Symmetry in stochasticity: Random walk models of large-scale ...

    Indian Academy of Sciences (India)

    This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is summarized in R K ...

  15. Solving large scale crew scheduling problems by using iterative partitioning

    NARCIS (Netherlands)

    E.J.W. Abbink (Erwin)

    2008-01-01

    This paper deals with large-scale crew scheduling problems arising at the Dutch railway operator, Netherlands Railways (NS). NS operates about 30,000 trains a week. All these trains need a driver and a certain number of conductors. No available crew scheduling algorithm can solve such

  16. Solving Large Scale Crew Scheduling Problems in Practice

    NARCIS (Netherlands)

    E.J.W. Abbink (Erwin); L. Albino; T.A.B. Dollevoet (Twan); D. Huisman (Dennis); J. Roussado; R.L. Saldanha

    2010-01-01

    This paper deals with large-scale crew scheduling problems arising at the Dutch railway operator, Netherlands Railways (NS). NS operates about 30,000 trains a week. All these trains need a driver and a certain number of guards. Some labor rules restrict the duties of a certain crew base

  17. The Large Scale Magnetic Field and Sunspot Cycles

    Indian Academy of Sciences (India)


    J. Astrophys. Astr. (2000) 21, 161-162. V. I. Makarov & A. G. Tlatov, Kislovodsk Solar Station of the Pulkovo Observatory, Kislovodsk 357700, P.O. Box 145, Russia. E-mail: makarov@gao.spb.ru. Key words: Sun: magnetic field, sunspots, solar cycle. Extended abstract.

  18. Large-Scale Networked Virtual Environments: Architecture and Applications

    Science.gov (United States)

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  19. Information Tailoring Enhancements for Large-Scale Social Data

    Science.gov (United States)

    2016-09-26

    ... improved usability and navigation, (iii) improved the computational framework of Scraawl, (iv) enhanced Named Entity Recognition (NER), and (v) ... Keywords: information tailoring, large-scale analysis, OSINT.

  20. Large-Scale Assessments and Educational Policies in Italy

    Science.gov (United States)

    Damiani, Valeria

    2016-01-01

    Despite Italy's extensive participation in most large-scale assessments, their actual influence on Italian educational policies is less easy to identify. The present contribution aims at highlighting and explaining reasons for the weak and often inconsistent relationship between international surveys and policy-making processes in Italy.…

  1. Large Scale Magnetic Fields: Density Power Spectrum in Redshift ...

    Indian Academy of Sciences (India)

    2016-01-27

    Our analysis shows that if these magnetic fields originated in the early universe then it is possible to construct models for which the shape of the power spectrum agrees with the large-scale slope of the observed power spectrum. However, requiring compatibility with observed CMBR anisotropies, the ...

  2. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    ... on lien from Harish-Chandra Research Institute, Allahabad. Areas of his interest include cosmic microwave background radiation, large-scale structures in the Universe and the application of fractals to these. A K Mittal and T R Seshadri. During the last decade it has been argued by some investigators that the distribution of galax...

  3. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  4. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  5. Large scale synthesis and characterization of Ni nanoparticles by ...

    Indian Academy of Sciences (India)

    Bulletin of Materials Science, Volume 31, Issue 1. Large scale synthesis and characterization of Ni nanoparticles by solution reduction method. Huazhi Wang, Xinli Kou, Jie Zhang, Jiangong Li. Nanomaterials, February 2008, pp. 97-100.

  6. Large-Scale Systems Control Design via LMI Optimization

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav

    2015-01-01

    Roč. 44, č. 3 (2015), s. 247-253. ISSN 1392-124X. Institutional support: RVO:67985556. Keywords: combinatorial linear matrix inequalities, large-scale system, decentralized control. Subject RIV: BC - Control Systems Theory. Impact factor: 0.633, year: 2015.

  7. Optimization of FTA technology for large scale plant DNA isolation ...

    African Journals Online (AJOL)

    GRACE

    2006-05-02

    Key words: FTA™, large-scale, DNA sampling, field set-up, marker-assisted selection. ... application. The FTA™ classic card (Whatman Inc., Clifton, NJ) is a Whatman paper that has been impregnated with a patented chemical formulation that lyses cells, ... bands for both normal agarose (data not shown) and ...

  8. Firebrands and spotting ignition in large-scale fires

    Science.gov (United States)

    Eunmo Koo; Patrick J. Pagni; David R. Weise; John P. Woycheese

    2010-01-01

    Spotting ignition by lofted firebrands is a significant mechanism of fire spread, as observed in many large-scale fires. The role of firebrands in fire propagation and the important parameters involved in spot fire development are studied. Historical large-scale fires, including wind-driven urban and wildland conflagrations and post-earthquake fires, are given as...

  9. Magnesium Diboride Superconducting Coils for Electric Propulsion Systems for Large Aircraft, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — For electric propulsion systems for large aircraft it is desirable to have very light weight electric motors. Cryogenic motors offer much lighter weight than...

  10. Large scale scenario analysis of future low carbon energy options

    International Nuclear Information System (INIS)

    Olaleye, Olaitan; Baker, Erin

    2015-01-01

    In this study, we use a multi-model framework to examine a set of possible future energy scenarios resulting from R&D investments in Solar, Nuclear, Carbon Capture and Storage (CCS), Bio-fuels, Bio-electricity, and Batteries for Electric Transportation. Based on a global scenario analysis, we examine the impact on the economy of advancement in energy technologies, considering both individual technologies and the interactions between pairs of technologies, with a focus on the role of uncertainty. Nuclear and CCS have the most impact on abatement costs, with CCS mostly important at high levels of abatement. We show that CCS and Bio-electricity are complements, while most of the other energy technology pairs are substitutes. We also examine for stochastic dominance between R&D portfolios: given the uncertainty in R&D outcomes, we examine which portfolios would be preferred by all decision-makers, regardless of their attitude toward risk. We observe that portfolios with CCS tend to stochastically dominate those without CCS; and portfolios lacking CCS and Nuclear tend to be stochastically dominated by others. We find that the dominance of CCS becomes even stronger as uncertainty in climate damages increases. Finally, we show that there is significant value in carefully choosing a portfolio, as relatively small portfolios can dominate large portfolios. - Highlights: • We examine future energy scenarios in the face of R&D and climate uncertainty. • We examine the impact of advancement in energy technologies and pairs of technologies. • CCS complements Bio-electricity while most technology pairs are substitutes. • R&D portfolios without CCS are stochastically dominated by portfolios with CCS. • Higher damage uncertainty favors R&D development of CCS and Bio-electricity
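    The stochastic-dominance comparison used for the portfolios can be checked directly on sampled cost outcomes: a portfolio first-order dominates another if its empirical CDF of abatement costs lies at or above the other's everywhere (lower cost is better). The sketch below does this for invented, paired cost samples in which removing CCS adds a positive cost in every scenario; the distributions and numbers are illustrative only, not results from the study.

    ```python
    import numpy as np

    def first_order_dominates(costs_a, costs_b):
        """True if portfolio A first-order stochastically dominates B (costs: lower is better).

        A dominates B when F_A(c) >= F_B(c) at every cost level c, i.e. A is at
        least as likely as B to keep costs below any threshold.
        """
        grid = np.union1d(costs_a, costs_b)
        F_a = np.searchsorted(np.sort(costs_a), grid, side="right") / len(costs_a)
        F_b = np.searchsorted(np.sort(costs_b), grid, side="right") / len(costs_b)
        return bool(np.all(F_a >= F_b))

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        # Synthetic abatement-cost outcomes (arbitrary units) over shared scenarios
        with_ccs = rng.lognormal(mean=2.0, sigma=0.4, size=5000)
        without_ccs = with_ccs + rng.uniform(2.0, 6.0, size=5000)  # CCS removed -> higher cost
        print("portfolio with CCS dominates portfolio without CCS:",
              first_order_dominates(with_ccs, without_ccs))
    ```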

  11. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)]

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring (PHM) for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for systems of the scale on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  12. The Large-scale Effect of Environment on Galactic Conformity

    Science.gov (United States)

    Sun, Shuangpeng; Guo, Qi; Wang, Lan; Wang, Jie; Gao, Liang; Lacey, Cedric G.; Pan, Jun

    2018-04-01

    We use a volume-limited galaxy sample from the SDSS Data Release 7 to explore the dependence of galactic conformity on the large-scale environment, measured on ~4 Mpc scales. We find that the star formation activity of neighbour galaxies depends more strongly on the environment than on the activity of their primary galaxies. In under-dense regions most neighbour galaxies tend to be active, while in over-dense regions neighbour galaxies are mostly passive, regardless of the activity of their primary galaxies. At a given stellar mass, passive primary galaxies reside in higher density regions than active primary galaxies, leading to the apparently strong conformity signal. The dependence of the activity of neighbour galaxies on environment can be explained by the corresponding dependence of the fraction of satellite galaxies. Similar results are found for galaxies in a semi-analytical model, suggesting that no new physics is required to explain the observed large-scale conformity.

  13. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    DEFF Research Database (Denmark)

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei

    2012-01-01

    Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid as large-scale wind farms made up of dozens of wind turbines, and the scale of wind farms has increased recently. Due to the intermittent and variable wind source, reliability evaluation of wind farms is necessarily required. Also, because a large-scale offshore wind farm has a long repair time and a high repair cost as well as a high investment cost, it is essential to take the economic aspect into account. One of the methods to efficiently build and operate a wind farm is to construct a wind farm which is able to enhance its capability of delivering power instead of controlling an uncontrollable output of wind power. Therefore, this paper introduces a method to evaluate reliability depending upon the structure of the wind farm and to reflect the result in the planning stage of the wind farm.
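    A common way to attach numbers to such structure-dependent reliability questions is Monte Carlo sampling of component availabilities. The sketch below assumes a simple two-state (up/down) model for turbines and collector feeders in a radial layout; the failure probabilities, layout and ratings are made up, and the method is only a generic stand-in for the evaluation developed in the paper.

    ```python
    import numpy as np

    def farm_capacity_samples(n_turbines=40, turbines_per_feeder=10, p_turbine_down=0.05,
                              p_feeder_down=0.01, rated_mw=5.0, n_samples=100_000, seed=0):
        """Monte Carlo samples of available wind-farm capacity for a radial layout.

        A turbine delivers power only if it is up AND its collector feeder is up,
        so the farm structure (turbines per feeder) changes the capacity distribution.
        """
        rng = np.random.default_rng(seed)
        n_feeders = n_turbines // turbines_per_feeder
        turbine_up = rng.random((n_samples, n_turbines)) > p_turbine_down
        feeder_up = rng.random((n_samples, n_feeders)) > p_feeder_down
        feeder_up_per_turbine = np.repeat(feeder_up, turbines_per_feeder, axis=1)
        return (turbine_up & feeder_up_per_turbine).sum(axis=1) * rated_mw

    if __name__ == "__main__":
        cap = farm_capacity_samples()
        print(f"expected available capacity: {cap.mean():.1f} MW (rated 200 MW)")
        print(f"P(available < 150 MW): {(cap < 150).mean():.4f}")
    ```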

  14. Electric Generators and their Control for Large Wind Turbines

    DEFF Research Database (Denmark)

    Boldea, Ion; Tutelea, Lucian; Rallabandi, Vandana

    2017-01-01

    The electric generator and its power electronics interface for wind turbines (WTs) have evolved rapidly toward higher reliability and reduced cost of energy in the last 40 years. This chapter describes the up-to-date electric generators existing in the wind power industry, namely, the doubly fed...

  15. Large-scale motions in the universe: a review

    International Nuclear Information System (INIS)

    Burstein, D.

    1990-01-01

    The expansion of the universe can be retarded in localised regions within the universe both by the presence of gravity and by non-gravitational motions generated in the post-recombination universe. The motions of galaxies thus generated are called 'peculiar motions', and the amplitudes, size scales and coherence of these peculiar motions are among the most direct records of the structure of the universe. As such, measurements of these properties of the present-day universe provide some of the severest tests of cosmological theories. This is a review of the current evidence for large-scale motions of galaxies out to a distance of ~5000 km s^-1 (in an expanding universe, distance is proportional to radial velocity). 'Large-scale' in this context refers to motions that are correlated over size scales larger than the typical sizes of groups of galaxies, up to and including the size of the volume surveyed. To orient the reader into this relatively new field of study, a short modern history is given together with an explanation of the terminology. Careful consideration is given to the data used to measure the distances, and hence the peculiar motions, of galaxies. The evidence for large-scale motions is presented in a graphical fashion, using only the most reliable data for galaxies spanning a wide range in optical properties and over the complete range of galactic environments. The kinds of systematic errors that can affect this analysis are discussed, and the reliability of these motions is assessed. The predictions of two models of large-scale motion are compared to the observations, and special emphasis is placed on those motions in which our own Galaxy directly partakes. (author)

  16. Scaling theory of electric-field-assisted tunnelling

    Science.gov (United States)

    Michaels, Thomas C. T.; Cabrera, H.; Zanin, D. A.; De Pietro, L.; Ramsperger, U.; Vindigni, A.; Pescia, D.

    2014-01-01

    Recent experiments report the current (I) versus voltage (V) characteristics of a tunnel junction consisting of a metallic tip placed at a distance d from a planar electrode, with d varying over six orders of magnitude, from a few nanometres to a few millimetres. In the 'electric-field-assisted' (or 'field emission') regime, as opposed to the direct tunnelling regime used in conventional scanning tunnelling microscopy, all I-V curves are found to collapse onto one single graph when d is suitably rescaled, suggesting that the current I = I(V, d) is in reality a generalized homogeneous function of one single variable, i.e. I = I(V·d^-λ), where λ is a characteristic exponent and I(x) is a scaling function. In this paper, we provide a comprehensive explanation, based on analytical arguments, numerical simulations and further experimental results, for the scaling behaviour, which we show to emerge for a variety of tip-plane geometries and which thus seems to be a general feature of electric-field-assisted tunnelling. PMID:25002824
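    The single-variable form I = I(V·d^-λ) can be tested numerically by rescaling the voltage axis for each tip-plane distance and checking whether the curves collapse onto one another. The sketch below does this for synthetic field-emission-like data generated from an assumed scaling function; the exponent, prefactors and functional form are placeholders, not the experimental values reported in the paper.

    ```python
    import numpy as np

    def synthetic_iv(d, v, lam=0.8, A=1.0, B=50.0):
        """Synthetic field-emission-like current obeying I = f(V * d**-lam)."""
        u = v * d ** (-lam)
        return A * u ** 2 * np.exp(-B / u)

    def collapse_spread(lam, curves):
        """Mean spread of log(I) across distances on a common rescaled-voltage grid."""
        grid = np.logspace(0.5, 1.5, 60)          # common grid in u = V * d**-lam
        logs = []
        for d, v, i in curves:
            u = v * d ** (-lam)
            logs.append(np.interp(grid, u, np.log(i)))
        return np.std(np.array(logs), axis=0).mean()

    if __name__ == "__main__":
        distances = [1e-8, 1e-6, 1e-4]            # metres, spanning several decades
        curves = []
        for d in distances:
            v = np.logspace(0, 2, 200) * d ** 0.8  # voltages giving overlapping u-ranges
            curves.append((d, v, synthetic_iv(d, v)))
        for lam in (0.6, 0.8, 1.0):
            print(f"lambda={lam:.1f}  collapse spread: {collapse_spread(lam, curves):.3f}")
    ```

    For the true exponent the spread vanishes, while mis-chosen exponents leave a visible residual spread; the same check can be applied to measured curves to estimate λ.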

  17. Random access in large-scale DNA data storage.

    Science.gov (United States)

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
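    The random-access idea (a file-specific primer acting as an address, with the payload carried at two bits per nucleotide) can be sketched in a few lines. This toy illustration ignores the error-correcting codes, synthesis constraints and primer-design rules that the real system depends on, and the primer sequences below are made up.

    ```python
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

    def encode(data: bytes, primer: str) -> str:
        """Map bytes to bases (2 bits per base) and prepend a file-specific primer."""
        bits = "".join(f"{byte:08b}" for byte in data)
        payload = "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))
        return primer + payload

    def decode(strand: str, primer: str) -> bytes:
        """Strip the primer and map bases back to bytes."""
        bits = "".join(BASE_TO_BITS[b] for b in strand[len(primer):])
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    if __name__ == "__main__":
        # Hypothetical per-file primers used as random-access addresses
        primers = {"fileA": "ACGTACGTACGTACGTACGT", "fileB": "TGCATGCATGCATGCATGCA"}
        pool = [encode(b"hello DNA storage", primers["fileA"]),
                encode(b"another file", primers["fileB"])]
        # "Random access": select only strands carrying fileA's primer, then decode
        selected = [s for s in pool if s.startswith(primers["fileA"])]
        print(decode(selected[0], primers["fileA"]))
    ```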

  18. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    Science.gov (United States)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low-cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65 m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize 23000 SN Ia events over the redshift interval 0.4 < z < 1.7. Destiny will be used in its third year as a high-resolution, wide-field imager to conduct a weak lensing survey covering >1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  19. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto

    2018-01-04

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  20. Updating Geospatial Data from Large Scale Data Sources

    Science.gov (United States)

    Zhao, R.; Chen, J.; Wang, D.; Shang, Y.; Wang, Z.; Li, X.; Ai, T.

    2011-08-01

    In the past decades, many geospatial databases have been established at national, regional and municipal levels around the world. Nowadays, it is widely recognized that keeping these established geospatial databases up to date is critical to their value, so more and more effort has been devoted to their continuous updating. Currently, there are two main types of methods for geospatial database updating: direct updating with remote sensing images or field survey materials, and indirect updating with other updated data, such as newly updated larger-scale data. The former method is the basis, because the update data sources of both methods ultimately derive from field surveying and remote sensing; the latter method is often more economical and faster. Therefore, after a larger-scale database is updated, the smaller-scale database should be updated correspondingly in order to keep multi-scale geospatial databases consistent. In this situation, it is very reasonable to apply map generalization technology to the process of geospatial database updating; this is recognized as one of the most promising methods of geospatial database updating, especially in a collaborative updating environment in terms of map scale, i.e., where databases at different scales are produced and maintained separately by organizations at different levels, as in China. This paper focuses on applying digital map generalization to the updating of geospatial databases from larger scales in the collaborative updating environment for SDI. The requirements for applying map generalization to spatial database updating are analyzed first, and a brief review of geospatial data updating based on digital map generalization is then given. Based on the requirements analysis and review, we analyze the key factors for implementing the updating of geospatial data from large scale, including technical

  1. Control for large scale demand response of thermostatic loads

    DEFF Research Database (Denmark)

    Totu, Luminita Cristiana; Leth, John; Wisniewski, Rafal

    2013-01-01

    Demand response is an important Smart Grid concept that aims at facilitating the integration of volatile energy resources into the electricity grid. This paper considers a residential demand response scenario and specifically looks into the problem of managing a large number of thermostat-based appliances with on/off operation. The objective is to reduce the consumption peak of a group of loads composed of both flexible and inflexible units. The power-flexible units are the thermostat-based appliances. We discuss a centralized, model predictive approach and a distributed structure with a randomized
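    A minimal version of the scenario (many on/off thermostatic loads plus a peak window in which switch-on requests are granted only with some probability) can be simulated as below. The first-order thermal model, dead-band thermostat and randomized deferral rule are generic textbook choices, not the centralized MPC or the specific randomized controller studied in the paper; all parameters are invented.

    ```python
    import numpy as np

    def simulate_tcls(n=1000, hours=24, dt_h=1/60, peak=(17, 20), allow_prob=0.3, seed=0):
        """Aggregate power of n thermostatic coolers with a randomized peak-shaving rule.

        Each unit follows a first-order thermal model with a dead-band thermostat.
        Inside the peak window, a unit asking to switch on is only granted permission
        with probability allow_prob, unless it hits a comfort-override temperature.
        """
        rng = np.random.default_rng(seed)
        R = rng.uniform(1.8, 2.2, n)      # thermal resistance [degC/kW]
        C = rng.uniform(1.8, 2.2, n)      # thermal capacitance [kWh/degC]
        P = 8.0                           # cooling power when on [kW]; also used as metered power
        T_amb, T_set, db = 32.0, 20.0, 1.0
        T = rng.uniform(T_set - db / 2, T_set + db / 2, n)
        on = rng.random(n) < 0.5
        steps = int(hours / dt_h)
        agg = np.zeros(steps)
        for k in range(steps):
            hour = (k * dt_h) % 24
            in_peak = peak[0] <= hour < peak[1]
            wants_on = T > T_set + db / 2
            must_on = T > T_set + db / 2 + 0.5        # comfort override
            wants_off = T < T_set - db / 2
            if in_peak:
                granted = rng.random(n) < allow_prob
                on = np.where(wants_on & (granted | must_on), True, on)
            else:
                on = np.where(wants_on, True, on)
            on = np.where(wants_off, False, on)
            # first-order thermal update (cooling when on)
            T = T + dt_h / (R * C) * (T_amb - T - on * R * P)
            agg[k] = on.sum() * P
        return agg

    if __name__ == "__main__":
        base = simulate_tcls(allow_prob=1.0)   # no demand-response intervention
        dr = simulate_tcls(allow_prob=0.3)     # randomized deferral during the peak
        print(f"peak without DR: {base.max():.0f} kW, with DR: {dr.max():.0f} kW")
    ```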

  2. Electric Control Substituting Pitch Control for Large Wind Turbines

    Directory of Open Access Journals (Sweden)

    Jon Kjellin

    2013-01-01

    The turbine has fixed pitch and is controlled only electrically, aided by passive stall of the blades. By electrically controlling the generator rotational speed with the inverter, passive stall regulation is enabled. The first results of experimental verification of stall regulation in gusty wind speeds are presented. The experiments show that the control system can keep the turbine rotational speed constant even in very gusty winds. It is concluded that electrical control combined with passive stall is sufficient to control the wind turbine even at high wind speeds and can substitute for mechanical control such as blade pitch.

  3. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large scale measurement channel allows the processing of the signal coming from a single neutron sensor in three different operating modes: pulse, fluctuation and current. The study described in this note includes three parts: - A theoretical study of the large scale channel and a brief description of it are given, and the results obtained so far in that domain are presented. - The fluctuation mode is studied thoroughly and the improvements to be made are defined. The study of a linear fluctuation channel with automatic switching of scales is described and the results of the tests are given. In this large scale channel, the data processing method is analogue. - To become independent of the problems generated by the use of analogue processing of the fluctuation signal, a digital data processing method is tested. The validity of that method is assessed. The results obtained on a test system built according to this method are given and a preliminary plan for further research is defined [fr]

  4. Primordial quantum nonequilibrium and large-scale cosmic anomalies

    Science.gov (United States)

    Colin, Samuel; Valentini, Antony

    2015-08-01

    We study incomplete relaxation to quantum equilibrium at long wavelengths, during a preinflationary phase, as a possible explanation for the reported large-scale anomalies in the cosmic microwave background. Our scenario makes use of the de Broglie-Bohm pilot-wave formulation of quantum theory, in which the Born probability rule has a dynamical origin. The large-scale power deficit could arise from incomplete relaxation for the amplitudes of the primordial perturbations. We show, by numerical simulations for a spectator scalar field, that if the preinflationary era is radiation dominated then the deficit in the emerging power spectrum will have a characteristic shape (an inverse-tangent dependence on wave number k , with oscillations). It is found that our scenario is able to produce a power deficit in the observed region and of the observed (approximate) magnitude for an appropriate choice of cosmological parameters. We also discuss the large-scale anisotropy, which might arise from incomplete relaxation for the phases of the primordial perturbations. We present numerical simulations for phase relaxation, and we show how to define characteristic scales for amplitude and phase nonequilibrium. The extent to which the data might support our scenario is left as a question for future work. Our results suggest that we have a potentially viable model that might explain two apparently independent cosmic anomalies by means of a single mechanism.

  5. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Caulfield, Emmet [Stanford Univ., CA (United States)]; Gerritsen, Margot [Stanford Univ., CA (United States)]; Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States)]; Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)]

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
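    The core mechanics of such a siting tool (combining user-weighted, normalised criterion rasters into a single suitability score and masking out excluded areas) fit in a few lines. The sketch below uses random rasters and made-up criterion names and weights purely to show the mechanism; it is not the tool described in the report.

    ```python
    import numpy as np

    def suitability(criteria, weights, exclusion_mask):
        """Weighted-sum multi-criteria suitability score on a raster grid.

        criteria       : dict name -> 2-D array, each normalised to [0, 1]
                         (1 = best, e.g. high solar resource, low slope).
        weights        : dict name -> user-defined weight (same keys).
        exclusion_mask : boolean 2-D array, True where development is excluded.
        """
        total_w = sum(weights.values())
        score = sum(weights[k] * criteria[k] for k in criteria) / total_w
        return np.where(exclusion_mask, np.nan, score)

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        shape = (100, 100)                    # toy 100 x 100 cell study area
        criteria = {
            "solar_resource": rng.random(shape),
            "slope_flatness": rng.random(shape),
            "grid_proximity": rng.random(shape),
        }
        weights = {"solar_resource": 0.5, "slope_flatness": 0.2, "grid_proximity": 0.3}
        excluded = rng.random(shape) < 0.15   # e.g. protected areas, water bodies
        score = suitability(criteria, weights, excluded)
        best = np.unravel_index(np.nanargmax(score), shape)
        print("best candidate cell:", best, "score:", round(float(score[best]), 3))
    ```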

  6. Large Scale Emerging Properties from Non Hamiltonian Complex Systems

    Directory of Open Access Journals (Sweden)

    Marco Bianucci

    2017-06-01

    The concept of 'large scale' depends obviously on the phenomenon we are interested in. For example, in the field of the foundation of thermodynamics from microscopic dynamics, the relevant large spatial and time scales are of the order of fractions of millimetres and of microseconds, respectively, or less, and are defined in relation to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics problems, the scales of interest are of the order of thousands of kilometres for space and many years for time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all these cases a Zwanzig projection approach is, at least in principle, an effective tool to obtain a class of universal smooth 'large scale' dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many degrees of freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundation of thermodynamics problems. However, in geophysical fluid dynamics, biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach also in these cases, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).

  7. Solving large scale structure in ten easy steps with COLA

    International Nuclear Information System (INIS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-01-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed

  8. Scaled-energy spectroscopy of argon atoms in an electric field

    Energy Technology Data Exchange (ETDEWEB)

    Keeler, M L; Flores-Rueda, H; Wright, J D; Morgan, T J [Physics Department, Wesleyan University, Middletown, CT 06457 (United States)

    2004-02-28

    We have measured the scaled-energy absorption spectra of argon Rydberg states in a uniform electric field. The experiment utilized laser spectroscopy from the metastable 4s[3/2]_2 state in a fast atom beam. The scaled absorption spectra are Fourier transformed to obtain recurrence spectra that allow semiclassical analysis. Argon recurrence spectra reveal a bias towards populating downhill-oriented classical trajectories and show large-scale modulations. A quantum calculation indicates that the asymmetric trajectory distribution is caused by the large p-quantum defect and that d-f excitation from the initial metastable state is influential in shaping the observed oscillations in the recurrence maps. The analysis serves to highlight the correspondence between quantum oscillator strengths and classical orbits.

  9. Robust regression for large-scale neuroimaging studies.

    Science.gov (United States)

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
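
    As a hedged illustration of why robust regression helps when residuals deviate from normality, the sketch below contrasts ordinary least squares with Huber regression on synthetic data containing outliers; it uses scikit-learn's HuberRegressor as a stand-in and is not the exact estimator, analytic test, or RPBI procedure used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(42)
n = 400
X = rng.normal(size=(n, 1))                          # e.g. one behavioural covariate
y = 2.0 * X[:, 0] + rng.normal(scale=1.0, size=n)    # true slope = 2
outliers = rng.choice(n, size=20, replace=False)
y[outliers] += rng.normal(scale=15.0, size=20)       # heavy-tailed artifacts

ols = LinearRegression().fit(X, y)
huber = HuberRegressor().fit(X, y)                   # down-weights large residuals

print("OLS slope:  ", round(ols.coef_[0], 3))
print("Huber slope:", round(huber.coef_[0], 3))      # typically closer to 2
```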

  10. Large-scale integration of wind power into different energy systems

    DEFF Research Database (Denmark)

    Lund, Henrik

    2005-01-01

    The paper presents the ability of different energy systems and regulation strategies to integrate wind power. The ability is expressed by the following three factors: the degree of electricity excess production caused by fluctuations in wind and Combined Heat and Power (CHP) heat demands......, the ability to utilise wind power to reduce CO2 emission in the system, and the ability to benefit from exchange of electricity on the market. Energy systems and regulation strategies are analysed in the range of a wind power input from 0 to 100% of the electricity demand. Based on the Danish energy system...... of the analyses make it possible to compare short-term and long-term potentials of different strategies of large-scale integration of wind power....

  11. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as “reduce then sample” and “sample then reduce.” In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  12. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to scale up dramatically nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state of the art approaches.
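
    The talk abstract does not spell out its algorithm; one classical way to scale up kernel methods in the regularization spirit described is Nyström subsampling combined with ridge (Tikhonov) regression. The scikit-learn pipeline below is purely illustrative and is not the speaker's implementation.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5000, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.normal(size=len(X))

# Nystroem builds a low-rank approximation of the RBF kernel from a random
# subset of landmark points; ridge then plays the role of the regularizer.
model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.5, n_components=200, random_state=0),
    Ridge(alpha=1e-3),
)
model.fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```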

  13. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.
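
    Giraph programs are written in Java against its Computation API; the plain-Python sketch below only illustrates the vertex-centric, message-passing model the book teaches (each vertex receives messages, updates its value, and sends messages over supersteps until all vertices halt), using connected-component labelling as the example. It is not Giraph code.

```python
def connected_components(adjacency):
    """Pregel-style supersteps: every vertex repeatedly adopts the smallest
    vertex id it has heard of and forwards changes to its neighbours."""
    value = {v: v for v in adjacency}                 # initial value = own id
    messages = {v: [] for v in adjacency}
    for v, neighbours in adjacency.items():           # superstep 0: announce own id
        for u in neighbours:
            messages[u].append(v)
    while any(messages.values()):                     # run until all vertices halt
        new_messages = {v: [] for v in adjacency}
        for v, inbox in messages.items():
            if not inbox:
                continue                              # inactive vertex this superstep
            smallest = min(inbox)
            if smallest < value[v]:                   # compute(): update and re-send
                value[v] = smallest
                for u in adjacency[v]:
                    new_messages[u].append(smallest)
        messages = new_messages
    return value

graph = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3], 5: []}
print(connected_components(graph))   # {0: 0, 1: 0, 2: 0, 3: 3, 4: 3, 5: 5}
```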

  14. Effects of fluid flow on heat transfer in large rotating electrical machines

    International Nuclear Information System (INIS)

    Lancial, Nicolas

    2014-01-01

    EDF operates a large number of electrical rotating machines in its electricity generation capacity. The thermal stresses which affect them can cause local heating, sufficient to damage their integrity. The present work contributes methodologies for detecting hot spots in these machines, better understanding the topology of rotating flows and identifying their effects on heat transfer. Several experimental scale models of increasing complexity were used to understand and validate the numerical simulations. A first study on a turbulent wall jet over a non-confined backward-facing step (half-pole of a hydro-generator) notes significant differences compared to results from the confined case; both configurations are present in a hydro-generator. A second study was carried out on a small confined rotating scale model to determine the effects of a Taylor-Couette-Poiseuille flow on the temperature distribution and the position of hot spots on the heated rotor, by studying the overall flow regimes. These studies have helped to obtain a reliable method based on conjugate heat transfer (CHT) simulations. Another method, based on FEM coupled with the use of an inverse method, has been studied on a large model of a hydraulic generator so as to solve the computation time issue of the first methodology. It numerically calculates the convective heat transfer from temperature measurements, but depends on the availability of experimental data. This work has also developed new non-contact measurement techniques, such as the use of a high-frequency pyrometer, which can be applied to rotating machines for monitoring temperature. (author)

  15. Prospects for investment in large-scale, grid-connected solar power in Africa

    DEFF Research Database (Denmark)

    Hansen, Ulrich Elmer; Nygaard, Ivan; Pedersen, Mathilde Brix

    Solar power in Africa is on its way to becoming a market-based commodity, thus escaping the niche for individual electricity supply that is mainly supported by international donor organisations. Significant reductions in the cost of photovoltaic (PV) panels and a 400 percent increase in oil prices......-scale investments in grid-connected solar power plants and local assembly facilities for PV panels, have exceeded even optimistic scenarios. Finally, therefore, there seem to be bright prospects for investment in large-scale grid-connected solar power in Africa....... since the 1990s have changed the competiveness of solar PV in all markets, ranging from individual households via institutions to mini-grids and grid-connected installations. In volume and investment, the market for large-scale grid-connected solar power plants is by far the most important...

  16. Large-scale synthesis of copper sulfide by using elemental sources via simple chemical route.

    Science.gov (United States)

    Mulla, Rafiq; Rabinal, M K

    2017-11-01

    Copper sulfide is a low-cost and non-toxic material which is very attractive and promising for various applications. There is a need for large-scale production of this material by simple methods. Here, a simple ambient-condition method is proposed for the large-scale preparation of copper sulfide. The synthesis is carried out at room temperature using an ultrasonication method in which the elemental precursors, copper and sulfur, are used directly. The present method gives gram-scale synthesis with high yield in a short period of time. The materials are characterized by different techniques, and their electrical conductivity and Seebeck coefficient are also measured and analyzed. The present method is one of the simplest ways of producing copper sulfide at room temperature. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Contribution to the study of scaling mechanisms. Application to electric anti-scaling apparatus

    International Nuclear Information System (INIS)

    Le Duigou, Alain

    1982-01-01

    In order to study scaling mechanisms precisely, this research thesis first aims at a deeper understanding of natural waters and their equilibria by developing the Legrand and Poirier graphical method, and then studies the conditions for obtaining electrolytically deposited products on metal substrates whose surface condition allows better monitoring of the first stages of the phenomenon. The author also addresses the determination of an operating principle for electrical anti-scaling systems, and the development of a test method for assessing their efficiency. The author finally identifies some rules allowing this efficiency to be improved in the case of natural waters.

  18. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  19. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted...... locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated...
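
    A minimal sketch of the "segment = outliers of the background distribution" idea, assuming a Gaussian background estimated robustly from the image itself; the paper's actual hypothesis-testing and threshold-selection procedure is not reproduced here.

```python
import numpy as np
from scipy import stats

def segment_as_outliers(image, alpha=1e-3):
    """Label pixels whose intensity is improbably high under a background
    model estimated robustly (median / MAD) from the image itself."""
    pixels = image.ravel()
    mu = np.median(pixels)
    sigma = 1.4826 * np.median(np.abs(pixels - mu))       # MAD -> std for a Gaussian
    p_values = stats.norm.sf(image, loc=mu, scale=sigma)  # one-sided upper tail
    return p_values < alpha                               # True = segment of interest

# Synthetic example: noisy background with one bright square object.
rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, size=(64, 64))
img[20:30, 20:30] += 40.0
mask = segment_as_outliers(img)
print("segmented pixels:", int(mask.sum()), "(true object size is 100)")
```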

  20. The economics and environmental impacts of large-scale wind power in a carbon constrained world

    Science.gov (United States)

    Decarolis, Joseph Frank

    Serious climate change mitigation aimed at stabilizing atmospheric concentrations of CO2 will require a radical shift to a decarbonized energy supply. The electric power sector will be a primary target for deep reductions in CO2 emissions because electric power plants are among the largest and most manageable point sources of emissions. With respect to new capacity, wind power is currently one of the most inexpensive ways to produce electricity without CO2 emissions and it may have a significant role to play in a carbon constrained world. Yet most research in the wind industry remains focused on near term issues, while energy system models that focus on century-long time horizons undervalue wind by imposing exogenous limits on growth. This thesis fills a critical gap in the literature by taking a closer look at the cost and environmental impacts of large-scale wind. Estimates of the average cost of wind generation (now roughly 4¢/kWh) do not address the cons arising from the spatial distribution and intermittency of wind. This thesis develops a theoretical framework for assessing the intermittency cost of wind. In addition, an economic characterization of a wind system is provided in which long-distance electricity transmission, storage, and gas turbines are used to supplement variable wind power output to meet a time-varying load. With somewhat optimistic assumptions about the cost of wind turbines, the use of wind to serve 50% of demand adds ~1-2¢/kWh to the cost of electricity, a cost comparable to that of other large-scale low carbon technologies. This thesis also explores the environmental impacts posed by large-scale wind. Though avian mortality and noise caused controversy in the early years of wind development, improved technology and exhaustive siting assessments have minimized their impact. The aesthetic valuation of wind farms can be improved significantly with better design, siting, construction, and maintenance procedures, but opposition may

  1. Large scale EMF in current sheets induced by tearing modes

    Science.gov (United States)

    Mizerski, Krzysztof A.

    2018-02-01

    An extension of the analysis of resistive instabilities of a sheet pinch from the famous work by Furth et al (1963 Phys. Fluids 6 459) is presented here, to study the mean electromotive force (EMF) generated by the developing instability. In a Cartesian configuration and in the presence of a current sheet, first the boundary layer technique is used to obtain global, matched asymptotic solutions for the velocity and magnetic field, and then the solutions are used to calculate the large-scale EMF in the system. It is reported that in the bulk the curl of the mean EMF is linear in j_0 · B_0, a simple pseudo-scalar quantity constructed from the large-scale quantities.

  2. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A sequential quadratic programming method (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...... structure of the problem. In both solvers, information of the exact Hessian is considered. A robust iterative method is implemented to efficiently solve large-scale linear systems. Both TopSQP and TopIP have successful results in terms of convergence, number of iterations, and objective function values....... Thanks to the use of the iterative method implemented, TopIP is able to solve large-scale problems with more than three million degrees of freedom....

  3. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    Full Text Available This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  4. Volume measurement study for large scale input accountancy tank

    International Nuclear Information System (INIS)

    Uchikoshi, Seiji; Watanabe, Yuichi; Tsujino, Takeshi

    1999-01-01

    The Large Scale Tank Calibration (LASTAC) facility, including an experimental tank which has the same volume and structure as the input accountancy tank of the Rokkasho Reprocessing Plant (RRP), was constructed at the Nuclear Material Control Center of Japan. Demonstration experiments have been carried out to evaluate the precision of solution volume measurement and to establish a procedure for highly accurate pressure measurement in a large scale tank with a dip-tube bubbler probe system, to be applied to the input accountancy tank of RRP. The solution volume in a tank is determined by substituting the solution level into the calibration function obtained in advance, which expresses the relation between the solution level and its volume in the tank. Therefore, precise solution volume measurement needs a precise calibration function that is determined carefully. The LASTAC calibration experiments using pure water showed good reproducibility. (J.P.N.)

  5. Model for large scale circulation of nuclides in nature, 1

    Energy Technology Data Exchange (ETDEWEB)

    Ohnishi, Teruaki

    1988-12-01

    A model for the large scale circulation of nuclides was developed, and a computer code named COCAIN was written which simulates this circulation system dynamically. The natural environment considered in the present paper consists of 2 atmospheres, 8 geospheres and 2 lithospheres. The biosphere is composed of 4 types of edible plants, 5 types of cattle and their products, 4 water biota and 16 human organs. The biosphere is assumed to receive nuclides from the natural environment mentioned above. With the use of COCAIN, two numerical case studies were carried out: one on nuclear pollution in nature by the radioactive nuclides originating from past nuclear bomb tests, and the other on the response of the environment and biota to a pulse injection of nuclides into one compartment. From the former case study it was verified that this model can explain the observations well and properly simulate the large scale circulation of nuclides in nature.

  6. Performance of Grey Wolf Optimizer on large scale problems

    Science.gov (United States)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    For solving nonlinear continuous optimization problems, numerous nature-inspired optimization techniques have been proposed in the literature; these can be applied to real-life problems where conventional techniques cannot be used. The Grey Wolf Optimizer is one such technique, and it has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large scale optimization problems. The algorithm is implemented on 5 common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large scale problems, except on Rosenbrock, which is a unimodal function.
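
    As a hedged, minimal NumPy sketch of the standard Grey Wolf Optimizer update equations used in such studies (shown here only on the Sphere function; this is not the authors' code, and parameter choices are illustrative):

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=30, n_iter=500, seed=0):
    """Standard Grey Wolf Optimizer: positions are pulled towards the three
    best wolves (alpha, beta, delta) with a coefficient a decaying from 2 to 0."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    best_x, best_f = None, np.inf
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, X)
        order = np.argsort(fitness)
        if fitness[order[0]] < best_f:                   # keep the best solution seen
            best_x, best_f = X[order[0]].copy(), float(fitness[order[0]])
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - t / n_iter)                     # decays linearly from 2 to 0
        new_X = np.empty_like(X)
        for i in range(n_wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])
                candidates.append(leader - A * D)
            new_X[i] = np.mean(candidates, axis=0)       # average of the three pulls
        X = np.clip(new_X, lo, hi)
    return best_x, best_f

sphere = lambda x: float(np.sum(x ** 2))
best, value = gwo(sphere, dim=50, bounds=(-100.0, 100.0))
print("best Sphere value found:", value)
```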

  7. Energy-efficient electrical machines by new materials. Superconductivity in large electrical machines

    International Nuclear Information System (INIS)

    Frauenhofer, Joachim; Arndt, Tabea; Grundmann, Joern

    2013-01-01

    The implementation of superconducting materials in high-power electrical machines results in significant advantages regarding efficiency, size and dynamic behavior when compared to conventional machines. The application of HTS (high-temperature superconductors) in electrical machines allows significantly higher power densities to be achieved for synchronous machines. In order to gain experience with the new technology, Siemens carried out a series of development projects. A 400 kW model motor for the verification of a concept for the new technology was followed by a 4000 kVA generator as a high-speed machine, as well as by a low-speed 4000 kW propeller motor with high torque. The 4000 kVA generator is still employed to carry out long-term tests and to check components. Superconducting machines have significantly lower weight and envelope dimensions compared to conventional machines, and for this reason alone, they utilize resources better. At the same time, operating losses are slashed to about half and the efficiency increases. Beyond this, they set themselves apart as a result of their special features in operation, such as high overload capability, stiff alternating load behavior and low noise. HTS machines provide significant advantages where the reduction of footprint, weight and losses or the improved dynamic behavior results in significant improvements of the overall system. Propeller motors and generators for ships, offshore plants, wind turbine and hydroelectric plants and large power stations are just some examples. HTS machines can therefore play a significant role when it comes to efficiently using resources and energy as well as reducing CO2 emissions.

  8. Investigation of the large scale regional hydrogeological situation at Ceberg

    International Nuclear Information System (INIS)

    Boghammar, A.; Grundfelt, B.; Hartley, L.

    1997-11-01

    The present study forms part of the large-scale groundwater flow studies within the SR 97 project. The site of interest is Ceberg. Within the present study two different regional scale groundwater models have been constructed: one large regional model with an areal extent of about 300 km² and one semi-regional model with an areal extent of about 50 km². Different types of boundary conditions have been applied to the models: topography-driven pressures, constant infiltration rates, non-linear infiltration combined with specified-pressure boundary conditions, and transfer of groundwater pressures from the larger model to the semi-regional model. The present model has shown that: - Groundwater flow paths are mainly local. Large-scale groundwater flow paths are only seen below the depth of the hypothetical repository (below 500 meters) and are very slow. - Locations of recharge and discharge, to and from the site area, are in the close vicinity of the site. - The low contrast between major structures and the rock mass means that the factor having the major effect on the flow paths is the topography. - A model sufficiently large to incorporate the recharge and discharge areas of the local site is of the order of kilometres in extent. - A uniform infiltration rate boundary condition does not give a good representation of the groundwater movements in the model. - A local site model may be located to cover the site area and a few kilometers of the surrounding region. In order to incorporate all recharge and discharge areas within the site model, the model will be somewhat larger than site scale models at other sites. This is caused by the fact that the discharge areas are divided into three distinct areas to the east, south and west of the site. - Boundary conditions may be supplied to the site model by means of transferring groundwater pressures obtained with the semi-regional model

  9. System Recovery in Large-Scale Distributed Storage Systems

    OpenAIRE

    Aga, Svein

    2008-01-01

    This report aims to describe and improve a system recovery process in large-scale storage systems. Inevitably, a recovery process results in the system being loaded with internal replication of data, and will extensively utilize several storage nodes. Such internal load can be categorized and generalized into a maintenance workload class. Obviously, a storage system will have external clients which also introduce load into the system. This can be users altering their data, uploading new cont...

  10. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  11. Large-Scale Physical Separation of Depleted Uranium from Soil

    Science.gov (United States)

    2012-09-01

    unweathered depleted uranium rods illustrating the formation of uranyl oxides and salts. Unfired penetrator rods can range from 10 to 50 cm in length...specific area ratio (as thin sections, fine particles, or molten states). Uranium in finely divided form is prone to ignition. Uranium also has an... (ERDC/EL TR-12-25, Army Range Technology Program)

  12. NASA: Assessments of Selected Large-Scale Projects

    Science.gov (United States)

    2011-03-01

    Selected Large-Scale Projects. Common Name: Orion. Project Update: The President proposed cancellation of the Constellation Program, including the Orion ...fiscal year 2010. NASA remains poised to leverage Constellation assets to contribute to future exploration beyond low-Earth orbit. Orion Crew... Projects assessed include ...Observatory 2 (OCO-2), Orion Crew Exploration Vehicle, Radiation Belt Storm Probes (RBSP), and Soil Moisture Active and Passive (SMAP).

  13. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng

    2017-06-20

    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.

  14. How Large-Scale Research Facilities Connect to Global Research

    DEFF Research Database (Denmark)

    Lauto, Giancarlo; Valentin, Finn

    2013-01-01

    Policies for large-scale research facilities (LSRFs) often highlight their spillovers to industrial innovation and their contribution to the external connectivity of the regional innovation system hosting them. Arguably, the particular institutional features of LSRFs are conducive for collaborati...... with domestic universities or government laboratories. Policies conceiving LSRFs as “knowledge attractors” therefore should consider the complementarities between research at a LSRF and in its academic context at a regional or national level....

  15. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    Science.gov (United States)

    1985-10-07

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA (report documentation page; no abstract recoverable).

  16. Large-scale prediction of drug-target relationships

    DEFF Research Database (Denmark)

    Kuhn, Michael; Campillos, Mónica; González, Paula

    2008-01-01

    , but also provides a more global view on drug-target relations. Here we review recent attempts to apply large-scale computational analyses to predict novel interactions of drugs and targets from molecular and cellular features. In this context, we quantify the family-dependent probability of two proteins...... to bind the same ligand as function of their sequence similarity. We finally discuss how phenotypic data could help to expand our understanding of the complex mechanisms of drug action....

  17. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of the fragmentation rate. (author)

  18. Design study on sodium cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2, design improvements aimed at further cost reduction and at establishing the plant concept have been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity and thermal hydraulics identified in the previous fiscal year were examined and the plant concept was modified. Furthermore, fundamental specifications of the main systems and components have been set and the economics have been evaluated. In addition, as the interim evaluation of the candidate concepts of the FBR fuel cycle is to be conducted, cost effectiveness and achievability of the development goal were evaluated and the data for the three large-scale reactor candidate concepts were prepared. As a result of this study, the plant concept of the sodium-cooled large-scale reactor has been established; it has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the narrowing-down of candidate concepts at the end of Phase 2. (author)

  19. Exploring the technical challenges of large-scale lifelogging

    OpenAIRE

    Gurrin, Cathal; Smeaton, Alan F.; Qiu, Zhengwei; Doherty, Aiden R.

    2013-01-01

    Ambiently and automatically maintaining a lifelog is an activity that may help individuals track their lifestyle, learning, health and productivity. In this paper we motivate and discuss the technical challenges of developing real-world lifelogging solutions, based on seven years of experience. The gathering, organisation, retrieval and presentation challenges of large-scale lifelogging are discussed and we show how this can be achieved and the benefits that may accrue.

  20. Accuracy control in ultra-large-scale electronic structure calculation

    OpenAIRE

    Hoshi, Takeo

    2007-01-01

    Numerical aspects are investigated in ultra-large-scale electronic structure calculation. The focus is on accuracy control methods in process (molecular-dynamics) calculations. Flexible control methods are proposed so as to control variational freedoms, automatically at each time step, within the framework of generalized Wannier state theory. The method is demonstrated in a silicon cleavage simulation with 10^2-10^5 atoms. The idea is of general importance among process calculations and is also used...

  1. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    Science.gov (United States)

    2015-09-30

    ...sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over...develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse

  2. Domain nesting for multi-scale large eddy simulation

    Science.gov (United States)

    Fuka, Vladimir; Xie, Zheng-Tong

    2016-04-01

    The need to simulate city scale areas (O(10 km)) with high resolution within street canyons in certain areas of interest necessitates different grid resolutions in different parts of the simulated area. General purpose computational fluid dynamics codes typically employ unstructured refined grids while mesoscale meteorological models more often employ nesting of computational domains. ELMM is a large eddy simulation model for the atmospheric boundary layer. It employs orthogonal uniform grids and for this reason domain nesting was chosen as the approach for simulations at multiple scales. Domains are implemented as sets of MPI processes which communicate with each other as in a normal non-nested run, but also with processes from another (outer/inner) domain. It should be stressed that the time-step solution in the outer and the inner domain must be synchronized, so that the processes do not have to wait for the completion of their boundary conditions. This can be achieved by assigning an appropriate number of CPUs to each domain, so as to obtain high efficiency. When nesting is applied for large eddy simulation, the inner domain receives inflow boundary conditions which lack the turbulent motions not represented by the outer grid. ELMM remedies this by optionally adding turbulent fluctuations to the inflow using the efficient method of Xie and Castro (2008). The spatial scale of these fluctuations is in the subgrid-scale of the outer grid and their intensity is estimated from the subgrid turbulent kinetic energy of the outer grid.

  3. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    Science.gov (United States)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dünner, R.; Essinger-Hileman, T.; Eimer, J.; Fluxa, P.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G.; Hubmayr, J.; Iuliano, J.; Marriage, T. A.; Miller, N.; Moseley, S. H.; Mumby, G.; Petroff, M.; Reintsema, C.; Rostem, K.; U-Yen, K.; Watts, D.; Wagner, E.; Wollack, E. J.; Xu, Z.; Zeng, L.

    2016-08-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe ˜ 70 % of the sky. A variable-delay polarization modulator provides modulation of the polarization at ˜ 10 Hz to suppress the 1/ f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  4. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    Science.gov (United States)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dunner, R.; Essinger-Hileman, T.; Eimer, J.; et al.

    2015-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe approx. 70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at approx. 10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  5. DEMNUni: massive neutrinos and the bispectrum of large scale structures

    Science.gov (United States)

    Ruggeri, Rossana; Castorina, Emanuele; Carbone, Carmelita; Sefusatti, Emiliano

    2018-03-01

    The main effect of massive neutrinos on the large-scale structure consists in a few percent suppression of matter perturbations on all scales below their free-streaming scale. Such an effect is of particular importance as it allows one to constrain the value of the sum of neutrino masses from measurements of the galaxy power spectrum. In this work, we present the first measurements of the next higher-order correlation function, the bispectrum, from N-body simulations that include massive neutrinos as particles. This is the simplest statistic characterising the non-Gaussian properties of the matter and dark matter halo distributions. We investigate, in the first place, the suppression due to massive neutrinos on the matter bispectrum, comparing our measurements with the simplest perturbation theory predictions, and find that the approximation of neutrinos contributing at quadratic order in perturbation theory provides a good fit to the measurements in the simulations. On the other hand, as expected, a linear approximation for neutrino perturbations would lead to O(f_ν) errors on the total matter bispectrum at large scales. We then attempt an extension of previous results on the universality of linear halo bias in neutrino cosmologies to non-linear and non-local corrections, finding results consistent with the power spectrum analysis.

  6. Ecohydrological modeling for large-scale environmental impact assessment.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show that, we adopted an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523 to the visual domain by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli being of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in human visual sensory system, revealed that visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity being present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.

  8. Learning Short Binary Codes for Large-scale Image Retrieval.

    Science.gov (United States)

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually of code length shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparable performance to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
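
    A rough sketch of the general recipe the abstract describes, namely one candidate bit per data dimension followed by ranking and selection of the cheapest bits; the simple balance-based score below is only a stand-in for the actual MCR cost function, which is not reproduced here.

```python
import numpy as np

def short_binary_codes(X, n_bits=32):
    """Per-dimension thresholding at the median gives one candidate bit per
    dimension; bits are then ranked by a stand-in 'discriminability' score
    (how balanced each bit is) and only the top n_bits are kept."""
    medians = np.median(X, axis=0)
    bits = (X > medians).astype(np.uint8)          # one candidate bit per dimension
    balance = bits.mean(axis=0)                    # fraction of ones per bit
    score = balance * (1.0 - balance)              # 0.25 for a perfect 50/50 split
    keep = np.argsort(-score)[:n_bits]             # top-ranked (lowest "cost") bits
    return bits[:, keep], keep

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 512))                 # e.g. 512-D image descriptors
codes, selected_dims = short_binary_codes(X, n_bits=32)
print(codes.shape)                                 # (10000, 32) -> 32-bit codes
```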

  9. Survey of large-scale isotope applications: nuclear technology field

    Energy Technology Data Exchange (ETDEWEB)

    Dewitt, R.

    1977-01-21

    A preliminary literature survey of potential large-scale isotope applications was made according to topical fields, i.e., nuclear, biological, medical, environmental, agricultural, geological, and industrial. Other than the possible expansion of established large-scale isotope applications such as uranium, boron, lithium, and hydrogen, no new immediate isotope usage appears to be developing. Over the long term a change in emphasis for isotope applications was identified which appears to be more responsive to societal concerns for health, the environment, and the conservation of materials and energy. For gram-scale applications, a variety of isotopes may be required for use as nonradioactive 'activable' tracers. A more detailed survey of the nuclear field identified a potential need for large amounts (tons) of special isotopic materials for advanced reactor components and structures. As this need for special materials grows and the development of efficient separation methods progresses, the utilization of isotopes from nuclear wastes for beneficial uses should also progress.

  10. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng

    2013-10-03

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
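
    The toy sketch below only illustrates the rank-1, leading-eigenvector-driven update the abstract describes, applied to a simple trace-constrained problem (minimize 0.5*||X - T||_F^2 over PSD matrices with trace 1) with an exact line search on the mixing coefficient; BILGO's bilateral coefficient optimization, KKT checks, and low-rank extensions are richer than this, and for truly large problems an iterative eigensolver would replace the dense eigendecomposition used here.

```python
import numpy as np

def bilateral_rank1_sdp(T, n_iter=200):
    """Rank-1 update scheme on the spectrahedron {X PSD, trace(X) = 1}:
    each iteration mixes the current X with v v^T, where v is the leading
    eigenvector of the descent direction (-gradient), using an exact line
    search for the quadratic objective 0.5 * ||X - T||_F^2."""
    n = T.shape[0]
    X = np.eye(n) / n                                   # feasible starting point
    for _ in range(n_iter):
        grad = X - T
        # leading eigenvector of -grad (large-scale codes would use Lanczos)
        _, V = np.linalg.eigh(-grad)
        v = V[:, -1]
        D = np.outer(v, v) - X                          # move towards the rank-1 atom
        denom = np.sum(D * D)
        if denom < 1e-12:
            break
        alpha = np.clip(-np.sum(grad * D) / denom, 0.0, 1.0)   # exact line search
        X = X + alpha * D
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
T = (A + A.T) / 2                                       # arbitrary symmetric target
X = bilateral_rank1_sdp(T)
print("trace:", round(float(np.trace(X)), 3),
      "min eigenvalue:", round(float(np.linalg.eigvalsh(X)[0]), 6))
```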

  11. Large-scale impact cratering on the terrestrial planets

    International Nuclear Information System (INIS)

    Grieve, R.A.F.

    1982-01-01

    The crater densities on the earth and moon form the basis for a standard flux-time curve that can be used in dating unsampled planetary surfaces and constraining the temporal history of endogenic geologic processes. Abundant evidence is seen not only that impact cratering was an important surface process in planetary history but also that large impact events produced effects that were crustal in scale. By way of example, it is noted that the formation of multiring basins on the early moon was as important in defining the planetary tectonic framework as plate tectonics is on the earth. Evidence from several planets suggests that the effects of very-large-scale impacts go beyond the simple formation of an impact structure and serve to localize increased endogenic activity over an extended period of geologic time. Even though they no longer occur with the frequency and magnitude of early solar system history, large scale impact events continue to affect the local geology of the planets. 92 references

  12. Origin of the large scale structures of the universe

    International Nuclear Information System (INIS)

    Oaknin, David H.

    2004-01-01

    We revise the statistical properties of the primordial cosmological density anisotropies that, at the time of matter-radiation equality, seeded the gravitational development of large scale structures in the otherwise homogeneous and isotropic Friedmann-Robertson-Walker flat universe. Our analysis shows that random fluctuations of the density field at the same instant of equality and with comoving wavelength shorter than the causal horizon at that time can naturally account, when globally constrained to conserve the total mass (energy) of the system, for the observed scale invariance of the anisotropies over cosmologically large comoving volumes. Statistical systems with similar features are generically known as glasslike or latticelike. Obviously, these conclusions conflict with the widely accepted understanding of the primordial structures reported in the literature, which requires an epoch of inflationary cosmology to precede the standard expansion of the universe. The origin of the conflict must be found in the widespread, but unjustified, claim that scale invariant mass (energy) anisotropies at the instant of equality over comoving volumes of cosmological size, larger than the causal horizon at the time, must be generated by fluctuations in the density field with comparably large comoving wavelength

  13. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  14. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed for the GPU based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. By implementing the algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were each solved with the algorithm running on the GPU to test the performance of the package. A comparison of the results between the single-CPU solver and the GPU solver showed that the GPU version is up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
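
    The semi-implicit Fourier update referred to above can be illustrated with a minimal CPU-only sketch for the Allen-Cahn equation: the stiff gradient term is treated implicitly in Fourier space and the nonlinearity explicitly. The grid size, mobility, and gradient-energy coefficient below are illustrative assumptions, not values from the paper, and no GPU/CUDA code is reproduced here.

```python
import numpy as np

# Semi-implicit Fourier-spectral step for the Allen-Cahn equation
#   d(phi)/dt = -M * (df/dphi - kappa * laplacian(phi)),   f = 0.25*(phi^2 - 1)^2
N, dx, dt = 256, 1.0, 0.1          # grid points, spacing, time step (assumed values)
M, kappa = 1.0, 2.0                # mobility and gradient-energy coefficient (assumed)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2

phi = 0.1 * (2.0 * np.random.rand(N, N) - 1.0)   # small random initial condition

for step in range(1000):
    dfdphi = phi**3 - phi                        # derivative of the bulk free energy
    phi_hat = np.fft.fft2(phi)
    rhs_hat = phi_hat - dt * M * np.fft.fft2(dfdphi)          # explicit nonlinear part
    phi_hat = rhs_hat / (1.0 + dt * M * kappa * k2)           # implicit gradient term
    phi = np.real(np.fft.ifft2(phi_hat))
```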

  15. Large Scale Plasmonic nanoCones array For Spectroscopy Detection

    KAUST Repository

    Das, Gobind

    2015-09-24

    Advanced optical materials or interfaces are gaining attention for diagnostic applications. However, the difficulty of achieving large device interfaces and facile surface functionalization largely impairs their wide use. The present work addresses several innovative aspects related to the fabrication of large-area 3D plasmonic arrays, their direct and easy functionalization with capture elements, and their spectroscopic verification through enhanced Raman and enhanced fluorescence techniques. In detail, we have investigated the effect of an Au-based nanoCones array, fabricated by direct nanoimprinting over a large area (mm²), on protein capture and on the enhancement of the optical signal. A selective functionalization of gold surfaces was proposed using a peptide (AuPi3) previously selected by phage display. In this regard, two different sequences, labeled with fluorescein and biotin, were chemisorbed on the metallic surfaces. The Au nanoCones array provides an enhancement of the electric field at the apex of the cones, enabling the detection of molecules. We observed an approximately 12-fold increase in fluorescence intensity and a SERS enhancement factor of around 1.75 × 10^5 with respect to the flat gold surface. Furthermore, a sharp decrease in fluorescence lifetime over the nanoCones confirms the increase in radiative emission (i.e. an increase in the photonic density at the apex of the cones).

  16. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    Science.gov (United States)

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS items: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale showed performance equal to that of other scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
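
    A minimal sketch of how the PASS score described above could be computed from the three NIHSS items is given below. The binarization thresholds used here (any non-zero NIHSS sub-score counts as abnormal) are illustrative assumptions, and this is not a validated clinical tool.

```python
def pass_score(nihss_loc_questions: int, nihss_gaze: int, nihss_arm_worst: int) -> int:
    """Prehospital Acute Stroke Severity (PASS) score sketched from the abstract.

    Each of the three NIHSS-derived items contributes 1 point if abnormal:
      - level of consciousness questions (month/age)
      - gaze palsy/deviation
      - arm weakness (worse side)
    Treating any non-zero sub-score as abnormal is an assumption for illustration.
    """
    items = [nihss_loc_questions > 0, nihss_gaze > 0, nihss_arm_worst > 0]
    return sum(items)

def suspect_elvo(score: int, cutoff: int = 2) -> bool:
    """Apply the cut point (>= 2 abnormal items) reported in the abstract."""
    return score >= cutoff

# Example: LOC questions = 1, gaze deviation = 2, arm weakness = 3  ->  PASS = 3
print(suspect_elvo(pass_score(1, 2, 3)))   # True
```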

  17. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies.  An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy...... Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence,smart grids need suitable control systems that are able to continuously balance power production and consumption.  We apply the Economic Model Predictive Control (EMPC......) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...
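
    As a rough illustration of the economic balancing idea (not the controller formulated in the thesis), the sketch below dispatches two generic power units over a short horizon at minimum fuel cost while meeting a net load profile. Unit names, costs, limits, and the load profile are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Toy economic dispatch over a short horizon: two units must cover a net load
# profile at minimum cost.  All numbers are placeholder assumptions.
T = 6
load = np.array([40.0, 55.0, 70.0, 65.0, 50.0, 45.0])   # MW, net of renewables
cost = np.array([20.0, 35.0])                            # cost per MWh for each unit
pmax = np.array([50.0, 40.0])                            # MW capacity per unit

# Decision vector x = [u1(0..T-1), u2(0..T-1)]
c = np.concatenate([np.full(T, cost[0]), np.full(T, cost[1])])
A_eq = np.hstack([np.eye(T), np.eye(T)])                 # u1(t) + u2(t) = load(t)
bounds = [(0.0, pmax[0])] * T + [(0.0, pmax[1])] * T

res = linprog(c, A_eq=A_eq, b_eq=load, bounds=bounds, method="highs")
u = res.x.reshape(2, T)
print("unit 1 schedule:", u[0])
print("unit 2 schedule:", u[1])
```

    In a receding-horizon (MPC) setting this optimization would be re-solved at every time step with updated forecasts, and only the first step of the schedule would be applied.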

  18. An innovative large scale integration of silicon nanowire-based field effect transistors

    Science.gov (United States)

    Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.

    2018-05-01

    Since the early 2000s, silicon nanowire field effect transistors are emerging as ultrasensitive biosensors while offering label-free, portable and rapid detection. Nevertheless, their large scale production remains an ongoing challenge due to time consuming, complex and costly technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonet, into long channel field effect transistors using standard microelectronic process. A special attention is paid to the silicidation of the contacts which involved a large number of SiNWs. The electrical characteristics of these FETs constituted by randomly oriented silicon nanowires are also studied. Compatible integration on the back-end of CMOS readout and promising electrical performances open new opportunities for sensing applications.

  19. Large-scale stabilization control of input-constrained quadrotor

    Directory of Open Access Journals (Sweden)

    Jun Jiang

    2016-10-01

    Full Text Available The quadrotor has been the most popular aircraft in the last decade due to its excellent dynamics and continues to attract ever-increasing research interest. Delivering a quadrotor from a large fixed-wing aircraft is a promising application of quadrotors. In such an application, the quadrotor needs to switch from a highly unstable status, characterized by large initial states, to a safe and stable flight status. This is the so-called large-scale stability control problem. In such an extreme scenario, the quadrotor is at risk of actuator saturation. This can cause the controller to update incorrectly and lead the quadrotor to spiral and crash. In this article, to safely control the quadrotor in such scenarios, the control input constraint is analyzed. The key states of a quadrotor dynamic model are selected, and a two-dimensional dynamic model is extracted based on a symmetrical body configuration. A generalized point-wise min-norm nonlinear control method is proposed based on the Lyapunov function, and large-scale stability control is hence achieved. An enhanced point-wise min-norm control is further provided to improve the attitude control performance, with the altitude performance degrading slightly. Simulation results show that the proposed control methods can stabilize the input-constrained quadrotor and that the enhanced method can improve the performance of the quadrotor in critical states.
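
    The point-wise min-norm idea referenced above can be sketched for a generic control-affine system with a control Lyapunov function V. The closed-form expression below is the standard min-norm solution under an assumed decrease margin sigma; it is not the specific controller derived in the article, and the directional derivatives LfV and LgV are placeholders supplied by the user.

```python
import numpy as np

def min_norm_control(LfV: float, LgV: np.ndarray, sigma: float) -> np.ndarray:
    """Point-wise min-norm control for a control-affine system x' = f(x) + g(x) u.

    Solves  min ||u||^2  subject to  LfV + LgV @ u <= -sigma
    (a decrease condition on a control Lyapunov function V).  Closed form:
    zero control when the drift already satisfies the decrease condition;
    otherwise the smallest u that puts V's derivative on the constraint.
    The degenerate case LgV = 0 with LfV + sigma > 0 (infeasible) returns zero.
    """
    a = LfV + sigma
    b2 = float(LgV @ LgV)
    if a <= 0.0 or b2 == 0.0:
        return np.zeros_like(LgV)
    return -(a / b2) * LgV

# Example with placeholder directional derivatives of V along f and g:
u = min_norm_control(LfV=0.8, LgV=np.array([0.5, -0.2]), sigma=0.1)
print(u)
```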

  20. Remote Sensing Image Classification With Large-Scale Gaussian Processes

    Science.gov (United States)

    Morales-Alvarez, Pablo; Perez-Suay, Adrian; Molina, Rafael; Camps-Valls, Gustau

    2018-02-01

    Current remote sensing image classification problems have to deal with an unprecedented amount of heterogeneous and complex data sources. Upcoming missions will soon provide large data streams that will make land cover/use classification difficult. Machine learning classifiers can help at this, and many methods are currently available. A popular kernel classifier is the Gaussian process classifier (GPC), since it approaches the classification problem with a solid probabilistic treatment, thus yielding confidence intervals for the predictions as well as very competitive results to state-of-the-art neural networks and support vector machines. However, its computational cost is prohibitive for large scale applications, and constitutes the main obstacle precluding wide adoption. This paper tackles this problem by introducing two novel efficient methodologies for Gaussian Process (GP) classification. We first include the standard random Fourier features approximation into GPC, which largely decreases its computational cost and permits large scale remote sensing image classification. In addition, we propose a model which avoids randomly sampling a number of Fourier frequencies, and alternatively learns the optimal ones within a variational Bayes approach. The performance of the proposed methods is illustrated in complex problems of cloud detection from multispectral imagery and infrared sounding data. Excellent empirical results support the proposal in both computational cost and accuracy.
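
    The random Fourier feature idea mentioned above (approximating an RBF kernel with an explicit finite-dimensional feature map so that the classifier scales to large images) can be sketched as follows. Using a logistic-regression head is a simplification of the full variational GP classifier in the paper, and the data here are synthetic placeholders.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic two-class "pixels": 10 spectral features per sample (placeholder data).
rng = np.random.RandomState(0)
X = rng.randn(2000, 10)
y = (X[:, 0] * X[:, 1] + 0.1 * rng.randn(2000) > 0).astype(int)

# Random Fourier features approximate an RBF kernel with an explicit feature map,
# turning a kernel classifier into a linear model with near-linear scaling.
model = make_pipeline(
    RBFSampler(gamma=0.5, n_components=500, random_state=0),
    LogisticRegression(max_iter=1000),
)
model.fit(X[:1500], y[:1500])
print("held-out accuracy:", model.score(X[1500:], y[1500:]))
```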

  1. Some ecological guidelines for large-scale biomass plantations

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, W.; Cook, J.H.; Beyea, J. [National Audubon Society, Tavernier, FL (United States)

    1993-12-31

    The National Audubon Society sees biomass as an appropriate and necessary source of energy to help replace fossil fuels in the near future, but is concerned that large-scale biomass plantations could displace significant natural vegetation and wildlife habitat, and reduce national and global biodiversity. We support the development of an industry large enough to provide significant portions of our energy budget, but we see a critical need to ensure that plantations are designed and sited in ways that minimize ecological disruption, or even provide environmental benefits. We have been studying the habitat value of intensively managed short-rotation tree plantations. Our results show that these plantations support large populations of some birds, but not all of the species using the surrounding landscape, and indicate that their value as habitat can be increased greatly by including small areas of mature trees within them. We believe short-rotation plantations can benefit regional biodiversity if they can be deployed as buffers for natural forests, or as corridors connecting forest tracts. To realize these benefits, and to avoid habitat degradation, regional biomass plantation complexes (e.g., the plantations supplying all the fuel for a powerplant) need to be planned, sited, and developed as large-scale units in the context of the regional landscape mosaic.

  2. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  3. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
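
    A minimal dense-matrix sketch of the "good" Broyden update discussed above is shown below for a small nonlinear system. It omits the limited-memory and globalization machinery described in the report, and the toy test system is an illustrative assumption.

```python
import numpy as np

def broyden_good(F, x0, tol=1e-8, max_iter=200):
    """Solve F(x) = 0 with Broyden's 'good' method.

    The Jacobian is never evaluated: an approximation B is updated from the
    secant condition B_new @ dx = dF after every step.
    """
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                 # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -Fx)   # quasi-Newton step
        x_new = x + dx
        F_new = F(x_new)
        if np.linalg.norm(F_new) < tol:
            return x_new
        dF = F_new - Fx
        B += np.outer(dF - B @ dx, dx) / (dx @ dx)   # good Broyden rank-1 update
        x, Fx = x_new, F_new
    return x

# Toy mildly nonlinear system used for illustration only.
F = lambda x: np.array([x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
                        0.5 * (x[1] - x[0])**3 + x[1]])
print(broyden_good(F, [0.0, 0.0]))
```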

  4. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  5. Soil organic carbon - a large scale paired catchment assessment

    Science.gov (United States)

    Kunkel, V.; Hancock, G. R.; Wells, T.

    2016-12-01

    Soil organic carbon (SOC) concentration can vary both spatially and temporally, driven by differences in soil properties, topography and climate. However, most studies have focused on point-scale data sets, with a paucity of studies examining larger-scale catchments. Here we examine the spatial and temporal distribution of SOC for two large catchments, the Krui (575 km²) and Merriwa River (675 km²) catchments (New South Wales, Australia), which have similar shape, soils, topography and orientation. We show that the SOC distribution is very similar for both catchments and that elevation (and the associated increase in soil moisture) is a major influence on SOC. We also show that there is little change in SOC from the initial assessment in 2006 to 2015, despite a major drought from 2003 to 2010 and extreme rainfall events in 2007 and 2010; SOC concentration therefore appears robust. However, we found significant relationships between erosion and deposition patterns (as quantified using 137Cs) and SOC for both catchments, again demonstrating a strong geomorphic relationship. Vegetation across the catchments was assessed using remote sensing (Landsat and MODIS). Vegetation patterns were temporally consistent, with above-ground biomass increasing with elevation. SOC could be predicted using both the low- and high-resolution remote sensing platforms. Results indicate that, although moderate resolution (250 m) allows reasonable prediction of the spatial distribution of SOC, the higher resolution (30 m) improved the strength of the SOC-NDVI relationship. The relationship between SOC and 137Cs, as a surrogate for the erosion and deposition of SOC, suggested that sediment transport and deposition influence the distribution of SOC within the catchments. The findings demonstrate that, at the large catchment scale and at the decadal time scale, SOC is relatively constant and can largely be predicted from topography.

  6. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = -0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential of over-damped LES for fast explorations of the parameter space where large-scale structures are found.
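
    The static Smagorinsky closure benchmarked above can be summarized in a few lines: the eddy viscosity is (c_s Δ)² |S|, with |S| built from the resolved strain-rate tensor. The sketch below evaluates this on a uniform grid using simple central differences; only the constant c_s = 0.1 comes from the abstract, while the grid, filter width Δ = dx, and velocity field are illustrative assumptions.

```python
import numpy as np

def smagorinsky_viscosity(u, v, w, dx, cs=0.1):
    """Static Smagorinsky eddy viscosity nu_t = (cs * Delta)^2 * |S| on a uniform grid.

    |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor computed
    here with central differences (np.gradient); the filter width Delta is taken
    equal to the grid spacing dx as a simplification.
    """
    vel = (u, v, w)
    grad = [np.gradient(f, dx, dx, dx) for f in vel]   # grad[i][j] = d(vel_i)/dx_j
    S2 = 0.0
    for i in range(3):
        for j in range(3):
            Sij = 0.5 * (grad[i][j] + grad[j][i])
            S2 += Sij * Sij
    return (cs * dx) ** 2 * np.sqrt(2.0 * S2)

# Tiny synthetic velocity field on a 32^3 box (placeholder data).
n, dx = 32, 2 * np.pi / 32
x = np.arange(n) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u, v, w = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y), np.zeros_like(X)
print(smagorinsky_viscosity(u, v, w, dx).mean())
```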

  7. UAV Data Processing for Large Scale Topographical Mapping

    Directory of Open Access Journals (Sweden)

    W. Tampubolon

    2014-06-01

    Full Text Available Large scale topographical mapping in third world countries is a prominent challenge in the geospatial industry nowadays. On one side the demand is significantly increasing, while on the other hand it is constrained by the limited budgets available for mapping projects. Since the advent of Act Nr.4/yr.2011 on Geospatial Information in Indonesia, large scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Usually large scale topographical mapping relies on conventional aerial survey campaigns in order to provide high resolution 3D geospatial data sources. Having grown widely as a leisure hobby, aero models in the form of the so-called Unmanned Aerial Vehicle (UAV) bring up alternative semi-photogrammetric aerial data acquisition possibilities suitable for a relatively small Area of Interest (AOI), i.e. <5,000 hectares. For detailed spatial planning purposes in Indonesia this area size can be used as a mapping unit since it usually corresponds to the sub-district (kecamatan) level. In this paper different camera and processing software systems will be further analyzed for identifying the optimum UAV data acquisition campaign components in combination with the data processing scheme. The selected AOI covers the cultural heritage of Borobudur Temple as one of the Seven Wonders of the World. A detailed accuracy assessment will be concentrated on the object features of the temple in the first place. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) will be integrated in order to provide Digital Elevation Models (DEM) as the main interest of the topographic mapping activity. In this research, incorporating the optimum number of GCPs in the UAV photo data processing will increase the accuracy along with its high resolution of 5 cm Ground Sampling Distance (GSD). Finally this result will be used as the benchmark for alternative

  8. Enhanced electrical conductivity in graphene and boron nitride nanoribbons in large electric fields

    Science.gov (United States)

    Chegel, Raad

    2018-02-01

    Based on density functional theory (DFT) data used as input to a tight-binding model, the electrical conductivity σ(T) of graphene nanoribbons (GNRs) and boron nitride nanoribbons (BNNRs) of different widths under external electric fields is studied using the Green's function method. The BNNRs are wide-band-gap semiconductors and are turned into metals depending on the electric field strength. σ(T) increases in the low-temperature region and, after reaching a maximum value, decreases in the high-temperature region. In the lower temperature range, the electrical conductivity of the GNRs is greater than that of the BNNRs. In the low-temperature region, σ(T) of the GNRs increases linearly with temperature, unlike that of the BNNRs. The electrical conductivity is strongly dependent on the electric field strength.

  9. The predictability of large-scale wind-driven flows

    Directory of Open Access Journals (Sweden)

    A. Mahadevan

    2001-01-01

    Full Text Available The singular values associated with optimally growing perturbations to stationary and time-dependent solutions for the general circulation in an ocean basin provide a measure of the rate at which solutions with nearby initial conditions begin to diverge, and hence, a measure of the predictability of the flow. In this paper, the singular vectors and singular values of stationary and evolving examples of wind-driven, double-gyre circulations in different flow regimes are explored. By changing the Reynolds number in simple quasi-geostrophic models of the wind-driven circulation, steady, weakly aperiodic and chaotic states may be examined. The singular vectors of the steady state reveal some of the physical mechanisms responsible for optimally growing perturbations. In time-dependent cases, the dominant singular values show significant variability in time, indicating strong variations in the predictability of the flow. When the underlying flow is weakly aperiodic, the dominant singular values co-vary with integral measures of the large-scale flow, such as the basin-integrated upper ocean kinetic energy and the transport in the western boundary current extension. Furthermore, in a reduced gravity quasi-geostrophic model of a weakly aperiodic, double-gyre flow, the behaviour of the dominant singular values may be used to predict a change in the large-scale flow, a feature not shared by an analogous two-layer model. When the circulation is in a strongly aperiodic state, the dominant singular values no longer vary coherently with integral measures of the flow. Instead, they fluctuate in a very aperiodic fashion on mesoscale time scales. The dominant singular vectors then depend strongly on the arrangement of mesoscale features in the flow and the evolved forms of the associated singular vectors have relatively short spatial scales. These results have several implications. In weakly aperiodic, periodic, and stationary regimes, the mesoscale energy
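
    The optimally growing perturbations discussed above are the leading singular vectors of the tangent-linear propagator over a forecast interval. A generic numerical sketch is given below; the propagator matrix is a random placeholder, not the quasi-geostrophic model of the paper.

```python
import numpy as np

# Singular vectors/values of a tangent-linear propagator M that maps an initial
# perturbation x(0) to x(T) = M @ x(0).  The leading singular value measures the
# largest possible perturbation growth over the interval, i.e. predictability.
rng = np.random.default_rng(1)
n = 50
M = np.eye(n) + 0.3 * rng.standard_normal((n, n))   # placeholder propagator

U, s, Vt = np.linalg.svd(M)
print("leading singular value (max growth factor):", s[0])
optimal_initial_perturbation = Vt[0]   # right singular vector: fastest-growing direction
evolved_pattern = U[:, 0]              # left singular vector: its evolved structure
```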

  10. Systematic renormalization of the effective theory of Large Scale Structure

    International Nuclear Information System (INIS)

    Abolhasani, Ali Akbar; Mirbabayi, Mehrdad; Pajer, Enrico

    2016-01-01

    A perturbative description of Large Scale Structure is a cornerstone of our understanding of the observed distribution of matter in the universe. Renormalization is an essential and defining step to make this description physical and predictive. Here we introduce a systematic renormalization procedure, which neatly associates counterterms to the UV-sensitive diagrams order by order, as it is commonly done in quantum field theory. As a concrete example, we renormalize the one-loop power spectrum and bispectrum of both density and velocity. In addition, we present a series of results that are valid to all orders in perturbation theory. First, we show that while systematic renormalization requires temporally non-local counterterms, in practice one can use an equivalent basis made of local operators. We give an explicit prescription to generate all counterterms allowed by the symmetries. Second, we present a formal proof of the well-known general argument that the contribution of short distance perturbations to large scale density contrast δ and momentum density π(k) scale as k² and k, respectively. Third, we demonstrate that the common practice of introducing counterterms only in the Euler equation when one is interested in correlators of δ is indeed valid to all orders.
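
    For orientation, the leading counterterm that this kind of renormalization introduces for the one-loop density power spectrum is commonly written in the form below, with an effective sound-speed coefficient c_s²(z); this is the standard effective-theory expression from the literature rather than a formula copied from this particular paper.

```latex
P^{\mathrm{ren}}_{1\text{-loop}}(k,z)
  = P_{\mathrm{lin}}(k,z) + P_{22}(k,z) + P_{13}(k,z)
    - 2\,c_{\mathrm{s}}^{2}(z)\,k^{2}\,P_{\mathrm{lin}}(k,z),
\qquad
P_{\mathrm{ctr}}(k) \propto k^{2} P_{\mathrm{lin}}(k),
```

    consistent with the k² scaling of the short-distance contribution to δ quoted above.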

  11. Quantitative approach to the topology of large-scale structure

    International Nuclear Information System (INIS)

    Gott, J.R. III; Weinberg, D.H.; Melott, A.L.; Kansas Univ., Lawrence)

    1987-01-01

    A quantitative measure of the topology of large-scale structure, the genus of density contours in a smoothed density distribution, is described and applied. For random-phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral bubbles. The topology of the evolved mass distribution and biased galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random-phase, cold dark matter model. 22 references
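
    The "universal dependence on threshold density" mentioned above is, for a Gaussian random-phase field, the analytic genus curve reproduced below (threshold ν in units of the standard deviation of the smoothed field); the amplitude A depends only on the second moment of the power spectrum. This is the standard expression from the literature rather than a formula quoted from this abstract.

```latex
g_{s}(\nu) = A\,\bigl(1-\nu^{2}\bigr)\,e^{-\nu^{2}/2},
\qquad
A = \frac{1}{(2\pi)^{2}}\left(\frac{\langle k^{2}\rangle}{3}\right)^{3/2},
```

    where ⟨k²⟩ is the power-spectrum-weighted mean square wavenumber of the smoothed density field.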

  12. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
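
    The tile low-rank compression at the heart of the library described above can be illustrated with a plain NumPy sketch: each data-sparse off-diagonal tile is replaced by a truncated SVD factorization meeting a requested accuracy. This is a conceptual sketch only, not HiCMA's actual API or data layout, and the kernel tile below is synthetic.

```python
import numpy as np

def compress_tile(tile: np.ndarray, tol: float):
    """Compress one off-diagonal tile to low rank: tile ~ U @ V with small 2-norm error."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    rank = max(1, int(np.sum(s > tol)))       # keep singular values above the threshold
    return U[:, :rank] * s[:rank], Vt[:rank]  # (m x r) and (r x n) factors

# Example: a smooth kernel tile for well-separated points is numerically low rank.
x = np.linspace(0.0, 1.0, 256)
tile = 1.0 / (1.0 + np.abs(x[:, None] - (x[None, :] + 2.0)))
U, V = compress_tile(tile, tol=1e-8)
print("tile rank:", U.shape[1], "  max error:", np.abs(tile - U @ V).max())
```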

  13. The Effect of Large Scale Salinity Gradient on Langmuir Turbulence

    Science.gov (United States)

    Fan, Y.; Jarosz, E.; Yu, Z.; Jensen, T.; Sullivan, P. P.; Liang, J.

    2017-12-01

    Langmuir circulation (LC) is believed to be one of the leading-order causes of turbulent mixing in the upper ocean. It is important for momentum and heat exchange across the mixed layer (ML) and directly impacts the dynamics and thermodynamics in the upper ocean and lower atmosphere, including the vertical distributions of chemical, biological, optical, and acoustic properties. Based on the Craik and Leibovich (1976) theory, large eddy simulation (LES) models have been developed to simulate LC in the upper ocean, yielding new insights that could not be obtained from field observations and turbulence closure models. Due to their high computational cost, LES models are usually limited to small domain sizes and cannot resolve large-scale flows. Furthermore, most LES models used in LC simulations apply periodic boundary conditions in the horizontal direction, which assumes that the physical properties (i.e. temperature and salinity) and expected flow patterns in the area of interest are of a periodically repeating nature, so that the limited small LES domain is representative of the larger area. Using periodic boundary conditions can significantly reduce the computational effort, and it is a good assumption for isotropic shear turbulence. However, LC is anisotropic (McWilliams et al 1997) and was observed to be modulated by crosswind tidal currents (Kukulka et al 2011). Using symmetrical domains, idealized LES studies also indicate that LC could interact with oceanic fronts (Hamlington et al 2014) and standing internal waves (Chini and Leibovich, 2005). The present study expands our previous LES modeling investigations of Langmuir turbulence to real ocean conditions with large-scale environmental motion that features fresh water inflow into the study region. Large-scale gradient forcing is introduced into the NCAR LES model through scale separation analysis. The model is applied to a field observation in the Gulf of Mexico in July 2016, when the measurement site was impacted by

  14. Analysis of Large-Capacity Water Heaters in Electric Thermal Storage Programs

    Energy Technology Data Exchange (ETDEWEB)

    Cooke, Alan L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Anderson, David M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Winiarski, David W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carmichael, Robert T. [Cadeo Group, Washington D. C. (United States); Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fisher, Andrew R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-17

    This report documents a national impact analysis of large tank heat pump water heaters (HPWH) in electric thermal storage (ETS) programs and conveys the findings related to concerns raised by utilities regarding the ability of large-tank heat pump water heaters to provide electric thermal storage services.

  15. Large scale integration of intermittent renewable energy sources in the Greek power sector

    International Nuclear Information System (INIS)

    Voumvoulakis, Emmanouil; Asimakopoulou, Georgia; Danchev, Svetoslav; Maniatis, George; Tsakanikas, Aggelos

    2012-01-01

    As a member of the European Union, Greece has committed to achieving ambitious targets for the penetration of renewable energy sources (RES) in gross electricity consumption by 2020. Large scale integration of RES requires a suitable mixture of compatible generation units in order to deal with the intermittency of wind velocity and solar irradiation. The scope of this paper is to examine the impact of the large scale integration of intermittent energy sources, required to meet the 2020 RES target, on the generation expansion plan, the fuel mix and the spinning reserve requirements of the Greek electricity system. We perform hourly simulation of the intermittent RES generation to estimate residual load curves on a monthly basis, which are then used as inputs to a WASP-IV model of the Greek power system. We find that the decarbonisation effort, with the rapid entry of RES and the abolishment of the grandfathering of CO2 allowances, will radically transform the Greek electricity sector over the next 10 years, which has wide-reaching policy implications. - Highlights: ► Greece needs 8.8 to 9.3 GW of additional RES installations by 2020. ► RES capacity credit varies between 12.2% and 15.3%, depending on interconnections. ► Without institutional changes, the reserve requirements will more than double. ► New CCGT installed capacity will probably exceed the cost-efficient level. ► Competitive pressures should be introduced in segments other than the day-ahead market.
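
    The residual-load construction used as input to the capacity-expansion model above can be sketched in a few lines: hourly intermittent generation is subtracted from hourly demand and the result is summarized as a duration curve. All numbers below are synthetic placeholders, not Greek system data.

```python
import numpy as np

# Hourly residual load = demand minus intermittent RES generation (synthetic data).
hours = 730                                   # roughly one month
rng = np.random.default_rng(0)
t = np.arange(hours)
demand = 6000 + 1500 * np.sin(t * 2 * np.pi / 24) + 200 * rng.standard_normal(hours)
wind = np.clip(900 * rng.weibull(2.0, hours), 0, 3000)
solar = np.clip(2500 * np.sin(t * 2 * np.pi / 24), 0, None)

residual = demand - wind - solar

# Residual load duration curve: thermal units are expanded/committed against this
# sorted curve instead of the raw load curve.
duration_curve = np.sort(residual)[::-1]
print("peak residual load (MW):", round(duration_curve[0]))
print("hours with RES surplus (negative residual):", int((residual < 0).sum()))
```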

  16. Periodic cells for large-scale problem initialization

    Directory of Open Access Journals (Sweden)

    Ciantia Matteo O.

    2017-01-01

    Full Text Available In geotechnical applications, the success of the discrete element method (DEM) in simulating fundamental aspects of soil behaviour has increased the interest in applications for the direct simulation of engineering-scale boundary value problems (BVPs). The main problem is that the method remains relatively expensive in terms of computational cost. A non-negligible part of that cost is related to specimen creation and initialization. As the response of soil is strongly dependent on its initial state (stress and porosity), attaining a specified initial state is a crucial part of a DEM model. Different procedures for controlled sample generation are available. However, applying the existing REV-oriented initialization procedures to such models is inefficient in terms of computational cost and challenging in terms of sample homogeneity. In this work a simple but efficient procedure to initialize large-scale DEM models is presented. Periodic cells are first generated with a sufficient number of particles matching a desired particle size distribution (PSD). The cells are then equilibrated at low-level isotropic stress at the target porosity. Once the cell is in equilibrium, it is replicated in space in order to fill the model domain. After the domain is filled, a few mechanical cycles are needed to re-equilibrate the large domain. The result is a large, homogeneous sample, equilibrated under the prescribed stress at the desired porosity. The method is applicable to both isotropic and anisotropic initial stress states, with the stress magnitude varying in space.

  17. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values contain information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model shows robust performance on both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  18. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionality of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real-world social network follows the small-world phenomenon, which indicates that any two social entities can be reached in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature. In addition, when large-scale social networks are considered, these algorithms are observed to take considerably longer to run. In this work, with the objective of improving efficiency, a parallel programming framework, Map-Reduce, has been considered for uncovering the hidden communities in social networks. The proposed approach has been compared with some standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
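
    A serial (non-MapReduce) sketch of the random-walk intuition above: nodes whose short random walks reach similar neighbourhoods are grouped together by clustering the rows of the t-step transition matrix. This is a simplification for illustration, not the algorithm or the parallel implementation evaluated in the paper; the test graph and walk length are assumptions.

```python
import numpy as np
import networkx as nx
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Small test graph with two planted communities (placeholder data).
G = nx.planted_partition_graph(2, 16, p_in=0.5, p_out=0.02, seed=1)
A = nx.to_numpy_array(G)

# t-step random-walk transition probabilities from every node.
P = A / A.sum(axis=1, keepdims=True)
t = 4
Pt = np.linalg.matrix_power(P, t)

# Nodes with similar walk distributions are merged by hierarchical clustering.
Z = linkage(pdist(Pt, metric="euclidean"), method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```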

  19. Punishment sustains large-scale cooperation in prestate warfare

    Science.gov (United States)

    Mathew, Sarah; Boyd, Robert

    2011-01-01

    Understanding cooperation and punishment in small-scale societies is crucial for explaining the origins of human cooperation. We studied warfare among the Turkana, a politically uncentralized, egalitarian, nomadic pastoral society in East Africa. Based on a representative sample of 88 recent raids, we show that the Turkana sustain costly cooperation in combat at a remarkably large scale, at least in part, through punishment of free-riders. Raiding parties comprised several hundred warriors and participants are not kin or day-to-day interactants. Warriors incur substantial risk of death and produce collective benefits. Cowardice and desertions occur, and are punished by community-imposed sanctions, including collective corporal punishment and fines. Furthermore, Turkana norms governing warfare benefit the ethnolinguistic group, a population of a half-million people, at the expense of smaller social groupings. These results challenge current views that punishment is unimportant in small-scale societies and that human cooperation evolved in small groups of kin and familiar individuals. Instead, these results suggest that cooperation at the larger scale of ethnolinguistic units enforced by third-party sanctions could have a deep evolutionary history in the human species. PMID:21670285

  20. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    from data rather than having a predefined feature set. We explore deep learning approach of convolutional neural network (CNN) for segmenting three dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D......This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge...... amount of training data to cover sufficient biological variability. Learning methods scaling badly with number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which...

  1. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom

    2017-11-22

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  2. Neural Correlates of Unconsciousness in Large-Scale Brain Networks.

    Science.gov (United States)

    Mashour, George A; Hudetz, Anthony G

    2018-03-01

    The biological basis of consciousness is one of the most challenging and fundamental questions in 21st century science. A related pursuit aims to identify the neural correlates and causes of unconsciousness. We review current trends in the investigation of physiological, pharmacological, and pathological states of unconsciousness at the level of large-scale functional brain networks. We focus on the roles of brain connectivity, repertoire, graph-theoretical techniques, and neural dynamics in understanding the functional brain disconnections and reduced complexity that appear to characterize these states. Persistent questions in the field, such as distinguishing true correlates, linking neural scales, and understanding differential recovery patterns, are also addressed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    , whereas Chapter 4 indicates that sugarcane outgrowers’ easy access to credit and technology and their high productivity compared to the plantation does not necessarily improve their income and asset stocks particularly when participation in outgrower schemes is mandatory, the buyer has monopsony market...... sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land....... On other hand, the results in Chapter 4 show that participation in a sugarcane outgrower scheme has a negative impact on households’ income and total asset stock. From the findings in Chapter 3 it can be concluded that outgrower-operated plots have higher productivity than factory-operated plantations...

  4. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    Full Text Available In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation, on the other hand, is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion is one of the long-standing problems in modern cosmology, and in physics in general. There are several well defined approaches to solve this problem. One of them is the assumption concerning the existence of dark energy in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has a gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity, including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, the possible variability of the gravitational constant and the speed of light (among others) provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, two groups of cosmological models are discussed here. In the first group, the problem of the accelerated expansion of the large-scale universe is discussed involving a new idea, named varying ghost dark energy. The second group contains cosmological models addressing the same problem involving either new parameterizations of the equation-of-state parameter of dark energy (like the varying polytropic gas) or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is due to general relativity) is demonstrated as well. Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  5. Probing Inflation Using Galaxy Clustering On Ultra-Large Scales

    Science.gov (United States)

    Dalal, Roohi; de Putter, Roland; Dore, Olivier

    2018-01-01

    A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-gaussianity parameter, f_NL^loc, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict f_NL^loc ≈ -5/12 (n_s - 1), in multi-field theories f_NL^loc is not constrained to this value and is allowed to be observably large. Achieving σ(f_NL^loc) = 1 would give us discovery potential for detecting multi-field inflation, while finding f_NL^loc = 0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on f_NL^loc. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of f_NL^loc. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman-Kaiser-Peacock (FKP) Power Spectrum Estimator, and an Optimal Quadratic Estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on f_NL^loc. We find that the FKP estimator can measure f_NL^loc in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure f_NL^loc.

  6. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulations (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES and fail to scale (e.g., OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  7. Large-eddy simulation of very-large-scale motions in atmospheric boundary-layer flows

    Science.gov (United States)

    Fang, Jiannong; Porté-Agel, Fernando

    2015-04-01

    In the last few decades, laboratory experiments and direct numerical simulations of turbulent boundary layers, performed at low to moderate Reynolds numbers, have found very-large-scale motions (VLSMs) in the logarithmic and outer regions. The size of VLSMs was found to be 10-20 times as large as the boundary-layer thickness. Recently, a few studies based on field experiments have examined the presence of VLSMs in neutral atmospheric boundary-layer flows, which are invariably at very high Reynolds numbers. Very large scale structures similar to those observed in laboratory-scale experiments have been found and characterized. However, it is known that field measurements are more challenging than laboratory-based measurements, and can lack resolution and statistical convergence. Such challenges have implications for the robustness of the analysis, which may be further adversely affected by the use of Taylor's hypothesis to convert time series to spatial data. We use large-eddy simulation (LES) to investigate VLSMs in atmospheric boundary-layer flows. In order to make sure that the largest flow structures are properly resolved, the horizontal domain size is chosen to be much larger than the standard domain size. It is shown that the contributions to the resolved turbulent kinetic energy and shear stress from VLSMs are significant. Therefore, the large computational domain adopted here is essential for the purpose of investigating VLSMs. The spatially coherent structures associated with VLSMs are characterized through flow visualization and statistical analysis. The instantaneous velocity fields in horizontal planes give evidence of streamwise-elongated flow structures of low-speed fluid, with negative fluctuations of the streamwise velocity component, which are flanked on either side by similarly elongated high-speed structures. The pre-multiplied power spectra and two-point correlations indicate that the scales of these streak-like structures are very large. These features

  8. WAMS Based Intelligent Operation and Control of Modern Power System with large Scale Renewable Energy Penetration

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain

    for alternative energy systems driven by the pressure to reduce carbon emission has stimulated a renewal of interest in wind power. The combined effect of growing demand and increasing level of intermittent wind energy penetration coupled with deregulated market has pushed the power system to operate close to its......Electricity demand worldwide is growing which is mainly driven by growing industrial activities and the widening of access to consumers in the developing world. On the other hand, limitations of conventional sources of energy generation coupled with substantial financial and regulatory incentives...... to intermittent nature and lack of adequate controllability of wind generation, large scale integration of wind energy compromises the security of power system. Therefore, WAMS based security assessment has been proposed to assess the steady state and dynamic security of large scale wind integrated power system...

  9. Large-Scale Fabrication of Silicon Nanowires for Solar Energy Applications.

    Science.gov (United States)

    Zhang, Bingchang; Jie, Jiansheng; Zhang, Xiujuan; Ou, Xuemei; Zhang, Xiaohong

    2017-10-11

    The development of silicon (Si) materials during the past decades has boosted the prosperity of the modern semiconductor industry. In comparison with bulk Si materials, Si nanowires (SiNWs) possess superior structural, optical, and electrical properties and have attracted increasing attention for solar energy applications. To achieve practical applications of SiNWs, both the large-scale synthesis of SiNWs at low cost and the rational design of high-efficiency energy conversion devices are prerequisites. This review focuses on recent progress in the large-scale production of SiNWs, as well as the construction of high-efficiency SiNW-based solar energy conversion devices, including photovoltaic devices and photo-electrochemical cells. Finally, the outlook and challenges in this emerging field are presented.

  10. A testing facility for large scale models at 100 bar and 300°C to 1000°C

    International Nuclear Information System (INIS)

    Zemann, H.

    1978-07-01

    A testing facility for large-scale model tests is under construction with the support of Austrian industry. It will contain a prestressed concrete pressure vessel (PCPV) with a hot liner (300°C at 100 bar), an electrical heating system (1.2 MW, 1000°C), a gas supply system, and a cooling system for the testing space. The components themselves are models for advanced high-temperature applications. The first main component to be tested successfully was the PCPV. Basic investigations of the building materials, improvements of concrete gauges, large-scale model tests, and measurements within the structural concrete and on the liner have been carried out from the beginning of construction, through the prestressing and stabilization periods, and during the final pressurizing tests. On the basis of these investigations a computer-controlled safety surveillance system for long-term high-pressure, high-temperature tests has been developed. (author)

  11. Model Predictive Control for Flexible Power Consumption of Large-Scale Refrigeration Systems

    DEFF Research Database (Denmark)

    Shafiei, Seyed Ehsan; Stoustrup, Jakob; Rasmussen, Henrik

    2014-01-01

    A model predictive control (MPC) scheme is introduced to directly control the electrical power consumption of large-scale refrigeration systems. Deviations from the consumption baseline correspond to the storing and delivering of thermal energy. By virtue of such correspondence......, the control method can be employed for regulating power services in the smart grid. The proposed scheme contains the control of cooling capacity as well as the optimization of the efficiency factor of the system, which is in general a nonconvex optimization problem. By introducing a fictitious manipulated variable......, and a novel incorporation of the evaporation temperature set-point into the optimization problem, a convex optimization problem is formulated within the MPC scheme. The method is applied to a simulation benchmark of large-scale refrigeration systems including several medium- and low-temperature cold reservoirs....
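
    As a rough, self-contained sketch of the kind of convex MPC formulation described above (the storage model, horizon, bounds and tuning below are invented for illustration and are not the paper's benchmark), one can track a power-consumption reference while respecting a simple thermal-storage state:

```python
import numpy as np
import cvxpy as cp

# Hypothetical discrete-time model: stored thermal energy e[k+1] = a*e[k] + b*(p[k] - p_base).
N, a, b = 24, 0.98, 0.5            # horizon, storage retention, power-to-energy gain (assumed)
p_base = 100.0                     # baseline electrical power [kW]
p_ref = p_base + 20.0 * np.sin(np.linspace(0, 2 * np.pi, N))  # grid-requested power profile

p = cp.Variable(N)                 # electrical power consumption (manipulated variable)
e = cp.Variable(N + 1)             # stored thermal energy (state)

constraints = [e[0] == 0]
for k in range(N):
    constraints += [e[k + 1] == a * e[k] + b * (p[k] - p_base),
                    p[k] >= 50.0, p[k] <= 150.0,              # capacity limits
                    e[k + 1] >= -40.0, e[k + 1] <= 40.0]      # storage limits

# Convex objective: follow the requested power profile, lightly penalize stored energy.
objective = cp.Minimize(cp.sum_squares(p - p_ref) + 0.01 * cp.sum_squares(e))
cp.Problem(objective, constraints).solve()
print("planned power profile:", np.round(p.value, 1))
```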

  12. A European collaboration research programme to study and test large scale base isolated structures

    International Nuclear Information System (INIS)

    Renda, V.; Verzeletti, G.; Papa, L.

    1995-01-01

    The improvement of the technology of innovative anti-seismic mechanisms, such as those for base isolation and energy dissipation, requires testing capabilities for large-scale models of structures integrated with these mechanisms. These kinds of experimental tests are of primary importance for the validation of design rules and the establishment of advanced earthquake engineering for civil constructions of relevant interest. The Joint Research Centre of the European Commission offers the European Laboratory for Structural Assessment, located at Ispra, Italy, as a focal point for an international European collaborative research programme to test large-scale models of structures making use of innovative anti-seismic mechanisms. A collaboration contract, open to other future contributions, has been signed with the Italian national working group on seismic isolation (Gruppo di Lavoro sull'Isolamento Sismico, GLIS), which includes the national research centre ENEA, the national electricity board ENEL, the industrial research centre ISMES and the isolator producer ALGA. (author). 3 figs

  13. Large scale obscuration and related climate effects open literature bibliography

    International Nuclear Information System (INIS)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called ''Nuclear Winter Controversy'' in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  14. Large-scale biophysical evaluation of protein PEGylation effects

    DEFF Research Database (Denmark)

    Vernet, Erik; Popa, Gina; Pozdnyakova, Irina

    2016-01-01

    PEGylation is the most widely used method to chemically modify protein biopharmaceuticals, but surprisingly limited public data is available on the biophysical effects of protein PEGylation. Here we report the first large-scale study, with site-specific mono-PEGylation of 15 different proteins...... and characterization of 61 entities in total using a common set of analytical methods. Predictions of molecular size were typically accurate in comparison with actual size determined by size-exclusion chromatography (SEC) or dynamic light scattering (DLS). In contrast, there was no universal trend regarding the effect...

  15. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

    Full Text Available Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and complexity of biomedical data collected from various sources. These planet-scale data bring serious challenges to storage and computing technologies. Cloud computing is an attractive alternative because it addresses both storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications would facilitate biomedical research by making the vast amount of diverse data meaningful and usable.

  16. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

    necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation was proposed in this paper. The first phase is the investment decision, while the second phase is the production...... optimization decision. A multi-objective PSO (MOPSO) algorithm was introduced to solve this optimization problem, which can accelerate the convergence and guarantee the diversity of the Pareto-optimal front set. The feasibility and effectiveness of the proposed bi-level planning approach and the MOPSO...

  17. Large scale obscuration and related climate effects open literature bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called ''Nuclear Winter Controversy'' in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  18. Current status of large-scale cryogenic gravitational wave telescope

    International Nuclear Information System (INIS)

    Kuroda, K; Ohashi, M; Miyoki, S; Uchiyama, T; Ishitsuka, H; Yamamoto, K; Kasahara, K; Fujimoto, M-K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Nagano, S; Tsunesada, Y; Zhu, Zong-Hong; Shintomi, T; Yamamoto, A; Suzuki, T; Saito, Y; Haruyama, T; Sato, N; Higashi, Y; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Aso, Y; Ueda, K-I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Tagoshi, H; Nakamura, T; Sasaki, M; Tanaka, T; Oohara, K; Takahashi, H; Miyakawa, O; Tobar, M E

    2003-01-01

    The large-scale cryogenic gravitational wave telescope (LCGT) project is the proposed advancement of TAMA, which will be able to detect the coalescences of binary neutron stars occurring in our galaxy. LCGT intends to detect coalescence events within about 240 Mpc, at a rate expected to be from 0.1 to several events per year. LCGT has Fabry-Perot cavities with a 3 km baseline, and the mirrors are cooled down to a cryogenic temperature of 20 K. It is planned to be built underground in the Kamioka mine. This paper overviews the revision of the design and the current status of the R and D

  19. Optimization of large scale food production using Lean Manufacturing principles

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Breum, Gitte

    2009-01-01

    This paper discusses how the production principles of Lean Manufacturing (Lean) can be applied in large-scale meal production. Lean principles are briefly presented, followed by a field study of how a kitchen at a Danish hospital has implemented Lean in its daily production. In the kitchen...... not be negatively affected by the rationalisation of production procedures. The field study shows that Lean principles can be applied in meal production and can result in increased production efficiency and systematic improvement of product quality without negative effects on the working environment. The results...

  20. Inflation in de Sitter spacetime and CMB large scale anomaly

    Science.gov (United States)

    Zhao, Dong; Li, Ming-Hua; Wang, Ping; Chang, Zhe

    2015-09-01

    The influence of cosmological constant-type dark energy in the early universe is investigated. This is accommodated by a new dispersion relation in de Sitter spacetime. We perform a global fit to explore the cosmological parameter space by using the CosmoMC package with the recently released Planck TT and WMAP polarization datasets. Using the results from the global fit, we compute a new CMB temperature-temperature (TT) spectrum. The obtained TT spectrum has lower power at large scales compared with that based on the ΛCDM model. Supported by National Natural Science Foundation of China (11375203)

  1. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  2. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  3. Learning a Large Scale of Ontology from Japanese Wikipedia

    Science.gov (United States)

    Tamagawa, Susumu; Sakurai, Shinya; Tejima, Takuya; Morita, Takeshi; Izumi, Noriaki; Yamaguchi, Takahira

    Here we discuss how to learn a large-scale ontology from Japanese Wikipedia. The learned ontology includes the following properties: rdfs:subClassOf (IS-A relationship), rdf:type (class-instance relationship), owl:Object/DatatypeProperty (Infobox triple), rdfs:domain (property domain), and skos:altLabel (synonym). Experimental case studies show that the learned Japanese Wikipedia Ontology outperforms already existing general linguistic ontologies, such as EDR and Japanese WordNet, in terms of building cost and richness of structural information.

  4. Computational Approach to large Scale Process Optimization through Pinch Analysis

    Directory of Open Access Journals (Sweden)

    Nasser Al-Azri

    2015-08-01

    Full Text Available Since its debut in the last quarter of the twentieth century, pinch technology has become an established tool for efficient and cost-effective engineering process design. This method allows the integration of mass and heat streams in such a way that waste and the external purchase of mass and utilities are minimized. Moreover, integrating process streams internally will minimize fuel consumption and hence carbon emissions to the atmosphere. This paper discusses a programmable approach to the design of mass and heat exchange networks that can be used easily for large-scale engineering processes.
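
    By way of illustration only (the stream data below are invented and the paper's own formulation is not reproduced here), the problem-table algorithm at the heart of pinch analysis can be programmed in a few lines: shift stream temperatures by half the minimum approach temperature, cascade the interval heat balances, and read off the minimum hot and cold utility targets.

```python
# Minimal problem-table (heat cascade) sketch for pinch analysis.
# Each stream: (supply T [°C], target T [°C], heat capacity flow rate CP [kW/K]).
streams = [(250.0, 40.0, 0.15), (200.0, 80.0, 0.25),   # hot streams (cooling down)
           (20.0, 180.0, 0.20), (140.0, 230.0, 0.30)]  # cold streams (heating up)
dT_min = 10.0

def shifted(stream):
    ts, tt, cp = stream
    shift = -dT_min / 2.0 if ts > tt else dT_min / 2.0  # hot streams down, cold streams up
    return ts + shift, tt + shift, cp

shifted_streams = [shifted(s) for s in streams]
temps = sorted({t for ts, tt, _ in shifted_streams for t in (ts, tt)}, reverse=True)

# Net heat surplus/deficit cascaded down the shifted temperature intervals.
cascade, heat = [0.0], 0.0
for t_hi, t_lo in zip(temps[:-1], temps[1:]):
    net_cp = 0.0
    for ts, tt, cp in shifted_streams:
        if max(ts, tt) >= t_hi and min(ts, tt) <= t_lo:   # stream spans this interval
            net_cp += cp if ts > tt else -cp              # hot streams release, cold streams absorb
    heat += net_cp * (t_hi - t_lo)
    cascade.append(heat)

q_hot_min = -min(min(cascade), 0.0)        # hot utility needed to make the cascade non-negative
q_cold_min = cascade[-1] + q_hot_min       # heat rejected at the bottom of the cascade
print(f"minimum hot utility: {q_hot_min:.1f} kW, minimum cold utility: {q_cold_min:.1f} kW")
```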

  5. Efficient Selection of Multiple Objects on a Large Scale

    DEFF Research Database (Denmark)

    Stenholt, Rasmus

    2012-01-01

    The task of multiple object selection (MOS) in immersive virtual environments is important and still largely unexplored. The difficulty of efficient MOS increases with the number of objects to be selected. E.g. in small-scale MOS, only a few objects need to be simultaneously selected. This may...... consuming. Instead, we have implemented and tested two of the existing approaches to 3-D MOS, a brush and a lasso, as well as a new technique, a magic wand, which automatically selects objects based on local proximity to other objects. In a formal user evaluation, we have studied how the performance...

  6. Infrastructure and interfaces for large-scale numerical software.

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.; Gropp, W. D.; Hovland, P. D.; McInnes, L. C.; Smith, B. F.

    1999-06-10

    The complexity of large-scale scientific simulations often necessitates the combined use of multiple software packages developed by different groups in areas such as adaptive mesh manipulations, scalable algebraic solvers, and optimization. Historically, these packages have been combined by using custom code. This practice inhibits experimentation with and comparison of multiple tools that provide similar functionality through different implementations. The ALICE project, a collaborative effort among researchers at Argonne National Laboratory, is exploring the use of component-based software engineering to provide better interoperability among numerical toolkits. They discuss some initial experiences in developing an infrastructure and interfaces for high-performance numerical computing.

  7. A Modeling & Simulation Implementation Framework for Large-Scale Simulation

    Directory of Open Access Journals (Sweden)

    Song Xiao

    2012-10-01

    Full Text Available Classical High Level Architecture (HLA) systems are facing development problems because they lack support for fine-grained component integration and interoperation in large-scale complex simulation applications. To address this issue, an extensible, reusable and composable simulation framework is proposed. To promote reusability from coarse-grained federates to fine-grained components, this paper proposes a modelling & simulation framework which consists of a component-based architecture, modelling methods, and simulation services to support and simplify the construction of complex simulation applications. Moreover, a standard process and simulation tools are developed to ensure the rapid and effective development of simulation applications.

  8. Large-Scale Experiments in a Sandy Aquifer in Denmark

    DEFF Research Database (Denmark)

    Jensen, Karsten Høgh; Bitsch, Karen Bue; Bjerg, Poul Løgstrup

    1993-01-01

    A large-scale natural gradient dispersion experiment was carried out in a sandy aquifer in the western part of Denmark using tritium and chloride as tracers. For both plumes a marked spreading was observed in the longitudinal direction while the spreading in the transverse horizontal and transverse...... vertical directions was very small. The horizontal transport parameters of the advection-dispersion equation were investigated by applying an optimization model to observed breakthrough curves of tritium representing depth averaged concentrations. No clear trend in dispersion parameters with travel...

  9. Large-scale sodium spray fire code validation (SOFICOV) test

    International Nuclear Information System (INIS)

    Jeppson, D.W.; Muhlestein, L.D.

    1985-01-01

    A large-scale sodium spray fire code validation test was performed in the HEDL 850-m³ Containment System Test Facility (CSTF) as part of the Sodium Spray Fire Code Validation (SOFICOV) program. Six hundred fifty-eight kilograms of sodium was sprayed into an air atmosphere over a period of 2400 s. The sodium spray droplet sizes and spray pattern distribution were estimated. The containment atmosphere temperature and pressure response, containment wall temperature response and sodium reaction rate with oxygen were measured. These results are compared to post-test predictions using the SPRAY and NACOM computer codes

  10. Reliability Calculation of Large-scale Complex Initiation Network

    Science.gov (United States)

    Li, Xinjian; Yang, Jun; Yan, Bingqiang; Zheng, Xiao

    2018-02-01

    A method was proposed to calculate the reliability of the bundle-series compound initiation network, which is widely used for large-scale demolition blasting in China. The network was defined as reliable only when all the second-level Nonel detonator joints outside the blasting holes were initiated. Based on this definition, a series of equations was derived to calculate the reliability of the complex initiation network. A program was written in Matlab to solve the equations. The method showed good performance, requiring far fewer computations than traditional approaches.
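
    The abstract does not give the equations themselves, but the flavour of such a calculation can be sketched: for a bundle-series network in which every second-level joint must fire, the network reliability is the product over joints of the probability that at least one detonator in the incoming bundle initiates that joint. The detonator reliability and bundle sizes below are hypothetical, and the paper's Matlab program is not reproduced; this is a simplified Python illustration.

```python
def joint_reliability(p_det, n_incoming):
    """Probability that a joint fires, given n_incoming detonators each
    initiating it independently with probability p_det (parallel redundancy)."""
    return 1.0 - (1.0 - p_det) ** n_incoming

def network_reliability(p_det, bundle_sizes):
    """Series combination over all second-level joints: the network is
    reliable only if every joint fires."""
    r = 1.0
    for n in bundle_sizes:
        r *= joint_reliability(p_det, n)
    return r

# Hypothetical data: 200 second-level joints, each fed by a bundle of 4 detonators,
# each detonator firing with probability 0.999.
print(network_reliability(0.999, [4] * 200))
```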

  11. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred

    2010-08-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster. © 2010 IEEE.

  12. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  13. Challenges and options for large scale integration of wind power

    International Nuclear Information System (INIS)

    Tande, John Olav Giaever

    2006-01-01

    Challenges and options for large-scale integration of wind power are examined. Immediate challenges are related to weak grids. Assessment of system stability requires numerical simulation. Models are being developed - validation is essential. Coordination of wind and hydro generation is a key to allowing more wind power capacity in areas with limited transmission corridors. For the case-study grid, depending on technology and control, the allowed wind farm size is increased from 50 to 200 MW. The real-life example from 8 January 2005 demonstrates that existing market-based mechanisms can handle large amounts of wind power. In wind integration studies it is essential to take account of the controllability of modern wind farms, the power system flexibility and the smoothing effect of geographically dispersed wind farms. Modern wind farms contribute to system adequacy - combining wind and hydro constitutes a win-win system (ml)

  14. Towards large-scale plasma-assisted synthesis of nanowires

    Science.gov (United States)

    Cvelbar, U.

    2011-05-01

    Large quantities of nanomaterials, e.g. nanowires (NWs), are needed to overcome the high market price of nanomaterials and make nanotechnology widely available for general public use and applications to numerous devices. Therefore, there is an enormous need for new methods or routes for synthesis of those nanostructures. Here plasma technologies for synthesis of NWs, nanotubes, nanoparticles or other nanostructures might play a key role in the near future. This paper presents a three-dimensional problem of large-scale synthesis connected with the time, quantity and quality of nanostructures. Herein, four different plasma methods for NW synthesis are presented in contrast to other methods, e.g. thermal processes, chemical vapour deposition or wet chemical processes. The pros and cons are discussed in detail for the case of two metal oxides: iron oxide and zinc oxide NWs, which are important for many applications.

  15. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
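
    To make the two-stage idea concrete, the toy sketch below solves a small stochastic program by Monte Carlo sampling of a random demand and solving the resulting deterministic-equivalent linear program. It is a didactic illustration with invented numbers, not the decomposition and importance-sampling algorithm of the report, and it uses scipy.optimize.linprog only for convenience.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S = 200                                   # number of sampled demand scenarios
demand = rng.normal(100.0, 25.0, S).clip(min=0.0)
cost, price = 1.0, 1.5                    # first-stage unit cost, second-stage selling price

# Decision vector z = [x, y_1, ..., y_S]: order quantity x, sales y_s in scenario s.
# Minimize  cost*x - (1/S) * sum_s price*y_s   (i.e. maximize expected profit).
c = np.concatenate(([cost], -price / S * np.ones(S)))

# Recourse constraints y_s <= x  ->  -x + y_s <= 0.
A_ub = np.hstack([-np.ones((S, 1)), np.eye(S)])
b_ub = np.zeros(S)

bounds = [(0.0, None)] + [(0.0, d) for d in demand]   # 0 <= y_s <= demand_s
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal first-stage order quantity:", round(res.x[0], 1))
```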

  16. Magnetic Properties of Large-Scale Nanostructured Graphene Systems

    DEFF Research Database (Denmark)

    Gregersen, Søren Schou

    The on-going progress in two-dimensional (2D) materials and nanostructure fabrication motivates the study of altered and combined materials. Graphene—the most studied material of the 2D family—displays unique electronic and spintronic properties. Exceptionally high electron mobilities, which surpass...... those in conventional materials such as silicon, make graphene a very interesting material for high-speed electronics. Simultaneously, long spin-diffusion lengths and spin lifetimes make graphene an eligible spin-transport channel. In this thesis, we explore fundamental features of nanostructured...... graphene systems using large-scale modeling techniques. Graphene perforations, or antidots, have received substantial interest in the prospect of opening large band gaps in the otherwise gapless graphene. Motivated by recent improvements of fabrication processes, such as forming graphene antidots and layer...

  17. A large-scale evaluation of computational protein function prediction.

    Science.gov (United States)

    Radivojac, Predrag; Clark, Wyatt T; Oron, Tal Ronnen; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kaßner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-03-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools.

  18. Safeguarding of large scale reprocessing and MOX plants

    International Nuclear Information System (INIS)

    Howsley, R.; Burrows, B.; Longevialle, H. de; Kuroi, H.; Izumi, A.

    1997-01-01

    In May 1997, the IAEA Board of Governors approved the final measures of the ''93+2'' safeguards strengthening programme, thus improving the international non-proliferation regime by enhancing the effectiveness and efficiency of safeguards verification. These enhancements are not, however, a revolution in current practices, but rather an important step in the continuous evolution of the safeguards system. The principles embodied in 93+2, for broader access to information and increased physical access, already apply, in a pragmatic way, to large-scale reprocessing and MOX fabrication plants. In these plants, qualitative measures and process monitoring play an important role, in addition to accountancy and material balance evaluations, in attaining the safeguards goals. This paper will reflect on the safeguards approaches adopted for these large bulk-handling facilities and draw analogies, conclusions and lessons for the forthcoming implementation of the 93+2 Programme. (author)

  19. Large-scale demonstration of waste solidification in saltstone

    International Nuclear Information System (INIS)

    McIntyre, P.F.; Oblath, S.B.; Wilhite, E.L.

    1988-05-01

    The saltstone lysimeters are a large-scale demonstration of a disposal concept for decontaminated salt solution resulting from in-tank processing of defense waste. The lysimeter experiment has provided data on the leaching behavior of large saltstone monoliths under realistic field conditions. The results will also be used to compare the effect of capping the wasteform on contaminant release. Biweekly monitoring of sump leachate from three lysimeters has continued on a routine basis for approximately 3 years. An uncapped lysimeter has shown the highest levels of nitrate and 99Tc release. Gravel- and clay-capped lysimeters have shown levels equivalent to or slightly higher than background rainwater levels. Mathematical model predictions have been compared to lysimeter results. The models will be applied to predict the impact of saltstone disposal on groundwater quality. 9 refs., 5 figs., 3 tabs

  20. Large-scale transport across narrow gaps in rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Guellouz, M.S.; Tavoularis, S. [Univ. of Ottawa (Canada)

    1995-09-01

    Flow visualization and hot-wire anemometry were used to investigate the velocity field in a rectangular channel containing a single cylindrical rod, which could be traversed on the centreplane to form gaps of different widths with the plane wall. The presence of large-scale, quasi-periodic structures in the vicinity of the gap has been demonstrated through flow visualization, spectral analysis and space-time correlation measurements. These structures are seen to exist even for relatively large gaps, at least up to W/D=1.350 (W is the sum of the rod diameter, D, and the gap width). The above measurements appear to be compatible with the field of a street of three-dimensional, counter-rotating vortices, whose detailed structure, however, remains to be determined. The convection speed and the streamwise spacing of these vortices have been determined as functions of the gap size.

  1. Electrical Breakdown of Hydrogen and Helium on Subnanosecond Time Scales

    National Research Council Canada - National Science Library

    Gahl, J

    2000-01-01

    .... Therefore, electrical field emission plays a very important role in these discharges. This report also includes a theoretical determination of the effective electric field for an arbitrary electromagnetic pulse...

  2. An effective method of large scale ontology matching.

    Science.gov (United States)

    Diallo, Gayo

    2014-01-01

    We are currently facing a proliferation of heterogeneous biomedical data sources accessible through various knowledge-based applications. These data are annotated by increasingly extensive and widely disseminated knowledge organisation systems, ranging from simple terminologies and structured vocabularies to formal ontologies. In order to solve the interoperability issue, which arises due to the heterogeneity of these ontologies, an alignment task is usually performed. However, while significant effort has been made to provide tools that automatically align small ontologies containing hundreds or thousands of entities, little attention has been paid to the matching of large ontologies in the life sciences domain. We have designed and implemented ServOMap, an effective method for large-scale ontology matching. It is a fast and efficient high-precision system able to match input ontologies containing hundreds of thousands of entities. The system, which was included in the 2012 and 2013 editions of the Ontology Alignment Evaluation Initiative campaign, performed very well and was ranked among the top systems for large ontology matching. We proposed an approach for large-scale ontology matching relying on Information Retrieval (IR) techniques and the combination of lexical and machine-learning contextual similarity computing for the generation of candidate mappings. It is particularly adapted to the life sciences domain, as many ontologies in this domain benefit from synonym terms taken from the Unified Medical Language System that can be used by our IR strategy. The ServOMap system we implemented is able to deal with hundreds of thousands of entities with an efficient computation time.
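
    ServOMap itself is not reimplemented here, but the IR flavour of a lexical matching step can be illustrated with a small, hypothetical sketch: index the labels of one ontology as character-n-gram TF-IDF vectors and retrieve the most similar label from the other ontology as a candidate mapping. The labels, the threshold and the scikit-learn-based approach are illustrative assumptions, not the system's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical entity labels from two ontologies to be aligned.
onto_a = ["myocardial infarction", "hepatic cirrhosis", "renal failure"]
onto_b = ["infarction of myocardium", "cirrhosis of liver", "kidney failure", "asthma"]

# Character n-grams are robust to word order and small spelling variations.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
matrix = vec.fit_transform(onto_a + onto_b)
sims = cosine_similarity(matrix[: len(onto_a)], matrix[len(onto_a):])

threshold = 0.3
for i, label_a in enumerate(onto_a):
    j = sims[i].argmax()
    if sims[i, j] >= threshold:
        print(f"{label_a}  <->  {onto_b[j]}   (score {sims[i, j]:.2f})")
```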

  3. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Full Text Available Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) between climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
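
    A minimal, model-independent screening of the kind described above can be expressed in a few lines: flag basins whose long-term runoff coefficient exceeds one (more discharge than precipitation) or whose apparent losses exceed the potential-evaporation limit. The numbers below are made up for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical long-term means per basin [mm/yr]: precipitation P, discharge Q,
# and potential evaporation Ep.
P  = np.array([800.0, 450.0, 1200.0, 600.0])
Q  = np.array([300.0, 500.0,  400.0, 100.0])
Ep = np.array([700.0, 600.0,  900.0, 350.0])

runoff_coeff = Q / P
losses = P - Q                                # apparent loss under water-balance closure

too_much_runoff = runoff_coeff > 1.0          # possible precipitation undercatch (e.g. snow)
losses_above_ep = losses > Ep                 # losses exceed the potential-evaporation limit

for i in range(P.size):
    if too_much_runoff[i] or losses_above_ep[i]:
        print(f"basin {i}: runoff coefficient {runoff_coeff[i]:.2f}, "
              f"loss {losses[i]:.0f} mm/yr vs Ep {Ep[i]:.0f} mm/yr -> potentially disinformative")
```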

  4. On transport in formations of large heterogeneity scales

    International Nuclear Information System (INIS)

    Dagan, Gedeon

    1990-01-01

    It has been suggested that in transport through heterogeneous aquifers, the effective dispersivity increases with the travel distance, since plumes encounter heterogeneity of increasing scales. This conclusion is underlain, however, by the assumption of ergodicity. If the plume is viewed as made up of different particles, this means that these particles move independently from a statistical point of view. To satisfy ergodicity the solute body has to be of a much larger extent than the heterogeneity scales. Thus, if the latter keep increasing and the solute body is finite, ergodicity cannot be obeyed. To demonstrate this thesis we consider the two-dimensional heterogeneity associated with transmissivity variations in the horizontal plane. First, the effective dispersion coefficient is defined as half the rate of change of the expected value of the second spatial moment of the solute body relative to its centroid. Subsequently the asymptotic large-time limit of the dispersivity is evaluated in terms of the log-transmissivity integral scale and of the dimensions of the initial solute body in the direction of mean flow and normal to it. It is shown that for a thin plume aligned with the mean flow the effective dispersivity is zero and the effect of heterogeneity is a slight and finite expansion determined solely by the solute body size. In the case of a solute body transverse to the mean flow the effective dispersivity is different from zero, but has a maximal value which is again dependent on the solute body size and not on the heterogeneity scale. It is concluded that, from a theoretical standpoint and for the definition of dispersivity adopted here for non-ergodic conditions, the claim of ever-increasing dispersivity with travel distance is not valid for the scale of heterogeneity analyzed here. (Author) (21 refs., 6 figs.)
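
    For clarity, the definition used above can be written out explicitly in our own notation (the symbols below are ours, not taken from the paper):

```latex
D_{ij}(t) = \frac{1}{2}\,\frac{\mathrm{d}}{\mathrm{d}t}\, X_{ij}(t),
\qquad
\alpha_{L}(t) = \frac{D_{11}(t)}{U},
```

    where $X_{ij}(t)$ is the expected value of the second spatial moment of the solute body about its centroid, $U$ is the magnitude of the mean velocity, and direction 1 is the direction of mean flow.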

  5. A mouse model for studying large-scale neuronal networks using EEG mapping techniques.

    Science.gov (United States)

    Mégevand, Pierre; Quairiaux, Charles; Lascano, Agustina M; Kiss, Jozsef Z; Michel, Christoph M

    2008-08-15

    Human functional imaging studies are increasingly focusing on the identification of large-scale neuronal networks, their temporal properties, their development, and their plasticity and recovery after brain lesions. A method targeting large-scale networks in rodents would open the possibility to investigate their neuronal and molecular basis in detail. We here present a method to study such networks in mice with minimal invasiveness, based on the simultaneous recording of epicranial EEG from 32 electrodes regularly distributed over the head surface. Spatiotemporal analysis of the electrical potential maps similar to human EEG imaging studies allows quantifying the dynamics of the global neuronal activation with sub-millisecond resolution. We tested the feasibility, stability and reproducibility of the method by recording the electrical activity evoked by mechanical stimulation of the mystacial vibrissae. We found a series of potential maps with different spatial configurations that suggested the activation of a large-scale network with generators in several somatosensory and motor areas of both hemispheres. The spatiotemporal activation pattern was stable both across mice and in the same mouse across time. We also performed 16-channel intracortical recordings of the local field potential across cortical layers in different brain areas and found tight spatiotemporal concordance with the generators estimated from the epicranial maps. Epicranial EEG mapping thus allows assessing sensory processing by large-scale neuronal networks in living mice with minimal invasiveness, complementing existing approaches to study the neurophysiological mechanisms of interaction within the network in detail and to characterize their developmental, experience-dependent and lesion-induced plasticity in normal and transgenic animals.
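
    One standard quantity in this kind of spatiotemporal map analysis is the global field power (GFP), the spatial standard deviation of the potential map at each time point; map topographies are then typically compared around GFP peaks. The sketch below computes GFP from a hypothetical (channels x samples) array; the sampling rate and the data are invented, and this is not the authors' analysis code.

```python
import numpy as np

def global_field_power(eeg):
    """Spatial standard deviation of the potential map at each time point.
    eeg: array of shape (n_channels, n_samples); an average reference is applied."""
    ref = eeg - eeg.mean(axis=0, keepdims=True)   # re-reference to the common average
    return ref.std(axis=0)

# Hypothetical epicranial recording: 32 channels, 500 samples at 1 kHz.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((32, 500))
gfp = global_field_power(eeg)
peak_latency_ms = gfp.argmax()                    # 1 sample = 1 ms at 1 kHz
print("largest map strength at", peak_latency_ms, "ms after stimulus")
```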

  6. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    Energy Technology Data Exchange (ETDEWEB)

    Baldick, Ross; Webber, Michael; King, Carey; Garrison, Jared; Cohen, Stuart; Lee, Duehee

    2012-12-21

    This study's objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate up to and over 20% wind generation by 2030. Our specific interest is to look at the factors that will affect the implementation of both a high level of wind power penetration (>20% of generation) and the installation of large-scale storage.

  7. Large eddy simulation of very-large-scale motions in the neutrally stratified atmospheric boundary layer

    Science.gov (United States)

    Fang, Jiannong; Porté-Agel, Fernando

    2014-05-01

    Large eddy simulation was used to investigate the very-large-scale motions (VLSM) in the neutrally stratified atmospheric boundary layer at a very high friction Reynolds number. The vertical height of the computational domain is Lz = 1000 m, which corresponds to the thickness of the boundary layer. The horizontal dimensions of the simulation domain are chosen to be Lx = 32Lz and Ly = 4Lz respectively, in order to contain a sufficient number of large-scale structures. The spatially coherent structures associated with VLSM are characterized through flow visualization and statistical analysis. The instantaneous velocity fields in streamwise/spanwise planes give evidence of streamwise-elongated zones of low-speed fluid with negative streamwise velocity fluctuation, which are flanked on either side by similarly elongated high-speed zones. The pre-multiplied power spectra and two-point correlations indicate that the scales of these streak-like structures are very large, up to 20Lz in the streamwise direction and Lz in the spanwise direction. These features are similar to what has been found in the logarithmic region of laboratory-scale boundary layers by direct numerical simulations and experiments conducted at low to moderate Reynolds numbers. The three-dimensional correlation map and conditional averages of the three components of velocity further indicate that the low-speed and high-speed regions possess the same elongated ellipsoid-like structure, which is inclined upward along the streamwise direction, and they are accompanied by counter-rotating roll modes in the cross section perpendicular to the streamwise direction. These findings are in agreement with recent observations made from field campaigns in the atmospheric boundary layer.
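
    As a complement to the spectra, the two-point correlation used above to measure streak length can be illustrated with a short sketch (synthetic periodic signal and hypothetical sizes, not the simulation data): the streamwise autocorrelation of the velocity fluctuation is obtained efficiently through the FFT, and the streak length is often estimated from the separation at which the correlation drops below a small threshold.

```python
import numpy as np

def streamwise_autocorrelation(u):
    """Normalized two-point autocorrelation R_uu(dx) of a periodic 1-D signal."""
    f = u - u.mean()
    spec = np.abs(np.fft.rfft(f)) ** 2          # power spectrum
    corr = np.fft.irfft(spec, n=f.size)         # Wiener-Khinchin: inverse FFT of the spectrum
    return corr / corr[0]                       # normalize so that R_uu(0) = 1

# Hypothetical signal with a very long streamwise wavelength plus noise.
n, Lx = 8192, 32.0                              # samples and domain length in units of Lz
x = np.linspace(0.0, Lx, n, endpoint=False)
u = np.cos(2.0 * np.pi * x / 20.0) + 0.2 * np.random.randn(n)
R = streamwise_autocorrelation(u)

dx = x[1] - x[0]
first_crossing = np.argmax(R < 0.05) * dx       # separation where correlation falls below 0.05
print(f"correlation drops below 0.05 at a separation of about {first_crossing:.1f} Lz")
```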

  8. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    Science.gov (United States)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    When it comes to large-scale mapping of limited areas, especially for cultural heritage sites, things become critical. Optical and non-optical sensors have been developed to such sizes and weights that they can be lifted by such platforms, e.g. LiDAR units. At the same time there is an increased emphasis on solutions that enable users to get access to 3D information faster and cheaper. Considering the multitude of platforms and cameras and the advancement of algorithms, in conjunction with the increase of available computing power, this challenge should be, and indeed is being, further investigated. In this paper a short review of UAS technologies today is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The available on-board cameras are also compared and evaluated for large-scale mapping. Furthermore, a thorough analysis, review and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is also conducted and presented. As a test data set, we use rich optical and thermal data from both fixed-wing and multi-rotor platforms over an archaeological excavation with adverse height variations, using different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  9. ANTITRUST ISSUES IN THE LARGE-SCALE FOOD DISTRIBUTION SECTOR

    Directory of Open Access Journals (Sweden)

    Enrico Adriano Raffaelli

    2014-12-01

    Full Text Available In light of the slow modernization of the Italian large-scale food distribution sector, of its fragmentation at the national level, of the significant role of cooperatives at the local level and of the alliances between food retail chains, the ICA has in recent years developed a strong interest in this sector. After having analyzed the peculiarities of the Italian large-scale food distribution sector, this article shows the recent approach taken by the ICA toward the main antitrust issues in this sector. In the analysis of such issues, mainly the contractual relations between the GDO retailers and their suppliers, the introduction of Article 62 of Law no. 27 dated 24th March 2012 is crucial, because, by facilitating and encouraging complaints by the interested parties, it should allow the development of normal competitive dynamics within the food distribution sector, where companies should be free to enter the market using the tools at their disposal, without undue restrictions.

  10. The combustion behavior of large scale lithium titanate battery

    Science.gov (United States)

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety problems remain a major obstacle to the large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation of the flame size is depicted to analyze the combustion behavior directly, while the mass loss rate, temperature and heat release rate are used to analyze the underlying reactions in more depth. Based on these observations, the combustion process is divided into three basic stages; at higher SOC it becomes more complicated, with sudden ejections of smoke. The reason is a phase change in the Li(NixCoyMnz)O2 material from a layered structure to a spinel structure. For all cells, the critical ignition temperatures are 112–121°C on the anode tab and 139–147°C on the upper surface, but the heating time and combustion time become shorter with increasing SOC. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference. PMID:25586064

  11. Modulation of energetic coherent motions by large-scale topography

    Science.gov (United States)

    Lai, Wing; Hamed, Ali M.; Troolin, Dan; Chamorro, Leonardo P.

    2016-11-01

    The distinctive characteristics and dynamics of the large-scale coherent motions induced over 2D and 3D large-scale wavy walls were explored experimentally with time-resolved volumetric PIV, and selected wall-normal high-resolution stereo PIV, in a refractive-index-matching channel. The 2D wall consists of a sinusoidal wave in the streamwise direction with amplitude-to-wavelength ratio a/λx = 0.05, while the 3D wall has an additional wave in the spanwise direction with a/λy = 0.1. The flow was characterized at a Reynolds number of approximately 8000, based on the bulk velocity and the channel half height. The walls are such that the amplitude-to-boundary-layer-thickness ratio is a/δ99 ≈ 0.1, which resembles geophysical-like topography. Insight on the dynamics of the coherent motions, Reynolds stress and spatial interaction of sweep and ejection events will be discussed in terms of the wall topography modulation.

  12. Practical considerations for large-scale gut microbiome studies.

    Science.gov (United States)

    Vandeputte, Doris; Tito, Raul Y; Vanleeuwen, Rianne; Falony, Gwen; Raes, Jeroen

    2017-08-01

    First insights on the human gut microbiome have been gained from medium-sized, cross-sectional studies. However, given the modest portion of explained variance of currently identified covariates and the small effect size of gut microbiota modulation strategies, upscaling seems essential for further discovery and characterisation of the multiple influencing factors and their relative contribution. In order to guide future research projects and standardisation efforts, we here review currently applied collection and preservation methods for gut microbiome research. We discuss aspects such as sample quality, applicable omics techniques, user experience and time and cost efficiency. In addition, we evaluate the protocols of a large-scale microbiome cohort initiative, the Flemish Gut Flora Project, to give an idea of perspectives, and pitfalls of large-scale faecal sampling studies. Although cryopreservation can be regarded as the gold standard, freezing protocols generally require more resources due to cold chain management. However, here we show that much can be gained from an optimised transport chain and sample aliquoting before freezing. Other protocols can be useful as long as they preserve the microbial signature of a sample such that relevant conclusions can be drawn regarding the research question, and the obtained data are stable and reproducible over time. © FEMS 2017.

  13. Development of large-scale functional brain networks in children.

    Directory of Open Access Journals (Sweden)

    Kaustubh Supekar

    2009-07-01

    Full Text Available The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.
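
    Network measures such as path length and clustering coefficient, as reported above, can be computed with standard graph libraries. The sketch below does this for a hypothetical thresholded functional-connectivity matrix (random data and an illustrative threshold), not the study's actual fMRI networks.

```python
import numpy as np
import networkx as nx

# Hypothetical functional connectivity: symmetric correlation matrix for 90 regions.
rng = np.random.default_rng(2)
corr = rng.uniform(-1.0, 1.0, size=(90, 90))
corr = (corr + corr.T) / 2.0
np.fill_diagonal(corr, 0.0)

# Threshold to obtain a binary undirected graph.
adjacency = (np.abs(corr) > 0.6).astype(int)
G = nx.from_numpy_array(adjacency)

# Restrict to the largest connected component so path length is defined.
giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()

print("clustering coefficient:", round(nx.average_clustering(giant), 3))
print("characteristic path length:", round(nx.average_shortest_path_length(giant), 3))
```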

  14. EFT of large scale structures in redshift space

    Science.gov (United States)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun

    2018-03-01

    We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ=6 . We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k —depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z =0.56 and up to ℓ=2 matches the data at the percent level approximately up to k ˜0.13 h Mpc-1 or k ˜0.18 h Mpc-1 , depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
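
    For reference, the multipoles ℓ compared against simulations above are the standard Legendre moments of the anisotropic redshift-space power spectrum; in our notation (not reproduced from the paper),

```latex
P_\ell(k) = \frac{2\ell + 1}{2} \int_{-1}^{1} \mathrm{d}\mu \; P(k, \mu)\, \mathcal{L}_\ell(\mu),
```

    where $\mu$ is the cosine of the angle between the wavevector and the line of sight and $\mathcal{L}_\ell$ is the Legendre polynomial of degree $\ell$.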

  15. IP over optical multicasting for large-scale video delivery

    Science.gov (United States)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, as it can significantly improve bandwidth efficiency. However, the scalability and signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The China 3TNet project aimed to build a high-performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss challenges in such IP over optical multicasting for video delivery.

  16. Statistical Analysis of Large-Scale Structure of Universe

    Science.gov (United States)

    Tugay, A. V.

    While galaxy cluster catalogs were compiled many decades ago, other structural elements of the cosmic web have been detected at a definite level only in the most recent works. For example, extragalactic filaments have been described by velocity fields and the SDSS galaxy distribution in recent years. The large-scale structure of the Universe could also be mapped in the future using ATHENA observations in X-rays and SKA observations in the radio band. Until detailed observations are available for most of the volume of the Universe, some integral statistical parameters can be used for its description. Methods such as the galaxy correlation function, power spectrum, statistical moments and peak statistics are commonly used for this purpose. The parameters of the power spectrum and other statistics are important for constraining models of dark matter, dark energy, inflation and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in the one- and three-dimensional cases with Fourier harmonics of the hydrodynamical parameters. As a result we obtain a power-law relation for the matter power spectrum.

  17. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    Second generation biorefineries transform agricultural wastes into biochemicals with higher added value, e.g. bioethanol, which is thought to become a primary component in liquid fuels [1]. Extensive endeavors have been conducted to make the production process feasible on a large scale, and recen......-time monitoring. The Inbicon biorefinery converts wheat straw into bioethanol utilizing steam, enzymes, and genetically modified yeast. The biomass is first pretreated in a steam pressurized and continuous thermal reactor where lignin is relocated, and hemicellulose partially hydrolyzed such that cellulose...... becomes more accessible to enzymes. The biorefinery is integrated with a nearby power plant following the Integrated Biomass Utilization System (IBUS) principle for reducing steam costs [4]. During the pretreatment, by-products are also created such as organic acids, furfural, and pseudo-lignin, which act

  18. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    Science.gov (United States)

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  19. Network placement optimization for large-scale distributed system

    Science.gov (United States)

    Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng

    2018-01-01

    The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy, and overall cost. Network placement optimization is therefore a pressing issue in distributed measurement, particularly in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy, and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed. The network placement is thus optimized by a global rough search followed by a local detailed search, with the clear advantage that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a given network and design the optimal placement efficiently.
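
    The combination of grid-based encoding and genetic search can be sketched as follows. This is a generic illustration under assumed parameters (grid, tracker range, population size), not the authors' encoding or objective function, which also quantifies coordinate uncertainty; here the fitness simply rewards target coverage and penalises the number of trackers used.

```python
# Hedged sketch of the general idea (grid-based encoding + genetic search), not the
# authors' exact algorithm. An individual is a list of grid-cell indices, one per tracker.
import random

GRID = [(x, y) for x in range(10) for y in range(10)]      # candidate tracker sites
TARGETS = [(random.uniform(0, 9), random.uniform(0, 9)) for _ in range(60)]
RANGE, N_TRACKERS, POP, GENS = 3.0, 4, 40, 80

def covered(placement):
    """Fraction of target points within RANGE of at least one tracker."""
    hits = sum(any((tx - gx)**2 + (ty - gy)**2 <= RANGE**2 for gx, gy in placement)
               for tx, ty in TARGETS)
    return hits / len(TARGETS)

def fitness(ind):
    placement = [GRID[i] for i in ind]
    return covered(placement) - 0.01 * len(set(ind))   # coverage minus a small cost term

def crossover(a, b):
    cut = random.randrange(1, N_TRACKERS)
    return a[:cut] + b[cut:]

def mutate(ind):
    ind = list(ind)
    ind[random.randrange(N_TRACKERS)] = random.randrange(len(GRID))
    return ind

# global rough search: evolve the population; a local refinement pass could follow
population = [[random.randrange(len(GRID)) for _ in range(N_TRACKERS)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP // 2]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    population = elite + children

best = max(population, key=fitness)
print("best placement:", [GRID[i] for i in best],
      "coverage:", covered([GRID[i] for i in best]))
```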

  20. Glass badge dosimetry system for large scale personal monitoring

    International Nuclear Information System (INIS)

    Norimichi Juto

    2002-01-01

    The Glass Badge, which uses a silver-activated phosphate glass dosemeter, was developed specifically for large-scale personal monitoring, together with dosimetry system components such as an automatic reader and a dose equivalent calculation algorithm to make such monitoring practical. In large-scale personal monitoring, both dosimetric precision and reliable handling of large amounts of personal data are very important. The silver-activated phosphate glass dosemeter has excellent basic characteristics for dosimetry, such as homogeneous and stable sensitivity and negligible fading. The Glass Badge was designed to measure photons in the 10 keV - 10 MeV range, beta radiation in the 300 keV - 3 MeV range, and neutrons in the 0.025 eV - 15 MeV range by means of an included solid-state nuclear track detector (SSNTD). The Glass Badge dosimetry system offers not only these basic characteristics but also many features that maintain good precision in dosimetry and data handling. In this presentation, the features of Glass Badge dosimetry systems and examples of practical personal monitoring systems are presented. (Author)
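
    The record mentions an automatic reader and a dose equivalent calculation algorithm but gives no details, so the following readout-to-dose conversion (background subtraction plus a calibration factor, with a check against the quoted photon energy range) is purely hypothetical and is shown only to make the workflow concrete.

```python
# Toy illustration only: a generic readout-to-dose conversion. The function names,
# calibration factor, and counts are invented; only the photon energy range
# (10 keV - 10 MeV) comes from the record above.

def photon_dose_mSv(readout_counts: float, background_counts: float,
                    calibration_mSv_per_count: float) -> float:
    """Convert a glass-element readout into an estimated photon dose."""
    net = max(readout_counts - background_counts, 0.0)   # background subtraction
    return net * calibration_mSv_per_count

def in_photon_range(energy_keV: float) -> bool:
    """Photon energy range quoted in the record: 10 keV to 10 MeV."""
    return 10.0 <= energy_keV <= 10_000.0

if __name__ == "__main__":
    dose = photon_dose_mSv(readout_counts=1520.0, background_counts=20.0,
                           calibration_mSv_per_count=0.001)
    print(f"estimated dose: {dose:.2f} mSv, 662 keV in range: {in_photon_range(662.0)}")
```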